The epistemic impossibility of an artificial intelligence take-over of democracy

Daniel Innerarity explores the limits of algorithmic governance in relation to democratic decision-making. He argues that algorithms operate with a 0/1 logic that is the opposite of ambiguity, and that they are therefore suited only to problems that are well structured and quantifiable; they cannot handle complex problems that lack these features. Politics, by contrast, consists of making decisions in the absence of indisputable evidence, and political decisions are rarely based on binary categories, so algorithms are of limited utility in such circumstances. Algorithmic rationality reduces the complexity of social phenomena to numbers, which makes it inappropriate for democratic decision-making.

The author suggests that the epistemological principle of uncertainty is central to democratic institutions. Democracy exists precisely because our knowledge is so limited and we are so prone to error. Precisely where our knowledge is incomplete, we have greater need for institutions and procedures that favour reflection, debate, criticism, independent advice, reasoned argumentation, and the competition of ideas and visions. Our democratic institutions are not an exhibition of how much we know but a recognition of our ignorance.

The research presented in this paper is significant for broader philosophical issues related to the relationship between knowledge, power, and democratic decision-making. It raises questions about the role of algorithms in decision-making and the limits of rationality in politics. It also highlights the importance of uncertainty, ambiguity, and contingency in democratic decision-making, which has important implications for the legitimacy of democratic institutions.

Future research could explore the implications of these findings for the development of democratic institutions and the role of algorithms in decision-making. It could also examine the role of uncertainty, ambiguity, and contingency in decision-making more broadly and their relationship to different philosophical traditions. Furthermore, it could explore the implications of these findings for the development of more participatory and deliberative forms of democracy that allow for greater reflection, debate, and criticism.

Abstract

Those who claim, whether with fear or with hope, that algorithmic governance can control politics or the whole political process, or that artificial intelligence is capable of taking charge of or wrecking democracy, recognize that this is not yet possible with our current technological capabilities but that it could come about in the future if we had better quality data or more powerful computational tools. Those who fear or desire this algorithmic suppression of democracy assume that something similar will be possible someday and that it is only a question of technological progress. If that were the case, no limits would be insurmountable in principle. I want to challenge that conception with a limit that is less normative than epistemological; there are things that artificial intelligence cannot do, because it is unable to do them, not because it should not do them, and this is particularly apparent in politics, which is a peculiar decision-making realm. Machines and people make decisions in a very different fashion. Human beings are particularly gifted in one type of situation and very clumsy in others. The part of politics that is, strictly speaking, political is where this contrast and our greatest aptitude are most apparent. If that is the case, as I believe, then the possibility that democracy will one day be taken over by artificial intelligence is, as a fear or as a desire, manifestly exaggerated. The counterpart to this is: if the fear that democracy could disappear at the hands of artificial intelligence is not realistic, then we should not expect exorbitant benefits from it either. For epistemic reasons that I will explain, it does not seem likely that artificial intelligence is capable of taking over political logic.
