Liars and Trolls and Bots Online: The Problem of Fake Persons

Keith Raymond Harris explores the role of ‘fake persons’—bots and trolls—in online spaces and their deleterious impact on our acquisition and distribution of knowledge. Situating his analysis in a technological ecosystem increasingly swamped by these artificial entities, the author dissects the intricate issues raised by ‘fake persons’ into three distinct yet interwoven threats: deceptive, skeptical, and epistemic.

The deceptive threat concerns how bots and trolls propagate false information and craft misleading impressions of consensus through manipulated metrics such as shares, likes, and comments. This deceptive veneer distorts perceptions of reality and leads to the formation of misguided beliefs. The skeptical threat, on the other hand, stems from awareness that the online environment is infested with these deceitful entities. That awareness breeds a pervasive skepticism, a defensive mechanism that can result in the dismissal of valid evidence and an overall erosion of trust in online information. This skepticism, though justifiable, can have the unintended effect of isolating individuals from genuine sources of knowledge.

Further complicating this scenario is the epistemic threat. The author draws a striking analogy between the online world inhabited by ‘fake persons’ and a natural environment populated by ‘mimic species’. In the latter, the evidential significance of traits commonly used to identify a species diminishes because mimics share those traits. Analogously, in an environment teeming with bots and trolls, the evidential value of certain signals depreciates, impairing our ability to discern ‘real’ persons. In this convoluted digital milieu, the credibility of evidence—along with the authenticity of users and the perceived consensus—becomes questionable.

Grounding these digital threats in the wider philosophical discourse, this research accentuates the intricate entanglement of epistemology and ontology in online spaces. It challenges traditional conceptions of identity, reality, and knowledge, echoing Baudrillard’s premonitions of hyperreality and simulation. The presence of ‘fake persons’ obfuscates the demarcation between the real and the artificial, leading to an epistemic crisis where distinguishing between genuine and fallacious information becomes a Herculean task. Furthermore, these digital distortions provoke a profound skepticism that resonates with Cartesian doubt, while simultaneously illustrating the pervasiveness of misinformation and disinformation, reflecting the post-truth era’s cynicism. This research, hence, not only deepens our understanding of the digital world’s complexities but also underscores the shifting epistemic and ontological paradigms in the internet age.

As we navigate through this rapidly mutating digital landscape, the author’s research underscores the urgent need for further exploration. While technological solutions might offer some respite, they cannot completely eradicate these pervasive threats. Future research, therefore, should venture into developing more robust epistemological frameworks that accommodate these digital complexities. It should aim to delve into the philosophy of digital identities, exploring how they are constructed, perceived, and interacted with. There’s also a pressing need for studies that examine the intersection of ethics, technology, and epistemology, especially in the context of ‘fake persons’. Such research would not only enrich the theoretical discourse but could also guide the creation of more ethical and reliable digital spaces.

Abstract

This paper describes the ways in which trolls and bots impede the acquisition of knowledge online. I distinguish between three ways in which trolls and bots can impede knowledge acquisition, namely, by deceiving, by encouraging misplaced skepticism, and by interfering with the acquisition of warrant concerning persons and content encountered online. I argue that these threats are difficult to resist simultaneously. I argue, further, that the threat that trolls and bots pose to knowledge acquisition goes beyond the mere threat of online misinformation, or the more familiar threat posed by liars offline. Trolls and bots are, in effect, fake persons. Consequently, trolls and bots can systemically interfere with knowledge acquisition by manipulating the signals whereby individuals acquire knowledge from one another online. I conclude with a brief discussion of some possible remedies for the problem of fake persons.
