Authors: Matthew Goerzen (Data & Society Research Institute), Jeanna Matthews (Clarkson University)
Abstract: This paper draws an analogy between trolling and hacking, discussing three trolling personas:
Black Hat Trolls: These individuals engage in malicious activities intended to undermine or harm their targets, often pushing agendas that contradict the interests of those they target. Their tactics are usually deceitful and harmful, causing negative repercussions in social and political spheres.
Gray Hat Trolls: These trolls exploit vulnerabilities not out of malicious intent but to draw attention to pressing issues that may have been overlooked. Their actions sometimes straddle the line between ethical and unethical behavior, as they may highlight systemic problems through contentious means.
White Hat Trolls: Acting constructively, white hat trolls disclose vulnerabilities in systems to help improve them and reduce opportunities for malicious exploitation. They aim to raise awareness and promote transparency regarding issues that threaten the integrity of digital environments.
Trolling techniques are sophisticated methods used to exert influence or provoke reactions, and include:
Hacking/Journobaiting: This involves luring mainstream media outlets into covering false claims, especially during crises, which can lead to widespread misinformation as the media amplifies unverified narratives.
Keyword Squatting: Trolls attach misleading information to trending keywords to manipulate search results and visibility, causing public discourse to shift based on false premises.
Denial of Service: By overwhelming platforms with excessive data or fake user interactions, trolls can disrupt normal discourse and frustrate efforts to engage meaningfully, leading to a breakdown in communication.
Sockpuppetry: This technique involves creating fake online personas, sometimes posing as real individuals or groups, to sway opinions and simulate authentic engagement in discussions.
Astroturfing: This deceptive practice fabricates the appearance of a grassroots movement to create the illusion of widespread public support for a particular agenda, misleading the public about how much genuine backing it has.
Deep Fakes: This uses machine-learning techniques to create manipulated media content, including videos and images, that can mislead viewers into believing false narratives or fabricated events.
Each technique serves a strategic purpose: influencing public perception, amplifying agendas, and sowing confusion among audiences.
Vulnerabilities Defined: In socio-technical systems, where technology intertwines with social dynamics, vulnerabilities are weaknesses that expose individuals and communities to manipulation.
Media Vulnerabilities: Editorial choices that prioritize sensationalism create weaknesses in the information ecosystem that trolls can exploit.
Economic Factors: Engagement-driven incentives lead platforms like Facebook and Twitter to amplify polarizing content, often resulting in the propagation of misinformation.
Information Spread: Distorted or sensational claims tend to propagate more rapidly than accurate information, complicating public understanding of critical issues (a toy model of this compounding dynamic is sketched below).
Poe's Law: This principle captures the difficulty of distinguishing extreme parody from sincere opinion in digital communication, a difficulty that deepens when context collapses.
Broader Feedback in Socio-Technical Systems: The ad tech economy rewards divisive content that maximizes clicks, so mitigating trolling requires addressing these underlying economic motivators.
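To make the claim about information spread concrete, the following toy branching model (an illustration added here, not taken from the paper) shows how a modest per-exposure advantage in reshare probability compounds into a much larger audience; the follower counts, probabilities, and number of rounds are arbitrary assumptions.

```python
def expected_reach(reshare_prob, followers_per_share=10, steps=8):
    """Expected audience after `steps` rounds of resharing in a toy
    branching model: each exposed person reshares with probability
    `reshare_prob`, and each reshare exposes `followers_per_share`
    new people. All parameter values are illustrative assumptions."""
    branching_factor = reshare_prob * followers_per_share
    newly_exposed = 1.0   # the post's very first viewer
    total = 1.0
    for _ in range(steps):
        newly_exposed *= branching_factor
        total += newly_exposed
    return round(total)

# A modest edge in per-exposure reshare rate compounds into a large
# difference in total reach over the same number of rounds.
print("accurate item  (10% reshare rate):", expected_reach(0.10))  # ~9 people
print("distorted item (15% reshare rate):", expected_reach(0.15))  # ~75 people
```

The point is only the shape of the dynamic: even a small difference in how shareable a distorted claim is can dominate total reach once amplification repeats over several rounds.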
Similar to hackers, trolls can be classified into categories based on their motivations and methods:
Black Hat Trolls: Often act with no regard for ethics or consequences, focusing solely on personal or political gain.
White Hat Trolls: Attempt to improve systems through constructive criticism, often aiming to inform the public of systemic failures and provide solutions.
Gray Hat Trolls: May engage in harmful methods but are often motivated by public interest and transparency, combining controversial actions with a desire to expose hard truths.
Engaging with the ethical debates surrounding trolling is crucial if the governance of media systems is to evolve and manage these behaviors effectively.
Media ethics traditionally stressed the importance of accuracy; however, contemporary practices often diverge from this standard due to factors such as:
Challenges of Anonymity: The anonymity that social media provides complicates the ethical landscape, opening avenues for manipulation and exploitation by individuals and groups.
Amplification Questions: Should the amplification of messages be controlled by algorithms, human editors, or the community at large?
Top-Down Interventions: Efforts may include banning bot accounts to reduce artificial engagement and restructuring economic incentives, through policymaking, to inhibit the virality of false news (an illustrative sketch of automated bot screening appears below).
Transparency in Moderation Policies: Transparent rules foster trust, but they can also give bad actors a clearer picture of how to game the system.
Bottom-Up Interventions: Empowering users through peer moderation and encouraging the use of countermemes can redirect harmful narratives and mitigate the influence of trolls.
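As a concrete illustration of the bot-banning intervention mentioned above, a moderation pipeline might score accounts on simple behavioral signals before routing them for human review. The sketch below is hypothetical: the signals, thresholds, and field names are assumptions for illustration, not a description of any platform's actual detection system.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Minimal, hypothetical view of an account's recent activity."""
    handle: str
    posts_per_day: float          # average over the review window
    duplicate_post_ratio: float   # share of posts repeating earlier text, 0..1
    account_age_days: int

def bot_likelihood(acct: Account) -> float:
    """Crude 0..1 score combining a few behavioral signals.

    This is an illustrative heuristic, not a real platform's detector:
    high posting volume, heavy text duplication, and very new accounts
    each nudge the score upward."""
    score = 0.0
    if acct.posts_per_day > 100:          # inhumanly high volume
        score += 0.4
    if acct.duplicate_post_ratio > 0.5:   # mostly copy-pasted content
        score += 0.4
    if acct.account_age_days < 7:         # brand-new account
        score += 0.2
    return min(score, 1.0)

accounts = [
    Account("longtime_user", posts_per_day=4, duplicate_post_ratio=0.02, account_age_days=1500),
    Account("amplifier_0042", posts_per_day=350, duplicate_post_ratio=0.9, account_age_days=3),
]
for a in accounts:
    flagged = bot_likelihood(a) >= 0.7    # review threshold is an arbitrary choice
    print(f"{a.handle}: score={bot_likelihood(a):.1f} flag_for_review={flagged}")
```

Real systems rely on far richer signals and human review; the point here is only that "banning bot accounts" presupposes some operational definition of bot-like behavior.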
An in-depth understanding of the history and techniques surrounding trolling is essential for addressing its broader societal impacts. Advocating for constructive roles for both white and gray hat trolls can lead to more informed media practices, enhancing the resilience of communities against malicious online behaviors.