Platform governance examined through the lens of algorithmic folklore.
Focus on user discussions of 'shadow banning', a form of content moderation alleged by users but denied by platforms.
Methodological implications for studying algorithmic governance: how users form expectations and cope with perceived injustices.
Tension asserted between platform governance practices and ideal governance values such as clarity and consistency.
User experiences of shadow banning characterized by claims of ideological censorship.
Shadow banning defined as moderation that limits the visibility of a user's content without that user's awareness; the term was reportedly coined in 2001.
Public awareness rose in 2018 as accusations of bias were leveled against platforms such as Twitter.
Platforms deny using shadow banning; accusations often feed into broader debates over content moderation.
Historical Context: The term was coined in 2001 on early web forums, where a user's posts could be hidden from other users' view while the poster remained unaware (a minimal code sketch of this mechanism follows the list below).
Controversy: Definitions vary and are often conflated with broader notions of content suppression; the term is prominent in political discourse.
Perceived Implications: Claims of shadow banning raise concerns about user rights within platform governance frameworks.
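To ground the definition, the following is a minimal sketch, in Python, of the mechanism as users describe it: a shadow-banned author's posts are filtered out of other users' feeds while remaining visible to the author, with no notification in either direction. The Post record, visible_posts helper, and shadow_banned set are hypothetical names for illustration, not any platform's actual implementation.

```python
# Hypothetical sketch of a shadow-ban visibility filter (illustration only;
# not any platform's actual implementation).
from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    author: str
    text: str


def visible_posts(posts: list[Post], viewer: str, shadow_banned: set[str]) -> list[Post]:
    """Return the subset of posts a given viewer can see.

    Posts by shadow-banned authors are hidden from everyone except
    the author, who receives no signal that anything has changed.
    """
    return [p for p in posts if p.author not in shadow_banned or p.author == viewer]


posts = [Post(1, "alice", "hello"), Post(2, "bob", "hi there")]
banned = {"bob"}

# Other users never see bob's post...
assert [p.post_id for p in visible_posts(posts, "carol", banned)] == [1]
# ...while bob still sees his own content, unaware of the ban.
assert [p.post_id for p in visible_posts(posts, "bob", banned)] == [1, 2]
```

The defining feature, as the folklore around the practice emphasizes, is this asymmetry: the same query returns different results for the author and for everyone else, which is precisely what makes the ban so difficult for users to verify.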
Algorithmic governance defined as systems that rely on automated tools to manage user interactions and content; these systems are prone to inconsistency and opacity.
Moderation Practices: Considered central to platform identity, enabling regulatory tasks while simultaneously generating user frustration.
The challenge of accountability arises from a lack of transparency in content screening, reflecting power asymmetries between platforms and users.
Users often share beliefs and narratives about algorithmic behavior; these shared accounts are termed algorithmic folklore.
Users report experiences of perceived shadow bans shaped by emotions, expectations, and community narratives.
Emotions drive user discussions, revealing patterns in expectations about visibility and performance metrics.
Narratives of Control: Frustrated by opaque policies, users construct speculative theories about the motives and behavior of algorithms.
Folk Theories: Users develop informal theories about moderation, often seeing themselves as 'damned' by hidden rules; such theories rest largely on anecdotal evidence.
Common beliefs concern hashtag usage and content characteristics, shaping user behavior as people attempt to align with perceived algorithmic preferences.
The perception that algorithms enforce hidden guidelines leads users to craft elaborate narratives about visibility and engagement outcomes.
Algorithmic Effects: Users express feelings of powerlessness as they navigate a system they perceive as governed by arbitrary decisions.
Revealing Governance Issues: Shadow-banning folklore reflects users' struggle with and against platform governance, raising questions about accountability.
The disconnect between user experience and platform practice highlights ongoing tensions around algorithmic decision-making and transparency.
Users emerge as active but frustrated participants in a system that remains unresponsive to their concerns, a troubling dynamic for the regulation of online platforms.
Broader Implications: Calls for transparency and the ethical reconfiguration of algorithmic governance challenge the legitimacy of platforms' claims to operationalize democratic values.