What are the origins of Trust and Safety? What kind of work did it emerge out of? What problems was it originally trying to solve?
Trust & Safety grew out of early internet moderation work dealing with spam, scams, harassment, and abuse. Early online communities used moderators, bans, and spam filters to keep spaces usable. The goal was to protect users and manage harmful behavior in user-generated content communities.
Name and briefly describe three types of trolling.
Baiting: posting inflammatory comments to provoke angry reactions. Sealioning: repeatedly asking bad-faith questions to exhaust or derail someone. Dogpiling: many users attacking or harassing one person at once.
Why is Trust and Safety described as Sisyphean in nature?
Online harms constantly evolve, so moderation work is never finished, and even rare harms occur frequently at large scale. Like Sisyphus pushing his boulder, platforms solve one problem only for new ones to appear.
What does Tarleton Gillespie mean by “the politics of platforms”?
The word “platform” makes companies sound neutral, but platforms actually make rules and decisions about speech. These decisions shape what people see online and who gets heard.
In “Content or Context Moderation,” what are Robyn Caplan’s three styles of platform moderation? Provide an example of each.
Artisanal moderation: small teams manually review content (example: Vimeo). Community moderation: users help enforce rules (example: Reddit). Industrial moderation: large companies moderate at scale using policies, large teams, and automation (example: Facebook or YouTube).
What is CDA 230 (spell it out) and what impact does it have on Trust and Safety?
CDA 230 refers to Section 230 of the Communications Decency Act (1996). It states that platforms are not treated as the publisher or speaker of user-generated content, so they are generally not legally liable for it. It also allows platforms to moderate harmful content in good faith without losing that protection, which enables large-scale Trust and Safety systems.
Why does Kate Klonick call platform companies “the new governors,” and what are two concerns she raises?
Platforms now set the rules governing online speech, much as governments regulate public spaces. Klonick raises concerns about private control over speech and the lack of democratic accountability for these companies.
Why is the Oversight Board often called the “Facebook Supreme Court,” and why is that comparison imperfect?
It is similar because the board reviews difficult moderation cases and can overturn Meta’s decisions, functioning like an appeals court. However, it differs because it was created and funded by Meta, is not a government institution, and its authority only applies to Meta platforms.
What is agnotology and why do Trust and Safety professionals care about it?
Agnotology is the study of how ignorance or doubt is intentionally produced. Trust and Safety teams study it because bad actors often create confusion rather than obvious lies in order to manipulate public understanding.
What is the difference between misinformation and disinformation, and why is the distinction both useful and challenging for Trust and Safety?
Misinformation is false information shared without intent to deceive, while disinformation is false information shared intentionally to mislead. The distinction is useful because platforms may respond differently, such as education versus enforcement. However, it is difficult because intent is hard to prove and disinformation often spreads as misinformation when others unknowingly share it.
Why does Facebook want to increase “time spent on platform,” and what three features help achieve this?
Platforms want users to stay longer because it increases engagement, advertising revenue, and data collection. Features that encourage this include infinite scroll, autoplay videos, and push notifications.
What are the 3Cs used to categorize kids’ online safety risks, and give an example of each?
Content: exposure to harmful material such as hate speech or sexual content. Contact: harmful interaction with strangers, such as an adult messaging a child online. Conduct: harmful behavior between users, such as cyberbullying.
Name three approaches to age verification and explain their weaknesses.
Self-declaration (entering a birthdate) is weak because users can lie. Government ID verification raises privacy concerns and excludes people without IDs. Biometric age estimation using selfies can have accuracy issues and privacy risks.
In “Techno-Legal Solutionism,” what do Maria Angel and danah boyd criticize about the Kids Online Safety Act’s “duty of care”?
They argue the law assumes platform design can solve complex social problems, which reflects technological solutionism. They worry it could lead to overbroad restrictions on platforms without addressing the underlying social causes of harm.
What is CSAM (spell it out) and why is it treated differently from other types of content?
CSAM stands for Child Sexual Abuse Material. It is illegal in nearly all jurisdictions, so platforms must actively detect, remove, and report it to law enforcement such as the National Center for Missing and Exploited Children. Unlike most speech, it is not protected by free expression laws.