Chapter 11 - Online Falsehoods: Fake News, Information Warfare & Deepfakes

46 Terms

1. Define misinformation

False or inaccurate information shared without intent to deceive; the sharer believes it or is careless.

2. Define disinformation

False information deliberately created and shared with intent to mislead, manipulate, or cause harm.

3. What is fake news?

Fabricated, news-like content that mimics journalism’s form but lacks editorial standards and is designed to deceive.

4. How does fake news differ from misinformation/disinformation?

It specifically imitates news media format while lacking verification/editorial processes; typically organized and systematic.

5. Define information warfare (IW)

Strategic manipulation or suppression of information to gain advantage in conflict or politics; a component of hybrid warfare.

6. Define deepfake

AI-generated or manipulated media (audio/video/image) that convincingly alters identity, speech, or events to fabricate reality.

7. Define troll farm

Organized workforce running coordinated, often inauthentic accounts to seed and amplify narratives and harass opponents.

8. Key cognitive biases behind fake news spread

Selective exposure, confirmation bias, desirability bias—users prefer and believe content aligning with prior beliefs and wishes.

9. How platform algorithms fuel fake news

Engagement-optimized feeds create echo chambers/filter bubbles, repeatedly surfacing similar content and limiting viewpoint diversity.

10. Role of bots in disinformation

Automated accounts rapidly amplify links and memes, inflate perceived popularity, and drive virality across platforms.

11. Why monetization matters

Ad and view-based revenue incentivizes sensational falsehoods and funds further disinformation activity.

12. COVID-19 origin falsehoods and harms

Lab/bioweapon claims, racialized rhetoric; correlated with anti-Asian bias and hate incidents during the pandemic.

13. COVID-19 “5G causes COVID” claims: impacts

Reduced health-protective behaviors; harassment of telecom workers; dozens of arson attacks on cell masts in the UK.

14. COVID-19 vaccine disinformation effects

Increased vaccine hesitancy/refusal across multiple countries; undermined public health campaigns.

15. Examples of fake preventions/cures (COVID-19)

Garlic/kimchi, hot baths/saunas, alcohol, hydroxychloroquine, ivermectin, bleach ingestion/injection—none are effective and some are dangerous.

16. Documented harms from COVID-19 disinformation

Poisonings, hospitalizations, deaths; measurable public health damage during the “infodemic.”

17. Define conspiracy thinking

Attributing events to secret plots by powerful actors, often resistant to counterevidence; fuels disinformation uptake.

18. Define hybrid warfare

Blending conventional military action with cyber operations and information operations/disinformation.

19. Russia–Ukraine IW core narratives (2014 onward)

Crimea as “rightfully” Russian; protection of “compatriots”; Ukraine labeled “Nazi”; claims of “genocide” in Donbas to justify intervention.

20. How Russia distributed disinformation globally

State media (RT/Sputnik), social platforms, Internet Research Agency (IRA) troll farm networks, and bot amplification.

21. Example of atrocity denial/rewriting (Ukraine)

Bucha massacre reframed as staged by Ukraine or committed by Ukrainians; dissent criminalized via “false info” laws.

22. Domestic information suppression in Russia (2022)

Criminalization of “false” reporting about the military (up to 15 years), closure of independent outlets, blocking Western platforms.

23. Define cheap fake

Manipulated media using simple edits (speed changes, cuts, overdubs, recontextualization) rather than advanced AI.

24. Cheap fake with deadly impact: example

An edited Pakistani child-safety PSA recirculated on WhatsApp in India as purported “child-kidnapping CCTV” footage → mob violence and multiple deaths.

25. Photo manipulation harm: example

Image of activist Emma González altered to show her tearing the U.S. Constitution (she was actually tearing a shooting target) → harassment and discrediting campaign.

26. Audio/video manipulation in politics: example

Slowed-down videos made Nancy Pelosi appear to slur her speech; the clips spread widely while moderation responses varied across platforms.

27. Deepfake in active war: example

Zelensky deepfake video instructing Ukrainians to surrender during 2022 invasion; used as battlefield disinformation.

28. Goal of media/information literacy

Equip users to evaluate sources/claims critically and reduce susceptibility to falsehoods.

29. Platform fact-checking approach

Independent raters review items; labels/warnings applied; downranking to reduce reach; removal for the most harmful cases.

30. Germany’s NetzDG (2017) summary

Requires rapid removal of unlawful content by large platforms; imposes heavy fines; criticized as privatized censorship.

31. France’s election disinformation law (2018) summary

Mandates removal of manifestly false information that could harm electoral integrity during campaigns.

32. Core regulatory tension

Balancing free expression (especially in U.S.-style systems) with harm reduction and democratic integrity.

33. Deepfake detection: key forensic cues

Image warping, lighting inconsistencies, unnatural smoothness, odd pixel patterns; ML-based detectors learn these artifacts.

34. Authentication/provenance idea for deepfakes

Device-level watermarking/provenance (“truth layer”) to verify originals; requires broad, cross-industry adoption.

35. Common platform policy gaps on deepfakes

Parody/satire exemptions, limited coverage of “cheap fakes,” inconsistent enforcement for political content.

36. Why legal remedies often fall short

Defamation/privacy/copyright suits are reactive, slow, costly; anonymity/jurisdictional issues; state actors often immune.

37. U.S. state actions on deepfakes (examples)

Bans on political deepfakes near elections in some states; civil remedies for deepfake porn; no comprehensive federal statute.

38. EU Digital Services Act: deepfakes

Providers that become aware content is a deepfake must label it as inauthentic so users are informed.

39. Why deepfakes are uniquely dangerous

They erode evidentiary trust (“seeing is believing”), enable plausible deniability, and scale political and image-based abuse harms.

40. Echo chambers’ impact on democracy

Narrow information diets polarize citizens, entrench false narratives, and distort electoral decision-making.

41. Define filter bubble

Algorithmically curated environment reflecting past preferences, limiting exposure to diverse/contrary viewpoints.

42. Misinformation vs disinformation (exam tip)

Intent difference: misinformation lacks intent to deceive; disinformation is deliberate deception to mislead/manipulate.

43. Why reactive legal remedies are insufficient

Harm often occurs before takedown; litigation delays/costs; cross-border and anonymous perpetrators evade accountability.

44. Comprehensive strategy against online falsehoods

Media literacy, robust fact-checking, detection/authentication tech, targeted regulation, and carefully scoped laws.

45. Summary of real-world harms from online falsehoods

Illness and death (COVID-19 fake cures, vaccine refusal), infrastructure attacks (5G mast arson), hate incidents, voter manipulation, wartime atrocity denial.

46. Core challenge going forward

Coordinating cross-jurisdictional, rights-respecting responses as platforms, technologies, and adversaries rapidly evolve.