Detailed Notes on Fake News Presentation
- April 23, 2013: The Associated Press Twitter hack.
- False tweet: "Breaking news: Two explosions at the White House and Barack Obama has been injured."
- Retweeted 4,000 times in less than five minutes.
- Automated trading algorithms reacted to the tweet, triggering a sudden stock-market plunge.
- Consequence: $140 billion in equity value wiped out in a single day.
- Robert Mueller indictment: Russian interference in the 2016 US presidential election.
- Internet Research Agency (Kremlin's social media arm).
- Reached 126 million people on Facebook.
- Issued 3 million tweets and 43 hours of YouTube content.
- Purpose: Misinformation to sow discord.
- Oxford University study: One-third of information about Swedish elections on social media was fake.
- "Genocidal propaganda": Misinformation has incited mob killings, e.g., against the Rohingya in Burma and in lynchings in India.
The Science of Fake News Spread
- Longitudinal study on Twitter from 2006-2017.
- Verified true and false news stories.
- Six independent fact-checking organizations used for verification.
- Measured diffusion, speed, depth, and breadth.
- Key Finding: False news diffuses further, faster, deeper, and more broadly than true news.
- False political news is the most viral.
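The diffusion measures above (size, depth, breadth of a retweet cascade) can be sketched in a few lines. This is an illustrative reconstruction, not code from the study; the function name and edge-list representation are my own.

```python
from collections import defaultdict

def cascade_stats(edges, root):
    """Compute (size, depth, max-breadth) of a retweet cascade.

    edges: list of (parent, child) retweet pairs; root is the original tweet.
    Depth = length of the longest retweet chain from the root;
    breadth = the largest number of nodes at any single depth.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    size, max_depth = 0, 0
    level_counts = defaultdict(int)   # nodes per depth level
    frontier = [(root, 0)]
    while frontier:
        node, depth = frontier.pop()
        size += 1
        max_depth = max(max_depth, depth)
        level_counts[depth] += 1
        for child in children[node]:
            frontier.append((child, depth + 1))
    return size, max_depth, max(level_counts.values())
```

For example, a cascade where A is retweeted by B and C, and B is retweeted by D, has size 4, depth 2, and breadth 2 (B and C share a level).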
Why Does False News Spread Faster?
- Initial Hypotheses:
- False news spreaders have more followers, are more active, etc.
- Finding: The opposite. False-news spreaders have fewer followers, follow fewer people, are less active, are rarely verified, and have been on Twitter for less time.
- False news is 70% more likely to be retweeted than the truth, even controlling for other factors.
- Novelty Hypothesis:
- Human attention is drawn to novelty.
- Sharing novel information provides status.
- Measured novelty of tweets compared to the individual's 60-day Twitter history.
- Sentiment Analysis:
- False news elicits more surprise and disgust.
- True news elicits more anticipation, joy, and trust.
- Surprise supports the novelty hypothesis.
- Role of Bots:
- Bots accelerate the spread of both false and true news at approximately the same rate.
- Humans are primarily responsible for the differential spread.
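The novelty measurement above (comparing a tweet to the user's prior 60-day history) relied on topic-model distances in the study. As a rough, hypothetical stand-in, a bag-of-words cosine distance conveys the idea: a tweet is novel if it resembles nothing the user recently saw.

```python
import math
from collections import Counter

def novelty(tweet, history):
    """Score a tweet's novelty against a user's recent tweets (e.g., 60 days).

    Novelty = 1 - max cosine similarity between the tweet's word counts
    and each prior tweet's word counts. 1.0 = entirely unlike the history.
    """
    def cosine(a, b):
        overlap = sum(a[w] * b[w] for w in a if w in b)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return overlap / norm if norm else 0.0

    vec = Counter(tweet.lower().split())
    sims = [cosine(vec, Counter(h.lower().split())) for h in history]
    return 1.0 - max(sims, default=0.0)
```

Under this measure, a tweet sharing no vocabulary with the history scores 1.0, while a near-duplicate scores close to 0.0; the study's finding was that false news scored systematically more novel.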
The Rise of Synthetic Media
- Synthetic media: highly convincing fake video and audio ("deepfakes").
- Powered by two technologies:
- Generative adversarial networks (GANs):
- Discriminator: Tries to distinguish real media from synthetic media.
- Generator: Produces synthetic media designed to fool the discriminator.
- The two networks improve through competition, yielding increasingly realistic fakes.
- Democratization of AI: Easy deployment of AI algorithms for synthetic media creation.
- Example: Doctored White House video of a journalist interacting with an intern.
- Frames removed to make actions seem more aggressive.
- Used to justify revoking Jim Acosta's (CNN reporter) press pass.
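The generator/discriminator tug-of-war can be illustrated with a toy one-dimensional GAN written from scratch. Real systems use deep networks on images or audio; here both players are two-parameter models, the learning rate and step count are arbitrary demo choices, and the "real data" is just a Gaussian, but the adversarial training logic is the same.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c): estimates P(x is real).
w, c = 0.0, 0.0
# Generator G(z) = a*z + b: maps noise z ~ N(0, 1) to a fake sample.
a, b = 1.0, 0.0

lr = 0.05
REAL_MEAN, REAL_STD = 4.0, 1.0  # the "real data" distribution

for step in range(2000):
    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    x_fake = a * random.gauss(0, 1) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradient descent on -log D(real) - log(1 - D(fake))
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # --- Generator step: push D(fake) -> 1 (fool the discriminator) ---
    z = random.gauss(0, 1)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_out = (1 - d_fake) * w  # gradient of -log D(G(z)) w.r.t. G's output
    a += lr * grad_out * z
    b += lr * grad_out

fake_mean = b  # E[G(z)] = a*E[z] + b = b, since z has zero mean
```

After training, the generator's output mean drifts from 0 toward the real data's mean of 4: the generator learns to produce samples the discriminator can no longer reliably reject.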
Potential Solutions and Their Challenges
- Labeling:
- Food is labeled extensively (ingredients, nutrition facts), yet the information we consume carries no labels at all.
- Challenges: Who decides what’s true/false? Governments, Facebook, independent fact-checkers?
- Incentives:
- Much misinformation during the 2016 US presidential election was produced in Macedonia for ad revenue; the motive was economic, not political.
- Depressing the spread of misinformation cuts the ad revenue it earns, weakening the economic incentive to produce it.
- Regulation:
- Exploring regulation of Facebook and political speech.
- Dangers: Authoritarian regimes can use anti-misinformation laws to suppress dissenting and minority opinions (e.g., Malaysia's fake-news law, carrying prison sentences of up to six years).
- Transparency:
- Need to understand how Facebook's algorithms work.
- Transparency paradox: Balancing openness (sharing data and algorithms with researchers) against user privacy and data security.
- Algorithms and Machine Learning:
- Technology to root out fake news.
- Humans must be in the loop.
- Ethics and philosophy are essential to defining truth and falsity; technology cannot solve this.
Conclusion
- Truth is at the core of human decision-making and cooperation.
- The rise of fake news threatens our ability to distinguish reality from falsehood.
- We need vigilance in defending the truth through technology, policy, and individual action.