Lecture 4 - Disinformation & Privacy

22 Terms

1

Potential Criteria for a Deepfake

A deepfake is manipulated media; whether content counts as one can be judged by its data type, the technology used to create it, the degree of manipulation, the effect on the information conveyed, whether it depicts a real or fictional person, and how believable it is to viewers.

2

Legal Problems Caused by Deepfakes

  • Disinformation – spreading false information

  • False Evidence – can prolong trials as evidence is challenged

  • Affronts to Dignity – e.g., nudify apps

  • Commercial Exploitation – using people’s likeness for profit

3

NO FAKES Act (just know it exists)

  • NO FAKES Act (draft, USA)

    • Protects a person’s voice, appearance, and likeness (you can’t replicate them)

  • Key idea: It’s about protecting a person’s likeness.

  • Everything else (life + 10–70 years of protection, exclusions, takedown procedures, criticisms) is extra detail and not core for exams or slides.

4

Disinformation

  • Intentional, unlike misinformation

  • Threatens:

    • Democracy

    • Public health

    • National security

  • Creates echo chambers and filter bubbles

    • Echo chamber: you only receive information from people who share your opinion

    • Filter bubble: algorithms mainly show you what you already like or agree with

5

Facebook Algorithm

  • Facebook’s algorithm promotes content that gets many reactions/likes, so clickbait and anger-inducing posts spread quickly.

  • Troll farms exploit this to spread fake news for clicks and advertising revenue (illustrated below).
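
As a rough illustration of why engagement-driven ranking favours outrage, here is a minimal Python sketch. The Post fields, the weights, and engagement_score are invented for the example; this is not Facebook’s actual formula.

  from dataclasses import dataclass

  @dataclass
  class Post:
      text: str
      likes: int
      comments: int
      shares: int

  # Hypothetical weights: comments and shares count more than likes,
  # so posts that provoke reactions outrank sober ones.
  def engagement_score(p: Post) -> int:
      return p.likes + 2 * p.comments + 3 * p.shares

  feed = [
      Post("Measured fact-check", likes=40, comments=5, shares=2),
      Post("Outrage clickbait", likes=30, comments=50, shares=40),
  ]
  feed.sort(key=engagement_score, reverse=True)
  print([p.text for p in feed])  # the clickbait post ranks first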

6

How to Tackle Disinformation

  • Transparency – Algorithms must be public

  • Intelligibility – Users must understand them

  • Accountability – Platforms must be held responsible

7

Regulatory Landscape – Algorithmic Disinformation

  • US: No specific laws; relies on self-regulation and a market-based approach

  • EU (some member states, e.g., France): Laws such as the “Manipulation of Information Law”; transparency rules, but limited accountability

  • China: Stricter laws; transparency + platform accountability + more user choice

8

Regulatory Challenges – Algorithmic Disinformation

  • Transparency alone may mean little to average users

  • Users can learn about systems but have little power to change them

  • Must consider Freedom of Expression

9

EU AI Act – Article 50(2)

AI outputs must be marked as synthetic (clearly artificially generated); the marking techniques should be reliable; exceptions apply to minor edits, assistive tools, and legally authorised crime detection.

10

3 Types of Watermarks

  1. Visible – Can be seen on the content (e.g., logo on image/video)

  2. Invisible – Hidden in the content, not noticeable to the eye or ear

  3. Metadata – Embedded information in the file about its origin or copyright
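
To make the invisible category concrete, here is a minimal Python sketch of least-significant-bit (LSB) embedding, a textbook invisible-watermark technique. The function names and the toy byte carrier are invented for illustration; real provenance watermarks are far more robust to compression and editing.

  def embed_lsb(pixels: bytearray, message: bytes) -> bytearray:
      """Hide the message in the lowest bit of each carrier byte."""
      bits = [(byte >> i) & 1 for byte in message for i in range(8)]
      if len(bits) > len(pixels):
          raise ValueError("carrier too small for message")
      out = bytearray(pixels)
      for k, bit in enumerate(bits):
          out[k] = (out[k] & 0xFE) | bit  # overwrite the lowest bit only
      return out

  def extract_lsb(pixels: bytes, n_bytes: int) -> bytes:
      """Read back n_bytes of hidden data from the carrier."""
      message = bytearray()
      for b in range(n_bytes):
          byte = 0
          for i in range(8):
              byte |= (pixels[b * 8 + i] & 1) << i
          message.append(byte)
      return bytes(message)

  carrier = bytearray(range(256)) * 2            # stand-in for raw image bytes
  marked = embed_lsb(carrier, b"AI-generated")   # visually identical to carrier
  assert extract_lsb(marked, 12) == b"AI-generated"

Because only the lowest bit of each byte changes, the marked data looks identical to the original to the human eye, which is exactly what distinguishes invisible watermarks from visible ones.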

11

AI-Generated Content Risks

  • False positives: Legitimate content wrongly flagged as AI-generated

  • Spoofing attacks: Fakes used to damage reputation

12

EU AI Act – Article 50(4)

  • Deployers of AI deepfakes must disclose content as artificially generated or manipulated

  • Exception: Authorized by law for crime detection/investigation

  • Artistic/creative content: Only the existence of manipulation needs to be disclosed, in a way that does not spoil enjoyment of the work

13

Who is a “Deployer”?

A deployer is any person or organisation using an AI system under their authority, except where the use is personal and non-professional.

14

Dangers of Watermarking

  • Can stifle freedom of speech

  • May slow down technology

  • Could hinder whistleblowing

  • Enables mass surveillance and government abuse

15

GDPR (General Data Protection Regulation)

  • Applies only to personal data

  • De-identified, pseudonymised, or encrypted data still counts as personal data if it can be used to re-identify someone

  • Lawful processing requires:

    • Consent (not the only option)

    • Contract performance

    • Public interest or legitimate interest (must outweigh person’s rights)

  • Sensitive data (race, politics, religion, genetics, biometrics, sexual orientation) = higher protection, only processed with explicit consent, public interest, or legal basis (Art. 9)

16

Deepfakes: Personal Data?

A deepfake generally falls outside “personal data” when it depicts:

  • Fictional, not real, persons

  • Deceased people

  • Companies or states

  • Composites of several people (depends on resemblance to the originals)

17

Article 8 vs Article 10 (European Court of Human Rights)

  • Article 8 – Right to Privacy: Protects honour; public figures tolerate more intrusion/mockery

  • Article 10 – Freedom of Expression: Right to shock, offend, disturb; includes satire

18

Digital Services Act – Article 34

  • Very Large Online Platforms/Search Engines must identify, analyse, and assess systemic risks

  • Risks include: illegal content, effects on fundamental rights, civic discourse, public security

19

Digital Services Act – Article 35

  • Mitigation measures must be reasonable, proportionate, effective

  • Manipulated content (fake news/deepfakes) must be clearly marked

  • Provide an easy way for users to flag false information

20

2022 Code of Practice on Disinformation – Key Measures

  • Avoid placing ads next to disinformation

  • Increase transparency (e.g., label political ads)

  • Reduce manipulative behaviour (fake accounts, bots, deepfakes)

  • Enhance tools to recognise, flag, and access authoritative sources

  • Support research and fact-checking

21

2022 Code of Practice – Participation

  • Voluntary – 33 companies signed

  • Not all measures agreed

  • Some major platforms did not sign

22

What are the problems with deepfake enforcement?

Deepfake enforcement is tricky because it involves privacy, free speech, satire, evolving social norms, and tech limitations.