Potential Criteria for a Deepfake
A deepfake is manipulated media, judged by its data type, creation technology, degree of manipulation, effect on the information, whether it shows a real or fictional person, and how believable it is to viewers.
Legal Problems Caused by Deepfakes
Disinformation – spreading false information
False Evidence – can prolong trials as evidence is challenged
Affronts to Dignity – e.g., nudify apps
Commercial Exploitation – using people’s likeness for profit
NO FAKES Act (draft, USA) – just know it exists
Protects a person's voice, appearance, and likeness (others may not replicate them)
Key idea: it's about protecting a person's likeness.
Everything else (the life + 10–70 year term, exclusions, takedown procedures, criticisms) is extra detail and not core for exams or slides.
Disinformation
Intentional, unlike misinformation
Threatens:
Democracy
Public health
National security
Creates echo chambers and filter bubbles
Echo chamber: you only get information from people who share your opinion
Filter bubble: algorithms mainly show you what you already like or agree with
Facebook Algorithm
Facebook's algorithm promotes content that gets many reactions/likes, so clickbait and angry posts spread quickly (see the sketch below).
Troll farms abuse this to spread fake news for clicks and ad revenue.
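To make the mechanism concrete, here is a minimal sketch of an engagement-weighted feed (the Post fields and the weights are illustrative assumptions, not Facebook's actual code). Accuracy never enters the score, so whatever provokes the most reactions rises to the top:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments and shares signal stronger
    # engagement than passive likes, so they count for more.
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note what is missing: accuracy is never an input.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=40, comments=2, shares=1),
    Post("OUTRAGEOUS fake claim!!!", likes=30, comments=25, shares=20),
])
print([p.text for p in feed])  # the outrage post ranks first
```

This is also how echo chambers and filter bubbles self-reinforce: the content you react to is exactly the content the ranking shows you more of.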
How to Tackle Disinformation
Transparency – Algorithms must be public
Intelligibility – Users must understand them
Accountability – Platforms should be responsible
Regulatory Landscape – Algorithmic Disinformation
US: No laws; relies on self-regulation and a market-based approach
EU (some countries, e.g., France): Laws like “Manipulation of Information Law”; transparency rules, but limited accountability
China: Stricter laws; transparency + platform accountability + more user choice
Regulatory Challenges – Algorithmic Disinformation
Transparency may be limited for average users
Users can learn about systems but have little power to change them
Must consider Freedom of Expression
EU AI Act – Article 50(2)
AI outputs must be clearly marked as synthetic (i.e., artificially generated); the marking techniques should be reliable; exceptions apply to minor edits, assistive tools, and legally authorised crime detection.
3 Types of Watermarks
Visible – Can be seen on the content (e.g., logo on image/video)
Invisible – Hidden in the content, not noticeable to the eye or ear (see the sketch after this list)
Metadata – Embedded information in the file about its origin or copyright
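As a toy illustration of the invisible kind, the sketch below hides a text marker in the least significant bit of each pixel's red channel. This is a deliberately fragile scheme for teaching purposes; production provenance watermarks are far more robust and survive compression and resizing:

```python
from PIL import Image

MARK = "AI-GENERATED"

def embed(img: Image.Image, text: str) -> Image.Image:
    # One bit per pixel: the message bytes followed by a NUL terminator.
    bits = "".join(f"{b:08b}" for b in text.encode()) + "00000000"
    out = img.convert("RGB")  # convert() returns a copy
    px = out.load()
    width = out.size[0]
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    return out

def extract(img: Image.Image) -> str:
    px = img.convert("RGB").load()
    width = img.size[0]
    data, bits, i = bytearray(), "", 0
    while True:
        x, y = i % width, i // width
        bits += str(px[x, y][0] & 1)
        if len(bits) == 8:
            if bits == "00000000":  # hit the terminator
                break
            data.append(int(bits, 2))
            bits = ""
        i += 1
    return data.decode()

marked = embed(Image.new("RGB", (64, 64), "white"), MARK)
assert extract(marked) == MARK  # invisible to the eye, readable by code
```

The same fragility explains the risks listed next: a watermark that is easy to strip or forge can both miss real AI content and be planted on genuine footage.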
AI-Generated Content Risks
False positives: Legitimate content wrongly flagged as AI-generated
Spoofing attacks: Fakes used to damage reputation
EU AI Act – Article 50(4)
Deployers of AI deepfakes must disclose that the content is artificially generated or manipulated
Exception: Authorized by law for crime detection/investigation
Artistic/creative content: Only need to disclose that manipulation exists, without harming enjoyment
Who is a “Deployer”?
A deployer is any person or body using an AI system under its authority, except for personal, non-professional use.
Dangers of Watermarking
Can stifle freedom of speech
May slow down technology
Could hinder whistleblowing
Enables mass surveillance and government abuse
GDPR (the General Data Protection Regulation)
Applies only to personal data
De-identified, pseudonymised, or encrypted data still counts as personal data if it can be used to re-identify someone (a pseudonymisation sketch follows this list)
Lawful processing requires:
Consent (not the only option)
Contract performance
Public interest or legitimate interest (must outweigh person’s rights)
Sensitive data (race, politics, religion, genetics, biometrics, sexual orientation) gets higher protection and may only be processed with explicit consent, public interest, or another legal basis (Art. 9)
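To see why pseudonymised data can still be personal data, here is a minimal sketch (the key, names, and field names are hypothetical). A keyed hash hides identities, but anyone holding the key can rebuild the mapping and re-identify people, so the GDPR still applies:

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-controller"  # hypothetical key

def pseudonymise(name: str) -> str:
    # Deterministic keyed hash: the same name always yields the same
    # pseudonym, so records remain linkable across datasets.
    return hmac.new(SECRET_KEY, name.encode(), hashlib.sha256).hexdigest()[:12]

records = [{"name": "Alice Example", "diagnosis": "flu"}]
pseudo = [{"id": pseudonymise(r["name"]), "diagnosis": r["diagnosis"]}
          for r in records]

# Re-identification: whoever holds the key (or a candidate list of
# names) can rebuild the mapping, which is why this still counts as
# personal data under the GDPR.
mapping = {pseudonymise(n): n for n in ["Alice Example", "Bob Example"]}
print(mapping[pseudo[0]["id"]])  # -> Alice Example
```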
Deepfakes: Personal Data?
A deepfake may fall outside the GDPR when it depicts:
Fictional (not real) persons
Deceased people
Companies or states
Composites of several people (depends on resemblance to the originals)
Article 8 vs Article 10 (European Court of Human Rights)
Article 8 – Right to Privacy: Protects honour; public figures tolerate more intrusion/mockery
Article 10 – Freedom of Expression: Right to shock, offend, disturb; includes satire
Digital Services Act – Article 34
Very Large Online Platforms/Search Engines must identify, analyse, and assess systemic risks
Risks include: illegal content, effects on fundamental rights, civic discourse, public security
Digital Services Act – Article 35
Mitigation measures must be reasonable, proportionate, effective
Must ensure manipulated content (fake news/deepfakes) is clearly marked
Provide easy way for users to flag false information
2022 Code of Practice on Disinformation – Key Measures
Avoid placing ads next to disinformation
Increase transparency (e.g., label political ads)
Reduce manipulative behaviour (fake accounts, bots, deepfakes)
Enhance tools to recognise, flag, and access authoritative sources
Support research and fact-checking
2022 Code of Practice – Participation
Voluntary – 33 companies signed
Not all measures agreed
Some major platforms did not sign
What are the problems with deepfake enforcement?
Deepfake enforcement is tricky because it involves privacy, free speech, satire, evolving social norms, and tech limitations.