Vocabulary-style flashcards covering the key concepts from the lecture notes on privacy law, AI governance, copyright, and surveillance capitalism.
Facebook–Cambridge Analytica scandal
A major privacy incident in which data from approximately 87 million Facebook users was harvested, without their consent, through personality quizzes and the Facebook Open API and used for psychographic profiling.
OCEAN Model
A framework for psychographic profiling consisting of five personality traits: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
Psychographic profiling
The practice of using digital behavior (likes, shares) to predict personality traits and build thousands of data points per person to enable microtargeting.
Microtargeting
The use of tailored political ads that are invisible to non-targets, often blending real news with misinformation to bypass disclosure and consent requirements.
Transparency (Privacy Principle)
The requirement for companies to clearly define what data is collected, how long it is stored, and who it is shared with.
Disclosure (Privacy Principle)
A principle granting users the ability to see all personal data retained by a platform.
Control (Privacy Principle)
A set of user rights including opt-in (rather than opt-out) consent, the right to delete data, and the honoring of "Do Not Track" requests.
Notification (Privacy Principle)
The requirement for mandatory data breach disclosures and the labeling of political and paid content.
Inferred Vulnerabilities
Classifications derived by AI systems from user chats (such as health risks or emotional states) that can flow into advertising or insurance ecosystems.
Blurred Data Boundaries
The practice of multiproduct companies merging chat data, search history, purchases, and social media activity for profiling purposes.
Human Authorship Requirement
A core U.S. copyright principle stating that fully AI-generated works are not copyrightable; protection extends only to human contributions, such as the selection and arrangement of AI-generated material.
Digital Replicas
Unauthorized AI voice or likeness clones, such as "Fake Drake," which highlight the need for a federal right protecting individual voice and likeness.
EU AI Act
A product-safety-oriented EU regulation that governs AI systems through a risk-based classification (unacceptable, high, low risk) and applies extraterritorially.
Fundamental Rights Impact Assessments
Evaluations required by the EU AI Act before public deployment of high-risk systems, though critics note they lack mandatory mitigation requirements.
Surveillance Capitalism
An economic system that extracts human experience as raw material, converting behavior into data to predict and modify future behavior.
Behavioral Surplus
Data captured beyond what is needed for service improvement, which is monetized through prediction products and behavioral futures markets.
Instrumentarian Power
A new form of power that operates through continuous monitoring and computational control rather than violence or ideology.
Big Other
The pervasive sensing infrastructure that constitutes the heart of instrumentarian power and surveillance capitalism.
Right to Future Tense
A human right challenged by surveillance capitalism: the individual's autonomy and freedom to exercise free will without behavioral shaping.
Risk-based system (EU AI Act)
A governance framework where AI systems are categorized as unacceptable (banned), high-risk (regulated), or low-risk (exempt).