Privacy, Governance, Creativity, & Surveillance Lecture Notes

Description and Tags

Vocabulary-style flashcards covering the key concepts from the lecture notes on privacy law, AI governance, copyright, and surveillance capitalism.

Last updated 7:28 PM on 5/2/26

20 Terms

1

Facebook–Cambridge Analytica scandal

A major privacy incident in which approximately 87 million users' data was harvested, without consent, through personality quizzes and Facebook's Open Graph API for psychographic profiling.

2

OCEAN Model

A framework for psychographic profiling consisting of five personality traits: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.

3

Psychographic profiling

The practice of using digital behavior (likes, shares) to predict personality traits and build thousands of data points per person to enable microtargeting.

4

Microtargeting

The use of tailored political ads that are invisible to non-targets, often blending real news with misinformation to bypass disclosure and consent requirements.

5

Transparency (Privacy Principle)

The requirement for companies to clearly define what data is collected, how long it is stored, and who it is shared with.

6

Disclosure (Privacy Principle)

A principle granting users the ability to see all personal data retained by a platform.

7

Control (Privacy Principle)

A set of user rights including opt-in (rather than opt-out) consent, the right to delete data, and the respect of "Do Not Track" requests.

8

Notification (Privacy Principle)

The requirement for mandatory data breach disclosures and the labeling of political and paid content.

9

Inferred Vulnerabilities

Classifications derived by AI systems from user chats (such as health risks or emotional states) that can flow into advertising or insurance ecosystems.

10

Blurred Data Boundaries

The practice of multiproduct companies merging chat data, search history, purchases, and social media activity for profiling purposes.

11

Human Authorship Requirement

A core U.S. copyright principle stating that fully AI-generated works are not copyrightable; protection is only granted to human selection and arrangement.

12

Digital Replicas

Unauthorized AI voice or likeness clones, such as "Fake Drake," which highlight the need for a federal right protecting individual voice and likeness.

13

EU AI Act

A product-safety-oriented regulation that uses a risk-based system (unacceptable, high, low) to govern AI systems extraterritorially.

14

Fundamental Rights Impact Assessments

Evaluations required by the EU AI Act before public deployment of high-risk systems, though critics note they lack mandatory mitigation requirements.

15

Surveillance Capitalism

An economic system that extracts human experience as raw material, converting behavior into data to predict and modify future behavior.

16

Behavioral Surplus

Data captured beyond what is needed for service improvement, which is monetized through prediction products and behavioral futures markets.

17

Instrumentarian Power

A new form of power that operates through continuous monitoring and computational control rather than violence or ideology.

18

Big Other

The pervasive sensing infrastructure that constitutes the heart of instrumentarian power and surveillance capitalism.

19

Right to Future Tense

The right to exercise free will and shape one's own future, a form of individual autonomy that surveillance capitalism challenges through behavioral prediction and shaping.

20

Risk-based system (EU AI Act)

A governance framework where AI systems are categorized as unacceptable (banned), high-risk (regulated), or low-risk (exempt).