INFO 1200 Prelim Prep

24 Terms

1

technology narratives

The stories, discourses, or frameworks that describe and interpret the development, use, and impact of technology on society. These narratives often shape the way technologies are perceived, their potential benefits or harms, and the broader social, political, or economic changes they might bring about.

Ex: Technological determinism

A relevant example of a technology narrative is technological determinism, which is the belief that technology drives social change and that human agency plays a secondary role. In this narrative, technology is seen as an autonomous force that shapes culture, economics, and politics in a linear and inevitable way. For instance, the rise of social media platforms like Facebook or Twitter might be framed as causing the erosion of traditional media and transforming social interaction, with little attention paid to how humans choose to use these platforms or how regulation might shape their impact.

Sign:

How we frame and talk about technology shapes how we think about and approach policy

Ethics: Technology narratives influence public perception of moral issues related to technology. If prominent narratives downplay ethical concerns over data exploitation or environmental degradation, it makes it harder to argue for ethical limits on tech development.

Law: Narratives shape legal frameworks by influencing lawmakers' views on technology’s role in society. If the dominant narrative around artificial intelligence (AI) is one of economic growth and efficiency, policymakers might prioritize legislation that promotes innovation over laws that protect against algorithmic bias or AI-driven unemployment.

Policy: Understanding technology narratives is crucial for creating balanced policies. Narratives shape public opinion and can lead to regulatory gaps if the complexities of technology are oversimplified. For instance, narratives that overly celebrate tech innovation might ignore the need for stronger data protection laws or antitrust regulations.

2

technological neutrality

Tech is merely a tool, not an agent. It does not have beliefs or values and is neither good nor bad. Tech has no impact on our own values, goals, or beliefs. Humans are fully in control of technology and of whether it is used for good or bad.

Ex: Brey's hammer example

For example, a hammer can be used to hammer nails, but also to break objects, to kill someone, to flatten dough, to keep a pile of paper in place or to conduct electricity. These uses have radically different effects on the world, and it is difficult to point to any single effect that is constant in all of them. The hammer example, and other examples like it (a similar example could be given for a laptop), suggest strongly that the neutrality thesis is true.

Sign:

Law: A belief in technological neutrality might lead to under-regulation or "hands-off" legal approaches. Lawmakers might assume that because the technology itself is neutral, it is only user behavior that needs regulation. However, this can result in laws that overlook the role of design choices, data collection practices, or built-in biases in shaping the outcomes of technology.

Brey says - Under technological neutrality, ethics should not pay much attention to technological artefacts themselves, because they in themselves do not ‘do’ anything. Rather, ethics should focus on their usage alone.

3

technological determinism

The narrative that technological development is inevitable, that tech imposes its own values on us and transforms us socially, and that we are not in control; tech is an autonomous, external force that acts upon society.

Ex: Nick Carr - Atlantic essay

In a widely read essay in The Atlantic (2008), Nick Carr posited that Google is "making us stupid."

As Carr tells it, "someone, or something," changed him. He was the passive recipient transformed by an outside force. As he himself articulates, Carr's essay is in keeping with a long-standing tradition of technological determinism in which the technology is conceptualized as an external agent that acts upon and changes society.

Sign:

Ethics: Technological determinism can shift ethical responsibility away from human actors, implying that negative consequences of technology (like social media addiction or algorithmic bias) are inevitable outcomes of technological progress. This can lead to a fatalistic attitude toward addressing ethical concerns, as it suggests that we are powerless to shape how technologies evolve and impact society.

Law: Legal frameworks based on technological determinism might be reactive rather than proactive. For example, if lawmakers assume that automation or AI will inevitably lead to job displacement, they may focus on managing its consequences (like unemployment benefits) rather than regulating the development of AI in ways that protect workers or promote more inclusive innovation.

4

SCOT (social construction of technology)

Baym - people are the primary sources of change in both technology and society. Focuses on the various actors and institutions involved in making tech. Looks at the ways users and non-users change the meaning of tech.

Ex: Baym Video Game Females

In the contemporary context, one might look at the female avatars available in online games, characters that are almost uniformly shaped like pornographic fantasy figures, and posit that this is related to their having been designed by people, primarily men, who are embedded in a patriarchal culture that views women as sex objects and who think of their primary audience as men and boys. This view of the technology aligns with SCOT, in that it focuses on how the actors involved in making the technology shape its meaning.

Sign:

Ethics: SCOT highlights that technology is not value-neutral, as it is shaped by the interests and values of different social groups. This understanding is important for ethical analysis because it shows that the ethical implications of technology—such as privacy concerns, bias, or inequality—are the result of human decisions. By recognizing this, we can hold developers and other stakeholders accountable for ensuring ethical considerations are included in the design and use of technology.

Law: From a legal perspective, SCOT emphasizes that laws governing technology should not treat it as a static entity but as something shaped by social forces. Laws and regulations should focus on the social dynamics that influence technology’s development, such as who has access to create or modify it and whose interests it serves. This perspective is critical for crafting laws around emerging technologies, like AI or genetic engineering, where societal impacts are shaped by various groups and interests.

Under SCOT, we can influence tech’s development in ways that align with social justice, equity, and ethical principles. Recognizing this opens up space for more democratic participation in the shaping of technology and ensures that it reflects a broader range of societal values.

5

domestication of technology

The taken-for-grantedness of tech. Concerned with a particular stage in the development and adoption of tech - when it goes from being noteworthy to mundane. Explains why deterministic narratives are so pervasive in the early stages of technological adoption.

Ex: Domestication of the internet through advice columns

In early letters, particularly those prior to 2000, there was a very clear norm that the internet was dangerous. Internet users were often described as junkies, addicts, recluses or, at best and on average "fairly decent people" (as Ann Landers wrote in 1994).

Once this more nuanced understanding had been reached, the internet continued to appear as a character in letters to advice columns, but the tone changed considerably. For instance, the writer of a 2004 letter about a fiance who had placed a personal ad on an online dating site was told that her fiance "does not understand the responsibilities and obligations of marriage" and that "he might run off with the neighbor's wife." In contrast to earlier replies in which Ann and Abby bemoaned an "epidemic" of home-wrecking due to the internet, the internet was not even mentioned in this response. By 2004, it had become almost invisible.

Sign:

Ethics: The domestication of technology raises ethical questions about how pervasive technologies shape human behavior, relationships, and privacy. For example, the widespread use of smartphones has introduced ethical concerns related to digital addiction, surveillance, and the erosion of work-life boundaries. As technologies become domesticated, it's important to examine how they affect users' autonomy, well-being, and freedom in both personal and societal contexts.

Policy: Policymakers need to consider the long-term impacts of domesticated technologies on society. For instance, as digital devices and platforms have become integral to education, work, and socialization, policy must address the digital divide to ensure equitable access. Policies regarding internet access and tech infrastructure become critical as technology integrates into every facet of modern life. Additionally, policies may need to address emerging concerns such as the environmental impact of producing and discarding domesticated technologies like smartphones and laptops.

6

public values

Brey - Moral and social values that are widely accepted by society.

Ex: Brey proposed a set of values that disclosive computer ethics should focus on. This list included justice (fairness, non-discrimination), freedom (of speech, of assembly), autonomy, privacy, and democracy. He writes that many other values could be added, like trust, community, human dignity, and moral accountability.

Public value of privacy

As data collection by companies like Google and Facebook expanded, concerns about individuals' right to privacy became a central public value. This concern led to regulatory frameworks like the European Union’s General Data Protection Regulation (GDPR), which enforces strict privacy protections and gives citizens more control over their personal data.

Sign:

Ethics: Public values play a crucial role in shaping the ethical use of technology. For instance, the value of equity is central in debates over algorithmic bias, where AI systems used in hiring, criminal justice, or lending may reinforce discrimination. Ethical technology development requires aligning design choices with public values, ensuring that technologies promote fairness, transparency, and inclusivity.

Law: Public values directly shape legal frameworks. Laws around intellectual property, freedom of speech, or data protection are created to reflect the public's interest in balancing innovation with societal well-being. For example, laws protecting free speech on the internet are weighed against the public value of security, which leads to legal limits on hate speech or misinformation that could threaten public safety or social order.

Policy: Policymakers must prioritize public values when crafting technology regulations and policies. For example, the value of environmental sustainability has led to policies promoting the responsible disposal of electronic waste and reducing the carbon footprint of tech industries. Similarly, public values like accessibility and equity are central to policies aimed at closing the digital divide, ensuring that all individuals, regardless of socioeconomic status, have access to the benefits of the internet and digital tools.

Effective technology policies and regulations must reflect and protect public values to ensure that technological development aligns with the broader goals of justice, equality, privacy, and public welfare. In turn, this helps build public trust in both technology and governance.

7

modes of regulation

Lawrence Lessig's framework in "Code is Law" refers to the four "modes" or mechanisms through which behavior can be regulated in society: norms, the market, architecture, and law. These forces are interdependent; how we connect them represents different narratives for describing the relationship between technology and society.

Ex: Brazil’s ban on X

Brazil’s supreme court upheld a ban on Elon Musk’s platform over “illegal conduct”. One of the judges said that X’s refusal to comply with local laws suggests it considers itself “above the rule of law”.

Example of law regulating behavior by banning the platform. Norms suggest that users may migrate to similar platforms. Market incentives, like the value users place on free expression, help explain why users gravitated towards X in the first place. Architecture is reflected in the reasons for the original ban: X's design allowed misinformation to spread because certain accounts were not banned.

Sign:

Understanding Lessig's framework of modes of regulation is critical for developing effective ethical analysis, laws, and policies surrounding technology. It highlights the interplay between different regulatory mechanisms, allowing for a more holistic approach to governance. By recognizing that behavior can be shaped through law, norms, architecture, and markets, stakeholders can create more nuanced and effective strategies for addressing the complex challenges posed by emerging technologies. This comprehensive perspective can help ensure that technologies are developed and utilized in ways that align with public values and promote societal well-being.

8

Capital-P vs lower case-p policy

Capital-P Policy

Formal laws and rules that govern tech. Developed by lawmakers, regulatory agencies, judges, and international standards organizations. Follows well-defined procedures.

Lower case-p policy

De facto policy as enacted by the actors who control the tech we use. Developed by private companies, engineers, online communities. Procedures for development not always transparent or public. Can be formally or informally enforced.

Capital-P Policy Example:

General Data Protection Regulation (GDPR) in the European Union is a capital-P policy that sets comprehensive legal standards for data protection and privacy, detailing obligations for organizations that handle personal data, enforcement mechanisms, and penalties for non-compliance.

Lower case-p policy example:

Remote work practices. Within organizations, there are often informal practices or unwritten rules that employees follow regarding remote work like an informal expectation to check in with their team every morning via a messaging platform, a cultural norm that encourages employees to be available for communication after regular work hours, or an unspoken expectation that employees dress more professionally for video calls, reflecting the company's culture and values.

Sign:

Ethics: Understanding the difference between Capital-P and lowercase-p policies is crucial for ethical analysis. Formal policies (Capital-P) often undergo rigorous scrutiny and debate, incorporating ethical principles, whereas informal policies (lowercase-p) can reflect cultural values that may not align with formal ethical standards. For example, an organization may have a formal policy on data security but may face ethical dilemmas if employees frequently bypass these rules due to ingrained cultural practices.

Policy: Policymakers must consider both types of policies when designing effective regulations. Capital-P policies provide the structure needed to enforce standards, but understanding lowercase-p policies helps in assessing how these regulations will be implemented in real-world scenarios. For example, if a government enacts a new cybersecurity regulation (Capital-P), it should also consider existing organizational cultures and practices (lowercase-p) that could affect compliance and effectiveness.

9

balancing trade-offs

The dilemma that arises in the regulation of behavior - controlling which modalities are given the most influence and attempting to balance these

The process of weighing competing values, interests, or priorities when making decisions, particularly in complex scenarios where resources are limited or conflicting objectives exist. In the context of technology, this often involves assessing the pros and cons of different approaches to regulation, ethical considerations, and policy implementation, aiming to find an optimal solution that minimizes negative impacts while maximizing benefits.

Ex: A pertinent example of balancing trade-offs can be seen in the privacy rights of individuals versus the interests of private companies in data collection.

Privacy Rights: Individuals have a fundamental right to privacy (in the US, the Fourth Amendment protects against unreasonable government searches). Laws such as the General Data Protection Regulation (GDPR) in the European Union emphasize protecting consumers' data privacy by requiring explicit consent for data collection and providing individuals with the right to access and delete their information.

Interests of Private Companies: On the other hand, tech companies often argue that the ability to collect and analyze user data is crucial for their business models, allowing them to tailor services, improve user experience, and drive innovation. For example, companies like Facebook and Google rely on targeted advertising driven by extensive data collection to generate revenue, arguing that this business model supports free services for users.

Sign:

Ethics: The ethical implications of this trade-off are significant, as it involves balancing individual rights against the business interests of corporations. Ethical frameworks can guide decisions by emphasizing the importance of informed consent, the right to privacy, and the need for corporate responsibility in data handling practices.

Law: Legal frameworks must navigate these competing interests, creating laws that uphold privacy rights while allowing for reasonable business operations. For example, laws like the California Consumer Privacy Act (CCPA) strive to give consumers control over their data while considering the operational realities of businesses.

Policy: Policymakers are tasked with finding solutions that respect individual privacy without stifling innovation. This may involve engaging stakeholders—including consumers, advocacy groups, and businesses—in dialogue to develop balanced policies. For example, establishing clear guidelines for data usage that protect consumer rights while allowing for legitimate business interests can help ensure a fair compromise.

10

glitch

Ruha Benjamin - Glitches are symptoms of systemic issues. Two ways to think about technical and social glitches: The glitch is diagnostic of deeper, systemic social issues OR Technology has unique ways to amplify bias and channel it into harm (developers should be held responsible).

Ex: Ruha Benjamin Malcolm X example

A Twitter user posted about Google maps reading out Malcolm X Boulevard as “Malcolm Ten Boulevard”.

Diagnostic of systemic social issues like racism and limited recognition of Black leaders. Ironically, this problem of misrecognition actually reflects a solution to a difficult coding challenge. A computer's ability to parse Roman numerals, interpreting an "X" as "ten," was a hard-won design achievement. That is, from a strictly technical standpoint, "Malcolm Ten Boulevard" would garner cheers.

This illustrates how innovations reflect the priorities and concerns of those who frame the problems to be solved, and how such solutions may reinforce forms of social dismissal, regardless of the intentions of individual programmers.
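
To make the mechanism concrete, here is a hypothetical sketch (not Google Maps' actual code) of a text-to-speech pre-processor that expands Roman numerals in street names. Nothing in the logic encodes the cultural knowledge that the "X" in "Malcolm X" is a name, so the technically correct parse produces the socially dismissive output.

```python
import re

# Hypothetical sketch of a street-name pre-processor; not Google Maps' actual code.
ROMAN_WORDS = {"II": "Two", "III": "Three", "IV": "Four", "V": "Five",
               "IX": "Nine", "X": "Ten"}

def expand_roman_numerals(street_name: str) -> str:
    """Replace standalone Roman numerals in a street name with number words."""
    pattern = r"\b(" + "|".join(sorted(ROMAN_WORDS, key=len, reverse=True)) + r")\b"
    return re.sub(pattern, lambda m: ROMAN_WORDS[m.group(0)], street_name)

print(expand_roman_numerals("John Paul II Plaza"))   # "John Paul Two Plaza": roughly the intended behavior
print(expand_roman_numerals("Malcolm X Boulevard"))  # "Malcolm Ten Boulevard": the glitch
```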

Sign:

Ethics: Glitches highlight the ethical implications of relying on biased technology. Benjamin emphasizes that technology is not neutral; it reflects the values and biases of its creators. Understanding glitches forces ethical considerations into the design and deployment of technologies, urging developers and policymakers to address systemic biases to promote fairness and equity.

Law: The existence of glitches in technology raises legal questions about accountability and liability. For instance, if facial recognition technology leads to wrongful arrests or discrimination, it challenges the legal frameworks surrounding privacy, civil rights, and the use of technology in law enforcement. This prompts a reevaluation of existing laws to ensure they adequately protect against the harms caused by biased technologies.

Policy: Policymakers must grapple with the implications of these glitches when considering regulations around technology use. Recognizing that technology can perpetuate social inequities, policies may need to focus on enforcing stricter standards for the testing and deployment of algorithms, especially those used in sensitive areas like law enforcement, employment, and healthcare. Policies that prioritize transparency, accountability, and public input can help mitigate the adverse effects of technological glitches.

11

noticing technology through breakdown

People view technology as "infrastructure" and only notice it when it breaks.

Winner: judgments on tech are made on narrow grounds; only later does the broader significance become clear.

The process of becoming aware of and critically examining the underlying structures, assumptions, and social implications of technology when it fails to function as intended. Ruha Benjamin emphasizes that these breakdowns or "glitches" can reveal the biases and inequities that are often obscured during normal operations. Instead of being seen merely as failures, these breakdowns can prompt a deeper investigation.

Ex: One significant example Benjamin explores is predictive policing software, which is designed to analyze crime data to forecast where crimes are likely to occur and which individuals might be involved.

Benjamin points out that these algorithms often rely on historical crime data, which can reflect and reinforce systemic biases present in law enforcement practices. For instance, if the data is primarily drawn from areas with high levels of police activity—often low-income neighborhoods with predominantly Black or minority populations—the algorithm can perpetuate and amplify existing racial biases. This breakdown is evident when predictive policing systems lead to increased surveillance and policing in these communities, further entrenching cycles of over-policing and criminalization.

Sign:

Noticing technology through breakdowns in policing software reveals the complexities and challenges of integrating algorithmic systems into law enforcement. Ruha Benjamin emphasizes that these breakdowns are critical moments of reflection that expose the biases and inequities embedded in technology. By critically examining these failures, stakeholders can advocate for more equitable policing practices and push for the development of technologies that do not perpetuate systemic injustices. This understanding fosters a more informed and nuanced dialogue about the role of technology in policing and the need for ethical considerations in its application, ultimately contributing to a more just society.

12

“law lag” narrative

Claim that the law moves too slowly to keep up with the fast pace of tech. Often relies on a perception of regulators as ill-informed.

Ex:

The Congressional hearing with the CEO of TikTok, in which he was asked "crazy" questions like "can TikTok access the home Wi-Fi?". The framing of these questions as "crazy" in public news articles adds to the narrative that policymakers are ill-informed and cannot understand the advancements of technology.

Sign:

Tech-deterministic and fatalist; empowers technologists; dismisses existing social norms and legal obligations.

Policy: This lag can result in a lack of accountability for the government, leaving individuals vulnerable to harm. Ignores state investment in innovation and involvement in promoting commercialization. Calls for policymakers to "finally catch up" prevent deeper debate on what intervention is desirable.

It keeps us from making collective decisions about what public wellbeing is and how we should pursue it. If instead we were to drop the law-lag narrative, we could direct our attention to policymakers’ and regulators’ presumed mandate to dedicate significant public resources toward unfettered innovation in the name of the public good.

13

the “privacy paradox”

People say they care about privacy but don’t do anything about it

Ex: Facebook Cambridge

Beginning in March 2018, news reports emerged about the unauthorized use of Facebook data by the political marketing firm Cambridge Analytica during the 2016 US Presidential election and the British referendum regarding withdrawal from the European Union in the same year. Although use of the acquired data for political campaigning violated Facebook’s policies, critics pointed to the social network site’s lenient rules governing data collection by third party apps as having enabled the misuse. These reports were followed by a slew of calls for individuals to delete Facebook. While some news reports highlighted anecdotal evidence that individuals were closing their accounts, others described instances where people had decided to maintain their presence on Facebook despite growing privacy concerns.

Sign:

The privacy paradox is crucial to understand in discussions of information ethics, law, and policy because it highlights the gap between user intentions and actions. It challenges assumptions about consumer behavior and raises questions about whether current models of data protection are sufficient. To create more effective privacy protections, stakeholders must acknowledge this paradox and work toward solutions that do not rely solely on individual behavior, ensuring that privacy is safeguarded through design, regulation, and corporate responsibility. This understanding is essential for developing policies that protect users without placing the burden entirely on them to navigate complex digital environments.

14

digital resignation

Draper & Turow

the condition produced when people desire to control the information digital entities have about them but feel unable to do so. As a result, they resign themselves to participating in digital platforms and services, accepting the trade-off of sacrificing privacy because they feel it’s impossible to opt out of modern digital life.

Ex: Draper and Turow Supermarket Study

Study finds that a large proportion of Americans—43%—say they would agree to let supermarkets collect data about them despite indications elsewhere in the survey that they disagree with consumer surveillance. The report also finds that knowledge of marketplace realities does not neatly correlate with support for or rejection of consumer tracking. Moreover, the more that Americans know about the laws and practices of digital marketing, the more likely they are to be resigned.

Sign:

Ethics: This form of digital resignation highlights an ethical dilemma in the current digital landscape, where individuals’ consent to data collection is not truly voluntary. Even if users click "agree" to privacy policies or accept cookies, their consent is often given out of resignation, not genuine willingness. This suggests that the ethical standards for how companies collect and use data need to be reexamined, with a focus on minimizing harm and creating real choices for users.

Policy: Policymakers must recognize that digital resignation undermines the notion of informed consent in data collection practices. Addressing this requires policies that create stronger default privacy protections, limit the scope of data tracking, and make opting out of tracking easier and more practical for users. Policies could also promote greater transparency in advertising ecosystems, ensuring users understand how their data is used and giving them meaningful ways to opt out.

15

government vs commercial surveillance

Government surveillance refers to the monitoring, collection, and analysis of individuals' data by state actors, often for purposes of national security, law enforcement, or public safety.

Commercial surveillance, on the other hand, involves private companies tracking, collecting, and analyzing user data for commercial purposes, often to tailor advertisements, improve services, or sell data to third parties.

Ex: NSA's PRISM Program: In 2013, whistleblower Edward Snowden revealed the existence of the PRISM program, which allowed the U.S. National Security Agency (NSA) to access data from major tech companies (like Google, Facebook, and Apple) for intelligence gathering. The program enabled mass surveillance of internet communications without individual warrants, sparking global debates on privacy and the role of government surveillance in counter-terrorism.

Facebook and Targeted Ads: Facebook collects vast amounts of personal data, including user activity, preferences, and behaviors, which it uses to sell targeted advertising to companies. This commercial surveillance has raised concerns about the manipulation of user behavior, as well as potential misuse of data, as was seen in the Cambridge Analytica scandal, where user data was exploited to influence political campaigns.

Sign:

Ethics: Both government and commercial surveillance raise serious ethical concerns about privacy, consent, and autonomy. While government surveillance is often justified on the grounds of national security, it can lead to invasive practices that infringe on civil liberties, such as unlawful searches or disproportionate monitoring of marginalized groups. Commercial surveillance is ethically troubling because users often do not fully understand how their data is being collected or used, leading to issues of informed consent and the commodification of personal information.

Policy: Policymakers face a delicate balancing act between security and privacy. For government surveillance, policies need to ensure that national security interests are pursued in a way that respects human rights and civil liberties. For commercial surveillance, there is a need for stronger regulations that protect users from data exploitation, limit the extent of data collection, and increase transparency and accountability for how data is used. Both areas require policy reforms that consider the long-term impacts on individual freedoms and societal trust in both governments and corporations.

16

the 4th Amendment

right against unreasonable searches and seizures (constrains law enforcement surveillance)

Tested, negotiated, and defined through Supreme Court cases often prompted by new technologies (ex: Katz v. US, Riley v. California, Olmstead v. US, Smith v. Maryland, Carpenter v. US)

Ex: Katz v. United States (1967): This landmark Supreme Court case expanded the interpretation of the Fourth Amendment to include privacy in modern communications. The FBI had wiretapped a public phone booth to record Charles Katz's conversations without a warrant. The Court ruled that this violated the Fourth Amendment because Katz had a reasonable expectation of privacy in his phone calls, even though he was in a public space. This case established the principle that the Fourth Amendment protects people, not just places.

Sign:

Law: The Fourth Amendment plays a crucial role in shaping legal standards for privacy and government surveillance. Courts regularly interpret the Fourth Amendment in light of new technologies, and it has become central to debates over electronic surveillance, such as the warrantless collection of metadata or email monitoring by agencies like the NSA.

Policy: Policymakers must consider the Fourth Amendment when developing laws related to law enforcement practices, surveillance, and national security. For example, the Fourth Amendment has been at the heart of discussions surrounding legislation like the Patriot Act, which expanded government surveillance powers in response to terrorism. Balancing the need for national security with the protection of individual rights under the Fourth Amendment is a constant challenge in policy-making, particularly as digital surveillance grows more widespread.

17

sectoral vs omnibus approach

Sectoral approach - laws regulate particular sectors of human activity in a way that is both piecemeal and more specific where it applies

Omnibus approach - In contrast, the omnibus approach creates a comprehensive, overarching law that applies to all industries and data types. It provides broad, uniform privacy protections across sectors and is not limited to specific industries.

Ex:

US is sectoral - different industries regulated by different laws

  • Health Insurance Portability and Accountability Act (HIPAA): Regulates health data privacy.

  • Gramm-Leach-Bliley Act (GLBA): Regulates financial institutions' use of consumer data.

  • Children's Online Privacy Protection Act (COPPA): Protects the online privacy of children under 13.

EU is omnibus

  • GDPR (General Data Protection Regulation)

  • applies to all sectors and industries, offering a comprehensive legal framework for data protection that covers all personal data, regardless of the industry

Sign:

Understanding the differences between the sectoral and omnibus approaches is essential for effective policymaking in the digital age. The sectoral approach can be advantageous for addressing specific industry needs, but it risks creating a patchwork of protections that are difficult to navigate and enforce. The omnibus approach, while more comprehensive, can be harder to implement uniformly across industries with differing needs. As digital data becomes increasingly central to both government and corporate practices, the choice between these approaches has significant implications for individual privacy, regulatory enforcement, and corporate compliance.

Effective privacy laws and policies must carefully consider which model better balances the need for comprehensive data protection with the flexibility to address specific industry challenges.

18

notice and consent

Requires consent from the user to be "freely given, specific, informed and unambiguous". This means that companies must inform users how their data will be used and must obtain their explicit permission.

Ex: Facebook and Cambridge Analytica Scandal (2018)

In the Facebook-Cambridge Analytica scandal, Facebook provided notice to its users that third-party apps could access their data, including information about their friends, through Facebook's API. When users signed up for certain apps (such as a quiz or a game), they would see a notification or message explaining that the app would collect certain data from their profile, and by clicking "accept" or "allow," they were giving consent for that data sharing.

However, many users were unaware of the extent of data being accessed. Cambridge Analytica, a political consulting firm, exploited this system by using an app to collect personal information from millions of Facebook users for targeted political advertising. While Facebook technically provided notice and obtained consent through its user interface, the scandal revealed that users often did not understand the scope of what they were agreeing to, especially the extent to which their friends' data could also be accessed without explicit consent from those individuals.

Sign:

Ethics: Notice and consent raises ethical issues around informed consent and the fairness of data practices. While the framework is intended to respect user autonomy, it often falls short because users may not have the time, ability, or knowledge to fully understand complex privacy policies. This creates an ethical challenge regarding transparency and the balance of power between users and organizations.

Law: Notice and consent is a cornerstone of many legal frameworks for data privacy, such as the GDPR in the European Union and the California Consumer Privacy Act (CCPA). These laws require companies to provide clear notice and obtain consent for data collection, especially when dealing with sensitive personal information.

19

GDPR

General Data Protection Regulation - Europe’s omnibus data privacy and security law, applies to all sectors and industries, offering a comprehensive legal framework for data protection that covers all personal data, regardless of the industry.

Ex: Meta was fined about $400 million by Irish regulators under the GDPR.

Regulators said that Meta violated the GDPR's data protection rules by forcing users to accept personalized ads on Facebook and Instagram: access to the platforms was made conditional on agreeing to the terms of service, which essentially meant users could not use the services without allowing their data to be used for targeted advertising.

Sign:

Law: Legally, the GDPR has had a significant impact, not only in the EU but globally, as many organizations operating in international markets must comply with its requirements. It has set a new standard for data protection laws worldwide, influencing similar regulations in other countries, like the California Consumer Privacy Act (CCPA). The GDPR holds organizations accountable for protecting personal data, introducing the potential for substantial fines and legal consequences for non-compliance, which motivates companies to prioritize data protection.

Policy: For policymakers, the GDPR represents a shift toward more robust privacy regulation, serving as a model for how governments can protect individual rights in the digital age. The regulation encourages data minimization (collecting only the necessary data), purpose limitation (using data only for specified purposes), and accountability in data processing practices. GDPR has sparked broader conversations about data sovereignty, the global nature of data flows, and the responsibilities of tech companies toward consumers worldwide.

20

contextual integrity

Concept proposed by Helen Nissenbaum

Asserts that privacy is not merely about controlling personal information but about respecting the norms and expectations surrounding the flow of information in specific contexts. According to Nissenbaum, individuals have different expectations of privacy based on the context of the information exchange, and breaches of privacy occur when information flows outside of these established contexts without appropriate norms being respected.

Ex: Consider the example of sharing personal health information.

In a medical setting, patients typically expect that their health data will be shared with healthcare providers for treatment purposes. This is an example of contextual integrity, as patients consent to share this sensitive information under specific norms and expectations of privacy. However, if that same information is shared with a third-party marketing company without the patient's knowledge or consent, this would violate the principle of contextual integrity, as the flow of information breaches the expectations set in the healthcare context.
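
A minimal sketch of how a flow of information can be checked against context-relative norms. The context, parties, and norms below are invented for illustration, and formal treatments of contextual integrity are more elaborate: a flow is described by its sender, recipient, information type, and transmission principle, and it is appropriate only if it matches an entrenched norm of the context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    info_type: str
    transmission_principle: str   # e.g. "for treatment", "for advertising"

# Hypothetical norms: the flows the healthcare context treats as appropriate.
HEALTHCARE_NORMS = {
    ("patient", "physician", "health record", "for treatment"),
    ("physician", "specialist", "health record", "for treatment"),
}

def respects_contextual_integrity(flow: Flow, norms: set) -> bool:
    """A flow is appropriate only if it matches an entrenched norm of its context."""
    return (flow.sender, flow.recipient, flow.info_type,
            flow.transmission_principle) in norms

ok = Flow("patient", "physician", "health record", "for treatment")
bad = Flow("physician", "marketing firm", "health record", "for advertising")
print(respects_contextual_integrity(ok, HEALTHCARE_NORMS))   # True
print(respects_contextual_integrity(bad, HEALTHCARE_NORMS))  # False: this flow violates contextual integrity
```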

Sign:

Ethics: Contextual integrity emphasizes the importance of understanding the social norms governing data sharing in different contexts. It highlights the ethical responsibility of organizations to respect users' expectations about how their information will be used. By focusing on context rather than solely on individual consent, this approach encourages a more nuanced understanding of privacy that considers the relational aspects of information sharing and the implications for trust and social norms.

21

the “going dark” debate

The inability of law enforcement to access encrypted data and communications. These agencies emphasize the need to access certain data as part of criminal investigations, in order to protect more vulnerable populations (like children or victims of human trafficking). Law enforcement fears that as encryption becomes more widespread, digital data will "go dark" and impede the success of these investigations.

Concern: what if so many devices and services are encrypted that law enforcement/intelligence agencies can’t access important information to fight crime, even with a warrant?

Ex:

A notable example of the "going dark" debate occurred during the Apple-FBI standoff in 2016. After the San Bernardino terrorist attack, the FBI sought access to the iPhone of one of the attackers. Apple refused to create a backdoor that would allow the FBI to bypass the phone's encryption, arguing that doing so would compromise the security and privacy of all iPhone users. This incident ignited a nationwide debate about the balance between encryption for personal privacy and the need for law enforcement to access data for national security purposes.

Sign:

Ethics: The "going dark" debate raises profound ethical questions about the right to privacy versus the need for security. On one hand, strong encryption is vital for protecting individuals' privacy and securing sensitive data from cyberattacks. On the other hand, law enforcement agencies argue that access to encrypted data is essential for public safety. Ethical considerations revolve around how to strike a balance between these competing interests and whether individuals should have an absolute right to privacy in their digital communications.

Law: From a legal perspective, the debate touches on issues such as the interpretation of the Fourth Amendment (protection against unreasonable searches and seizures) and how it applies to digital data. Courts have been asked to weigh the rights of individuals to keep their communications private against the government's need to access information for investigative purposes. The implications of this debate may influence future legislation and legal precedents regarding data privacy, encryption, and law enforcement access.

Policy: Policymakers are faced with the challenge of formulating laws that account for the technological realities of encryption while addressing the concerns of law enforcement. Some propose solutions like lawful access mechanisms or backdoors that would allow authorities to access encrypted data when necessary. However, these proposals often face significant opposition from privacy advocates, who argue that creating backdoors undermines the very security that encryption provides. The "going dark" debate thus influences discussions about cybersecurity policy, data protection laws, and national security measures.

22

escrowed encryption

A proposed method of encryption that involves a third party—often a government or a trusted authority—holding a copy of the encryption keys used to secure data. The idea is that while individuals or organizations maintain control of their encrypted data, the escrowed keys can be accessed by the third party under specific circumstances, such as in response to a legal warrant or for national security purposes.

Ex: During the 1990s "Crypto Wars", the DOJ was the trusted third party and provided warrant-based access to law enforcement if needed.

Clipper Chip was proposed during the Crypto Wars:

Hardware encryption device developed by the U.S. government that incorporated a system of escrowed encryption. Under this system, each Clipper Chip had a unique key that was split into two parts: one part was held by the user, while the other was kept in escrow by the government. Law enforcement could access the escrowed keys if necessary, but this initiative faced significant backlash from privacy advocates and technologists, who were concerned about potential government overreach and the inherent vulnerabilities in having a centralized key storage system.
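
As a toy illustration of the escrow idea described above (not the actual Clipper Chip design, which used the classified Skipjack cipher and a Law Enforcement Access Field), the sketch below splits a key into two XOR shares, one held by the user and one deposited with an escrow agent. Neither share alone reveals the key; combining them reconstructs it, which is what warrant-based access would rely on.

```python
import secrets

def split_key(key: bytes) -> tuple:
    """2-of-2 secret split: each share alone is useless; XOR of both recovers the key."""
    user_share = secrets.token_bytes(len(key))
    escrow_share = bytes(k ^ u for k, u in zip(key, user_share))
    return user_share, escrow_share

def reconstruct_key(user_share: bytes, escrow_share: bytes) -> bytes:
    return bytes(u ^ e for u, e in zip(user_share, escrow_share))

key = secrets.token_bytes(32)              # the device's encryption key
user_share, escrow_share = split_key(key)  # escrow_share is deposited with the escrow agent

# Later, under a warrant, the escrowed share plus the user's share rebuild the key.
assert reconstruct_key(user_share, escrow_share) == key
```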

Sign:

Ethics: Escrowed encryption raises ethical dilemmas regarding privacy and trust. While the intention is to provide law enforcement with necessary access to encrypted communications, many individuals fear that this could lead to abuse of power and violations of privacy rights. Ethical questions arise around whether it is acceptable to create a mechanism that could potentially weaken the overall security of encrypted data, exposing individuals to greater risks from malicious actors.

Policy: Policymakers face the challenge of balancing security interests with the fundamental rights of individuals. While escrowed encryption has been proposed as a solution to the "going dark" debate, it raises significant concerns about the feasibility and reliability of such systems. There is also skepticism about whether escrowed encryption could lead to a "backdoor" for government surveillance, which many privacy advocates oppose. Policymakers must carefully consider how to address encryption in a way that protects both public safety and individual privacy.

23

end-to-end encryption (E2EE)

Encryption occurs on the sender's and recipient's devices. Private keys to decrypt messages are held by users, not companies. This means even the platform/service can't access the data. For law enforcement or anyone else to break the security, they would have to do so on the sender's or receiver's device; they can't just send a retention request to the platform. This is hard to do at a larger scale. All 3 models we are discussing can be categorized as E2EE (but some consider only the fully asymmetric model to be true E2EE).

Ex: WhatsApp

A widely known example of E2EE is the messaging application WhatsApp. When users send messages on WhatsApp, those messages are encrypted on the sender’s device and can only be decrypted by the intended recipient's device. Even WhatsApp itself cannot access the content of the messages, as the encryption keys are stored solely on users' devices. This feature protects user privacy and ensures that even if the data is intercepted while in transit, it remains unreadable to unauthorized parties.
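
A minimal sketch of the asymmetric E2EE model, assuming the PyNaCl library is available (real messengers like WhatsApp use the more elaborate Signal protocol): each party generates a key pair on their own device, only public keys are exchanged, and the service only ever relays ciphertext it cannot decrypt.

```python
# Sketch of the asymmetric E2EE model with PyNaCl (pip install pynacl).
# Illustrative only; not how WhatsApp or any specific messenger is implemented.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only public keys are shared (e.g., relayed through the messaging service).
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts for Bob using her private key and his public key.
ciphertext = Box(alice_private, bob_public).encrypt(b"meet at noon")

# The platform only ever relays `ciphertext`; it holds no key that can decrypt it.
plaintext = Box(bob_private, alice_public).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```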

Sign:

Ethics: E2EE raises important ethical considerations regarding user privacy, security, and trust. It empowers users by ensuring their communications are private and protected from surveillance, promoting the ethical principle of autonomy in the digital realm. However, it also poses challenges for law enforcement, as it can hinder the ability to investigate and prevent criminal activity. Balancing these competing ethical considerations is crucial for creating policies that respect user privacy while addressing public safety concerns.

Policy: Some governments have proposed regulations that require technology companies to provide backdoor access to encrypted communications for law enforcement, which raises significant debate over the security risks associated with such backdoors. The development of balanced policies that protect both privacy and security is a critical challenge for policymakers in the digital age.

24

Apple v. FBI

  • 2015 mass shooting at community center in San Bernardino

  • Law enforcement obtained iPhone from one of shooters

    • It was locked and encrypted

    • They couldn’t brute force the password because Apple wipes the phone after 10 incorrect attempts

  • They suspected terrorism and wanted to continue the investigation; the phone was "critical" to the investigation

  • FBI got court order to make Apple write code to override the feature and break encryption (a “backdoor”/a “golden key”)

    • “Just this one investigation”, not trying to set a precedent

  • Apple “cannot betray promise to customer”

    • Challenges it in court

    • Argues that encryption is key to protecting customers’ privacy

    • A backdoor is “too dangerous to create”

  • Conclusion (not really): FBI withdrew its legal request because it hacked the phone without Apple’s help

  • Advocates claimed there was nothing useful on the phone anyway

Sign:

Law: Legally, the case underscored the challenges that courts and lawmakers face in navigating issues of digital privacy and security. It prompted discussions about the limits of government power regarding access to encrypted data and set important precedents regarding how future cases might be handled. The case also influenced ongoing discussions about encryption regulations and the extent of corporate responsibility to aid law enforcement.

Policy: The Apple v. FBI case had significant implications for policy discussions surrounding encryption technologies. It sparked debates about the need for potential backdoors in encryption for law enforcement purposes and raised concerns among privacy advocates about the risks associated with creating such access points. Policymakers were prompted to consider how to balance national security needs with the rights of individuals to maintain private communications in an increasingly digital world.
