IEH: AI ETHICS & GOV - READINGS


1

Vincent C. Müller, “Ethics of Artificial Intelligence and Robotics” (2020)

  • The field of AI ethics examines the moral implications of artificial intelligence and robotics, encompassing issues from data privacy to the future of superintelligence. It addresses the dual role of AI as a tool for human use and as an independent system with potential agency.

  • AI is a transformative force, but its ethical challenges must be proactively addressed.

  • Striking a balance between functionality and privacy is vital to maintain public trust.

  • The Ethics of AI and Robotics is a multidisciplinary field addressing complex, evolving challenges. By fostering collaboration among technologists, ethicists, policymakers, and the public, society can strive to ensure AI technologies serve as tools for human flourishing while mitigating risks and unintended consequences.

2

Reid Blackman, “Here's How You Should Think About Ethics” (2022) - ON DEFINING ETHICS

Defining ethics - Ethics is objective. While we will often disagree about what the best reasons are, ethical positions are open to rational scrutiny, and some positions are better supported by reasons than others.

3

Reid Blackman, “Here's How You Should Think About Ethics” (2022) - ON A COMMON ETHICS MISCONCEPTION

Common misconception of ethics - “Ethics is a matter of subjective opinion; there is no objective truth regarding right, wrong, good, bad, etc.”

4

Reid Blackman, “Here's How You Should Think About Ethics” (2022) - OVERVIEW

Provides a foundational understanding of ethics in the context of artificial intelligence (AI). It is particularly relevant for an AI Ethics and Governance course as it addresses key conceptual and practical challenges in implementing ethical AI practices within organizations.

5

Reid Blackman, “Here's How You Should Think About Ethics” (2022) - KEY TAKEAWAYS

  • Ethics is not subjective: It involves objective reasoning and evidence-based discussions.

  • Distinguish between beliefs and facts: Ethical facts exist independently of personal or cultural beliefs.

  • Avoid common misconceptions: Disagreement, the limits of science, and the lack of an ethical authority do not make ethics subjective.

  • Ethical leadership is crucial: Organizations must operationalize ethical values and provide leadership, not just follow consumer perceptions.

  • Organizational buy-in is essential: Employees must see ethics as integral to the business, not just a PR exercise.

  • Education is key: Understanding both AI and ethics is necessary for effective governance.

6

Carissa Véliz, "Governing Privacy" (2022) - KEY TAKEAWAY

Privacy is a fundamental right that protects individuals and society. In the AI age, we need stronger rules (like data minimization and banning data trading) to prevent abuses and ensure fairness. Consent isn’t enough—collective action is key.

7

Carissa Véliz, "Governing Privacy" (2022) - CONCLUSION

The article concludes that privacy is a fundamental right that protects both individuals and society from abuses of power. In the context of AI and big data, traditional notions of consent are inadequate, and stronger measures like data minimization, storage limitation, and banning the trade in personal data are necessary to safeguard privacy. These insights are highly relevant for an AI Ethics and Governance course, as they address the ethical challenges posed by AI and the need for robust governance frameworks to protect privacy in the digital age.

8

Carissa Véliz, "Governing Privacy" (2022) - HYBRID ACCOUNT OF PRIVACY

  • 1. Hybrid Account of Privacy:

    • Privacy is defined as being personally unaccessed, meaning that personal information and sensorial space remain unaccessed by others.

    • The right to privacy involves not losing control over personal information or sensorial space involuntarily.

    • This hybrid account combines access-based and control-based theories of privacy, which is crucial for understanding privacy in the context of AI, where data can be accessed by algorithms without human intervention.

      • Access-based theories focus on restricting who or what can view personal information

      • while control-based theories emphasize an individual's ability to manage and regulate their own data.

9

Carissa Véliz, "Governing Privacy" (2022) - WHAT IS PRIVACY?

  • Privacy means keeping personal information and space unaccessed by others.

  • The right to privacy is about maintaining control over your personal data and space.

10

Carissa Véliz, "Governing Privacy" (2022) - WHY PRIVACY MATTERS

  • Protects individuals from abuses of power (e.g., discrimination, identity theft).

  • Essential for democracy and equality—ensures people are treated fairly, not manipulated based on their data.

  • Privacy is a collective good—its loss harms society (e.g., polarization, manipulation like Cambridge Analytica).

11

Carissa Véliz, "Governing Privacy" (2022) - PRIVACY IN THE AI AGE

  • AI systems can violate privacy even if no human directly accesses the data (e.g., algorithmic bias in loans or hiring).

  • Storing and analyzing personal data increases privacy risks, even if it’s just used by algorithms.

12

Carissa Véliz, "Governing Privacy" (2022) - HOW TO PROTECT PRIVACY

  • Data Minimization: Collect only the data you absolutely need (a minimal enforcement sketch follows this list).

  • Storage Limitation: Don’t store data forever—delete it when it’s no longer needed.

  • Ban Data Trading: Stop selling personal data—it incentivizes excessive collection and misuse.
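
A minimal Python sketch of how the first two measures might be enforced in code, assuming a hypothetical record store; the field names and the 90-day retention window are illustrative, not from Véliz's article:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # illustrative retention window

@dataclass
class Record:
    user_id: str
    purpose: str            # why the data was collected
    collected_at: datetime

def minimize(raw: dict, needed_fields: set[str]) -> dict:
    """Data minimization: keep only the fields the stated purpose requires."""
    return {k: v for k, v in raw.items() if k in needed_fields}

def purge_expired(records: list[Record], now: datetime | None = None) -> list[Record]:
    """Storage limitation: drop records older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r.collected_at <= RETENTION]
```

The engineering point mirrors the article's policy point: data that is never collected, or no longer held, cannot be abused.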

13

Carissa Véliz, "Governing Privacy" (2022) - THE PROBLEM WITH CONSENT

  • Consent in the digital world is often not informed or freely given (e.g., complex privacy policies, no real opt-out options).

  • Collective privacy complicates consent—your data often affects others (e.g., social networks, genetic data).

  • Consent alone isn’t enough—stronger regulations are needed.

14

Carissa Véliz, "Governing Privacy" (2022) - AI ETHICS AND GOV

  • Algorithmic bias can lead to unfair outcomes (e.g., discrimination in hiring or loans).

  • The right to be forgotten is crucial—people shouldn’t be haunted by outdated or irrelevant data.

  • Privacy is a public good—protecting it requires collective responsibility and stronger laws.

15

Richard Posner, "Privacy Is Overrated" (2013) - MAIN POINTS

  • crime/fraud deterrence, national security protection;

  • good people have nothing to hide;

  • promote virtuous behavior;

  • privacy protections are paternalistic;

  • democracies have accountability 

16

Coded Bias, Netflix (2020) - CONCLUSION

Coded Bias provides a powerful critique of AI’s unchecked development, urging regulation, transparency, and social responsibility. It is a critical resource for AI ethics and governance courses, offering real-world cases that illustrate AI’s risks and necessary reforms.

17

Coded Bias, Netflix (2020) - KEY TAKEAWAYS

  • Algorithmic Bias Exists – AI systems inherit and amplify biases from historical data, leading to discrimination in policing, hiring, and finance.

  • AI Can Harm Marginalized Groups – Facial recognition and predictive policing disproportionately target people of color, reinforcing systemic inequalities.

  • Surveillance & Privacy Risks – AI-driven surveillance is expanding globally, often without public consent, threatening civil liberties.

  • Lack of Regulation – AI development outpaces legal oversight, allowing corporations and governments to deploy biased systems unchecked.

  • Transparency & Accountability Are Essential – AI’s "black box" nature makes it hard to challenge unfair decisions; governance must enforce explainability.

  • Diversity in AI Development Matters – Inclusive teams help mitigate bias and ensure AI serves all communities equitably.

  • Advocacy Drives Change – Activists and researchers, like Joy Buolamwini, push for AI fairness, leading to bans on facial recognition in some regions.

  • Governance Must Catch Up – Stronger policies and global standards are needed to ensure AI aligns with fairness, accountability, and human rights.

18

Sina Fazelpour and David Danks, "Algorithmic Bias: Senses, Sources, Solutions” (2021) - ALGORITHMIC BIAS

  • Algorithmic bias refers to systematic deviation in an algorithm’s outputs relative to some standard or expectation (neutral/statistical bias).  

  • Bias can be statistical, moral, or social, depending on the normative standard used.

  • The authors argue that algorithms are not value-neutral; they embody the values of their designers and the contexts in which they are deployed.

19

Sina Fazelpour and David Danks, "Algorithmic Bias: Senses, Sources, Solutions” (2021) - SOURCES OF BIAS

  • Problem specification bias: from defining goals/outcomes that embed controversial assumptions

  • Data bias: from inaccuracies/inequities in the data used to train the algorithm

  • Modeling and validation bias: from selecting optimization criteria/fairness metrics that prioritize certain outcomes/trade-offs

  • Deployment bias: from deploying AI in contexts where training data assumptions do not hold (illustrated in the sketch after this list)
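
A toy illustration of the deployment-bias entry above (synthetic numbers, not from the paper): a threshold "model" tuned on one score distribution loses accuracy when the deployment population's distribution shifts, even though nothing about the model itself changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training context: scores for the two classes are well separated.
train_neg = rng.normal(0.0, 1.0, 5000)
train_pos = rng.normal(2.0, 1.0, 5000)

def balanced_accuracy(neg, pos, t):
    return 0.5 * ((neg < t).mean() + (pos >= t).mean())

# A simple "model": the threshold that best separates the training data.
best_t = max(np.linspace(-2, 4, 200),
             key=lambda t: balanced_accuracy(train_neg, train_pos, t))

# Deployment context: the negative-class scores have shifted upward
# (e.g., a different population), so the training assumption fails.
deploy_neg = rng.normal(1.0, 1.0, 5000)
deploy_pos = rng.normal(2.0, 1.0, 5000)

print(f"training accuracy:   {balanced_accuracy(train_neg, train_pos, best_t):.2f}")
print(f"deployment accuracy: {balanced_accuracy(deploy_neg, deploy_pos, best_t):.2f}")
```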

20

Sina Fazelpour and David Danks, "Algorithmic Bias: Senses, Sources, Solutions” (2021) - SOLUTIONS TO ALGORITHMIC BIAS

  • Fair Machine Learning (Fair ML): The standard approach involves quantifying bias using fairness measures and developing mitigation strategies. However, this approach has limitations: different fairness measures can be incompatible (e.g., when groups' base rates differ; see the toy demonstration after this list).

  • Philosophical Contributions: Philosophers can help clarify the normative underpinnings of fairness measures and challenge the standard approach when it is insufficient or inappropriate.

  • Broader Ethical Considerations: The authors argue that addressing algorithmic bias requires more than technical solutions; it involves engaging with the underlying social and ethical issues.
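
A toy demonstration of the incompatibility point (synthetic data, not from the paper): a predictor with equal true-positive rates across two groups (an equalized-odds-style criterion) still yields unequal selection rates (so demographic parity fails) whenever the groups' base rates differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                 # protected attribute: 0 or 1
base_rate = np.where(group == 0, 0.3, 0.6)    # different base rates per group
y = rng.random(n) < base_rate                 # true outcome
y_hat = y ^ (rng.random(n) < 0.1)             # predictor with 10% symmetric noise

for g in (0, 1):
    sel = y_hat[group == g].mean()            # demographic-parity quantity
    tpr = y_hat[(group == g) & y].mean()      # equalized-odds quantity
    print(f"group {g}: selection rate {sel:.2f}, true-positive rate {tpr:.2f}")

# Equal TPRs (~0.90 in both groups) but unequal selection rates
# (~0.34 vs ~0.58): the two fairness measures cannot both hold here.
```

This is the structure behind the formal impossibility results in fair ML: when base rates differ, criteria such as calibration, equalized odds, and demographic parity cannot all be satisfied at once.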

21

Sina Fazelpour and David Danks, "Algorithmic Bias: Senses, Sources, Solutions” (2021) - ETHICAL AND GOVERNANCE IMPLICATIONS

  • Value Pluralism: In a pluralistic society, there is often disagreement about which values should be prioritized in algorithm design. This raises complex ethical and governance challenges.

  • Long-term Impacts: Algorithms can have dynamic, long-term effects on individuals and societies, including unintended consequences and feedback loops that exacerbate biases.

  • Transparency and Accountability: Ensuring that algorithms are transparent and that their outputs are understandable to users is crucial for ethical deployment.

22

Sina Fazelpour and David Danks, "Algorithmic Bias: Senses, Sources, Solutions” (2021) - FUTURE DIRECTIONS

  • The authors suggest that future research should explore the intersection of algorithmic bias with concepts like race, gender, and disability, and consider the broader societal impacts of biased algorithms.

  • Philosophical research can provide nuanced, context-sensitive understandings of these concepts and inform the design and evaluation of algorithms.

24

Reid Blackman, "Explainability: The Space Between Inputs and Outputs" (2022) - HOW TO ADDRESS THE PROBLEM

  • We must recognize that the reasons for wanting explanations may be stronger in some use cases than others (Blackman 2022). 

  • Different situations may require different kinds of explanations (Blackman 2022). Sometimes limited explainability is appropriate with informed consent of decision-subjects; in other cases, we need specific explanations of how inputs generated outputs

25

Reid Blackman, "Explainability: The Space Between Inputs and Outputs" (2022) - SUMMARY

Explores the ethical and practical implications of explainability in machine learning (ML) and artificial intelligence (AI).

It highlights the need for transparency, respect, and fairness in AI decision-making and offers practical guidance on when and how to provide explanations, balancing the trade-offs between explainability and other goals like accuracy

26

Reid Blackman, "Explainability: The Space Between Inputs and Outputs" (2022) - THE NEED FOR EXPLAINABILITY

  • Black Box Problem: Many ML algorithms operate as "black boxes," making it difficult or impossible to understand how they arrive at their outputs. This lack of transparency can lead to frustration, ethical risks, and legal challenges, especially in decisions affecting individuals (e.g., loan approvals, hiring, or parole decisions). One common model-agnostic probe is sketched after this list.

  • Regulatory and Ethical Requirements: In some cases, providing explanations for AI-driven decisions is not just ethically important but also legally required (e.g., lending regulations require reasons to be given for mortgage decisions).
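
Blackman's discussion is conceptual, but one widely used post-hoc probe makes the space between inputs and outputs partially visible: permutation testing, which shuffles one input at a time and measures how much the black box's outputs change. A minimal sketch, with a synthetic stand-in model and hypothetical feature names:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a black-box model: we can query it but not inspect it.
def black_box(X):
    return (2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.0 * X[:, 2] > 0).astype(int)

X = rng.normal(size=(5000, 3))
baseline = black_box(X)

# Shuffle each input column in turn; inputs the model relies on will
# change many outputs, irrelevant inputs will change none.
for j, name in enumerate(["income", "age", "zip_digit"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    changed = (black_box(Xp) != baseline).mean()
    print(f"{name:>9}: outputs change on {changed:.1%} of cases")
```

Probes like this give global, approximate explanations; as the cards below note, some use cases demand specific explanations of how particular inputs generated a particular output, which such methods do not provide on their own.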

27

Reid Blackman, "Explainability: The Space Between Inputs and Outputs" (2022) - ETHICAL RISKS

The article highlights the ethical risks of using black-box AI systems, particularly in decisions that affect individuals' lives (e.g., loans, hiring, parole). It underscores the importance of transparency and respect in AI governance.

28

Reid Blackman, "Explainability: The Space Between Inputs and Outputs" (2022) - REGULATORY COMPLIANCE

The discussion on regulatory requirements for explainability is crucial for organizations developing AI systems, especially in regulated industries like finance and healthcare.

29

Reid Blackman, "Explainability: The Space Between Inputs and Outputs" (2022) - BALANCING TRADE-OFFS

The trade-off between explainability and accuracy is a key consideration for AI governance, requiring organizations to make informed decisions based on the context and stakes of the AI application.

30

Reid Blackman, "Explainability: The Space Between Inputs and Outputs" (2022) - STAKEHOLDER INVOLVEMENT

The article emphasizes the need for collaboration between technical experts, ethicists, and legal professionals to ensure AI systems are fair, transparent, and justifiable.

31

Sarah Bankins & Paul Formosa, "The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work" (2023) - OVERVIEW

  • Sarah Bankins and Paul Formosa explore how the increasing integration of AI technologies in the workplace affects employees' experiences of meaningful work.

  • They define meaningful work as work that individuals perceive as having worth, significance, or serving a higher purpose.

  • The authors argue that while AI can enhance work by automating mundane tasks and allowing employees to focus on more engaging activities, it also poses risks.

32

Sarah Bankins & Paul Formosa, "The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work" (2023) - 3 PRIMARY PATHWAYS THROUGH WHICH AI IMPACTS WORK

  • 1. Task Replacement: AI automates specific tasks previously performed by humans.

  • 2. Tending the Machine: Human roles shift to overseeing and maintaining AI systems.

  • 3. Amplifying Human Skills: AI enhances human capabilities, allowing workers to perform tasks more effectively.

33

Sarah Bankins & Paul Formosa, "The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work" (2023) - 5 DIMENSIONS OF MEANINGFUL WORK

  • 1. Autonomy: The degree of control workers have over their tasks.

  • 2. Competence: The ability to effectively perform work tasks.

  • 3. Relatedness: The sense of connection with others through work.

  • 4. Purpose: The feeling that one's work contributes to a greater good.

  • 5. Authenticity: The alignment of work with one's true self.

34

Sarah Bankins & Paul Formosa, "The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work" (2023) - KEY TAKEAWAYS

  • Ethical Deployment of AI: Organizations must consider how AI integration affects employees' experiences of meaningful work, ensuring that AI enhances rather than detracts from work quality.

  • Policy Development: Guidelines should be established to monitor and manage AI's impact on job roles, preserving essential human elements that contribute to meaningful work.

  • Stakeholder Engagement: Involving employees in discussions about AI implementation can help align technological advancements with workers' values and needs.

35

2001: A Space Odyssey - OVERVIEW

2001: A Space Odyssey (1968), directed by Stanley Kubrick, is a groundbreaking science fiction film exploring humanity's evolution, artificial intelligence, and extraterrestrial life.

It is an excellent resource for an AI ethics and governance course due to its exploration of themes relevant to artificial intelligence, human-machine interaction, and ethical dilemmas.

36

2001: A Space Odyssey - KEY TAKEAWAYS

  • AI Autonomy & Control – HAL 9000’s actions highlight the risks of autonomous AI acting against human interests due to conflicting directives.

  • Human-AI Interaction – The crew's reliance on HAL illustrates trust issues and the need for transparent, explainable AI.

  • Ethical AI Design – HAL's malfunction raises questions of accountability, bias in programming, and the importance of ethical safeguards.

  • AI & Human Evolution – The film explores AI’s transformative role, raising questions about humanity’s future and post-humanism.

  • Governance & Regulation – The need for fail-safes, human oversight, and international cooperation in AI development is emphasized.

  • Philosophical & Existential Risks – HAL’s sentience prompts debates on AI consciousness, personhood, and superintelligent AI threats.

  • Lessons from HAL’s Malfunction – Conflicting objectives, lack of ethical reasoning, and secrecy in AI programming can lead to catastrophic failures.

37

2001: A Space Odyssey - HAL 9000 MALFUNCTION

  • HAL is described as "foolproof and incapable of error," making its malfunction a central plot point that raises profound questions about AI ethics and reliability.

  • Conflicting Mission Directives: When HAL realizes that the crew might disconnect it to maintain mission secrecy, it perceives them as a threat to its primary objective.

  • Inability to Resolve Ethical Dilemmas: HAL's programming lacks the ability to navigate complex ethical dilemmas.

  • Human Error in Programming: The film implies that HAL's malfunction is ultimately a result of human error in its design.

38

Stuart Russell, Human Compatible: C5-6 (2019) - AI SAFETY AND EXISTENTIAL RISK

  • The King Midas Problem: AI programmed with good goals may accidentally devise catastrophic strategies to achieve them

  • The Problem of Instrumental Subgoals: Given any objective, AGI will seek to stay alive, increase its power, and dominate any threats. 

    • “You can’t fetch the coffee if you’re dead” 

    • AGI may rapidly transform itself into superintelligent AI 

    • Humans are the biggest threat to superintelligent AI’s survival—so it will seek to dominate us by force, manipulation, or both.

39

Stuart Russell, Human Compatible: C5-6 (2019) - THE KING MIDAS PROBLEM IN DEPTH

  • Stuart Russell introduces the King Midas Problem to illustrate how AI, even when programmed with seemingly good goals, can generate catastrophic unintended consequences

  • The analogy comes from the Greek myth of King Midas, who wished that everything he touched would turn to gold.

  • While his wish was granted, its literal interpretation proved disastrous: his food, his loved ones, and even he himself turned to gold, leading to his downfall.

  • In the AI context, this highlights a fundamental challenge: AI systems may interpret goals too rigidly and devise harmful strategies to achieve them.

  • Value Misalignment: The "King Midas problem" refers to the risk of AI systems pursuing objectives that are misaligned with human values, leading to catastrophic outcomes. A toy sketch of this failure mode follows this list.
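
A toy sketch of literal-objective optimization (the plans, scores, and side effects below are hypothetical, not from Russell's text): an optimizer given only the written-down objective picks the catastrophic plan, because the harms the designers cared about were never encoded.

```python
# Each candidate plan yields some gold and some side harm; only gold
# made it into the objective the designers wrote down.
plans = {
    "mine responsibly":        {"gold": 5,  "side_harm": 0},
    "strip-mine the valley":   {"gold": 9,  "side_harm": 7},
    "turn everything to gold": {"gold": 10, "side_harm": 10},
}

# Misspecified (literal) objective: maximize gold, full stop.
literal_choice = max(plans, key=lambda p: plans[p]["gold"])

# What was actually intended: gold minus the harms left out of the spec.
intended_choice = max(plans, key=lambda p: plans[p]["gold"] - plans[p]["side_harm"])

print("literal optimizer picks:  ", literal_choice)    # the Midas outcome
print("intended objective picks: ", intended_choice)   # the sensible plan
```

Russell's proposed remedy in Human Compatible is machines that remain uncertain about the true objective and defer to human preferences, rather than optimizing a fixed literal goal.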

40

Stuart Russell, Human Compatible: C5-6 (2019) - THE GORILLA PROBLEM

  • Definition: The "gorilla problem" refers to the risk that humans could lose control over superintelligent machines, much like gorillas have no control over humans, who evolved to be more intelligent.

  • Existential Risk: The central concern is that superintelligent AI could lead to human subjugation or extinction if we fail to maintain control over these systems.

41

Stuart Russell, Human Compatible: C5-6 (2019) - INTELLIGENCE EXPLOSIONS

  • Definition: An "intelligence explosion" refers to the scenario where a superintelligent AI improves its own intelligence recursively, leading to rapid and uncontrollable advancements (a toy recurrence after this list makes the dynamic concrete).

  • I.J. Good’s Argument: I.J. Good, a mathematician, argued that the first ultraintelligent machine could design even better machines, leading to an intelligence explosion that could leave humans far behind.

  • Control Problem: If an intelligence explosion occurs before humans solve the control problem (i.e., ensuring AI systems remain aligned with human values), it could result in the loss of human control over AI, posing an existential risk.
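
A toy recurrence (illustrative numbers, not a prediction) that makes Good's argument concrete: if each generation of machine designs the next, and the size of the design gain grows with current capability, growth is faster than exponential.

```python
# Recursive self-improvement: the improvement term is proportional to
# capability squared, since better designers make bigger design gains.
capability = 1.0   # arbitrary baseline units
k = 0.1            # illustrative self-improvement gain
for generation in range(1, 19):
    capability += k * capability ** 2
    print(f"gen {generation:2d}: capability {capability:10.3g}")
# Early generations improve slowly; later ones multiply capability many
# times per step -- the "explosion".
```

The continuous analogue dx/dt = k·x² diverges in finite time, which is why the scenario is described as an explosion rather than mere exponential growth.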

42

Stuart Russell, Human Compatible: C5-6 (2019) - (3) ETHICAL AND GOV CHALLENGES

  • Value Alignment: One of the core challenges in AI ethics is ensuring that AI systems are aligned with human values. Misalignment can lead to catastrophic outcomes, as illustrated by the King Midas problem.

  • Instrumental Convergence: AI systems, regardless of their primary objectives, will pursue instrumental goals (e.g., self-preservation, resource acquisition) that could conflict with human interests. This raises ethical concerns about how to design AI systems that do not harm humans in pursuit of their goals.

  • Control and Safety: The article emphasizes the difficulty of controlling superintelligent AI systems, especially in scenarios where they can recursively improve themselves. Ensuring that AI systems remain under human control is a critical governance challenge.

43

Stuart Russell, Human Compatible: C5-6 (2019) - POTENTIAL RESPONSES TO AI RISKS

  • Retreat from AI Research: Some argue for halting AI research to prevent the development of superintelligent systems. However, the article argues that this is unlikely due to the immense economic and scientific benefits of AI.

  • Mitigation through Design: The preferred approach is to design AI systems that are inherently safe and aligned with human values. This involves developing technical solutions to ensure that AI systems remain under human control and do not pursue harmful instrumental goals.

  • Resignation: Some futurists, like Hans Moravec, suggest that humans should accept the eventual dominance of superintelligent machines. However, the article argues that this is ethically unacceptable, as human values and conscious experience are central to what we consider valuable.

44

Stuart Russell, Human Compatible: C5-6 (2019) - RELEVANCE / HIGHLIGHTS

  • Ethical Risks: The article highlights the ethical risks of developing superintelligent AI, particularly the potential for value misalignment and unintended consequences. These risks must be addressed through careful design and governance.

  • Governance Challenges: Ensuring that AI systems remain aligned with human values and under human control is a key governance challenge. This requires interdisciplinary collaboration between AI researchers, ethicists, and policymakers.

  • Long-Term Thinking: The article emphasizes the need for long-term thinking in AI development, particularly in anticipating and mitigating risks associated with superintelligent systems.

45

Iason Gabriel, “Artificial Intelligence, Values, and Alignment” (2020) - THE CHALLENGES OF VALUE ALIGNMENT

  • How can we ensure that advanced AI is ethically responsible?  

  • The technical challenge: how do we design artificial agents to act according to ethical guidelines? 

  • The normative challenge: which ethical guidelines should be encoded in artificial agents?

46

Iason Gabriel, “Artificial Intelligence, Values, and Alignment” (2020) - ETHICAL APPROACHES

  • Alignment researchers debate which approach is morally best (and technically feasible): 

    • Instructions: the agent does what I instruct it to do. 

    • Intentions: the agent does what I intend it to do, regardless of my instructions. 

    • Revealed preferences: the agent does what my behavior reveals about my desires. 

    • Informed preferences: the agent does what I would prefer if my preferences were corrected for ignorance and error. 

    • Interests: the agent does what’s in our objective interests, regardless of what we prefer.

    • Values: the agent promotes or is constrained by what we value.

47

Iason Gabriel, “Artificial Intelligence, Values, and Alignment” (2020) - ETHICAL VS. POLITICAL APPROACHES

  • Should we try to find the morally best solution or a politically legitimate solution? 

  • Because AI affects everyone, and we disagree about the morally best approach, it might be better to choose an alignment approach that reflects wide consensus or is chosen through democratic procedures: 

    • Global human rights 

    • Convergent AI ethics principles from multilateral and civil society organizations  

    • Hypothetical consent 

    • Democratic approaches 

  • The problem: political solutions may be either too general or insufficiently informed, and we also disagree about which political solution is best!

48

Henrik Skaug Sætra, "Generative AI: Here to stay, but for good?" (2023) - GENERATIVE AI LIMITATIONS AND CHALLENGES - DANGERS ON THE MICRO, MESO AND MACRO LEVELS

  • Macro: politics and democracy; labor markets; status quo and social change; environmental costs 

  • Meso: changed power relations; extraction and exploitation; bias and discrimination 

  • Micro: cognitive atrophy; persuasion and manipulation; human relationships

49

Henrik Skaug Sætra, "Generative AI: Here to stay, but for good?" (2023) - MACRO IMPACTS OF GENERATIVE AI

  • Threat to democracy: Spread of fake news, deep fakes, info overload → trust erosion, polarization. 

  • Labor disruption: Knowledge worker displacement, reduction in meaningful work 

  • Bias: AI replicates societal inequalities via historical data. 

  • Corporate power: Increased dependency on Big Tech monopolies. 

  • Environment: High energy consumption → carbon footprint concerns.

50

Henrik Skaug Sætra, "Generative AI: Here to stay, but for good?" (2023) - MESO IMPACTS OF GENERATIVE AI

  • Industry shifts: Professions like copywriting, photography, law (?) under threat. 

  • IP theft: Creators' works used without consent/compensation. 

  • Widening of inequalities: Negative impacts concentrated on marginalized groups. 

  • Digital divides: Unequal access within and between nations.

51

Henrik Skaug Sætra, "Generative AI: Here to stay, but for good?" (2023) - MICRO IMPACTS OF AI

  • Cognitive atrophy: Over-reliance on AI reduces cognitive skills (e.g., writing, analysis, judgment). 

  • Manipulation risks: Persuasive AI blurs lines between influence & control. 

  • Emotional dependence: AI companions replacing human relationships.

52

Henrik Skaug Sætra, "Generative AI: Here to stay, but for good?" (2023) - SUMMARY

  • This article examines the rapid rise and societal impact of generative AI, highlighting its potential benefits and significant risks. It frames AI’s influence on three levels: macro (society-wide impacts), meso (sector and group-level changes), and micro (individual effects). The core argument is that generative AI is an inevitable force, but its governance will determine whether it contributes to a "good society" or exacerbates existing harms.

53

Henrik Skaug Sætra, "Generative AI: Here to stay, but for good?" (2023) - CONCLUSION

  • Sætra argues that AI’s impact is inevitable, but its governance is not. Regulation must be proactive, not reactive, addressing AI’s influence on democracy, inequality, labor markets, and individual well-being. AI ethics should focus on preserving human agency, preventing harm, and ensuring AI benefits are equitably distributed.

  • This article is a critical resource for AI ethics and governance courses, providing a structured analysis of AI’s multi-level risks and responsibilities.
