AIG Final Exam


1

What is AI?

Computers and machines that can reason, learn, and act in a way that would normally require human intelligence

2

Examples of AI Use Cases

Recognition (image), Event detection (fraud detection), Forecasting (revenue), Personalization, Interaction support (chatbots), Goal-driven optimization (supply chain management), Recommendations (health)

3

AI Safety Risks

Safety and controllability of AI are risks; example: the Uber self-driving car accident

4

AI Risks

  • Safety

  • Civil Liberty/Civil Rights

  • Social Risks

  • Privacy Risks

  • Economic Risks

  • Environmental Risks

5

AI Risks: Civil Liberty and Civil Rights

AI should be fair and equitable (free of bias); example: facial recognition systems

6

AI Risks: Societal Risks

Deepfakes, misinformation; can we count on companies to regulate themselves?

7

AI Risks: Privacy Risks

Issues of transparency, controllability, and privacy. Need for solutions that ensure people retain control over the collection and use of their data; example: Amazon Echo

8

AI Risks: Economic Risks

Will AI accelerate inequality, create new inequalities, help reduce inequalities?

9

AI Risks: Environmental Risks

Large amount of electricity use

10

Governance Definition

The process of making and enforcing decisions; can take the form of laws and regulations, or of processes and policies implemented within an organization

11

AI Governance Definition

A set of principles, organizational mechanisms, processes, practices, methodologies, and technologies to help organizations identify and mitigate potential risks entailed by AI solutions. Can be mandated by laws and regulations and/or informed by best practices

12

OSTP Blueprint for an AI Bill of Rights

  • Safe and Effective Systems

  • Protection against Algorithmic Discrimination

  • Data Privacy

  • Notice and Explanation

  • Human Alternatives, Consideration and Fallback

13

What should AI Governance be? (Orchestration Process)

  • Culture of trust and collaboration

  • Systematic and consistent

  • Documented process

    • Identifying risks and documenting mitigations

14

AI Governance is about entire Lifecycle

  • Conception

  • Design

  • Implementation

  • Deployment

A never-ending and highly iterative process (Agile), one that needs to be reconciled with practical corporate considerations: finite time and resources

15

Open Letter Requesting 6 Month Moratorium

AI systems are becoming human-competitive at general tasks; calls on AI labs to pause training of systems more powerful than GPT-4 for at least 6 months

16

Basic Ethical Principles

  • Fairness

  • Transparency

  • Justice

  • Accountability

  • Agency

17

Asilomar Conference on “Beneficial AI”

Argues for more AI research to focus on economic, ethical, social, and legal issues (not just technical)

18

Asilomar Conference - Ethics and Values

  • Safety: safe and secure

  • Failure Transparency: If AI system causes harm, it should be possible to ascertain why

  • Judicial Transparency: Any involvement in judicial decision making should provide satisfactory explanation auditable by a competent human authority

  • Responsibility: Designers and builders

  • Value Alignment: Highly autonomous AI systems should be designed with goals and behaviors that align with human values

  • Human Values: AI systems should be compatible with human dignity, rights, freedoms, and cultural diversity

  • Personal Privacy: People should have the right to access, manage, and control the data they generate

  • Liberty and Privacy: application of AI to personal data must not unreasonably curtail people's real or perceived liberty

  • Shared Benefit: should benefit as many people as possible

  • Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity

  • Human Control: humans should choose how and whether to delegate decisions to AI systems to accomplish human chosen objectives

  • Non-subversion: AI should respect and improve, rather than subvert, the social and civic processes

  • AI Arms Race: should be avoided

19

Ethical and Rights-based Approaches to Principles for AI

  • Privacy

  • Accountability

  • Safety and Security

  • Transparency

  • Explainability

  • Fairness and non-discrimination

  • Human control/agency

  • Professional Responsibility

  • Respect of human values

20

NIST AI Risk Management Framework

Intended for voluntary use to improve ability to incorporate trustworthiness considerations into design, development, use, and evaluation of AI products, services, and systems

21

NIST AI RMF: Map

Context is recognized and risks related to context are identified

22

NIST AI RMF: Measure

Identified risks are assessed, analyzed, or tracked

23

NIST AI RMF: Govern

A culture of risk management is cultivated and present

24

NIST AI RMF: Manage

Risks are prioritized and acted upon based on projected impact

25

NIST AI RMF: Govern 1

Policies, processes, procedures, and practices are in place, transparent, and implemented effectively.

Includes:

  • legal and regulatory requirements

  • policies, processes, and procedures

  • Policies aligned with the organization's tolerance for risk

  • risk management processes align with risk priority

  • Organizational roles and responsibilities

  • AI Systems inventory (documentation, incident response plan)

  • Processes are in place for decommissioning/phasing out AI systems

26

NIST AI RMF: Govern 2

Teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks

27

NIST AI RMF: Govern Continued

Govern 3: workforce diversity, equity, inclusion, and accessibility

Govern 4: culture that considers and communicates AI risk

Govern 5: Robust engagement with AI actors

Govern 6: policies address 3rd party software

28

NIST AI RMF: Map 1

Risk analysis requires understanding the context of use/deployment

Intended purpose and context-specific laws are understood and documented. Considers types of users, their expectations, and impacts

29

NIST AI RMF: Map 2 and 3

Map 2: Categorization of the AI system is performed; its limits and how its output may be overseen by humans are documented

Map 3: Expected benefits and costs compared with appropriate benchmarks are understood, encouraging human-centered design, transparency, and documentation

30

NIST AI RMF: Map Continued

Map 4: Risks/benefits are mapped, including 3rd-party data

Map 5: impacts to individuals, groups, communities are characterized

31

NIST AI RMF: Measure 1

Appropriate methods and metrics are identified and applied

AI metrics and effectiveness are regularly assessed, including reports of errors; personnel other than front-line developers are involved in regular assessments

32

Measure 2

AI systems are evaluated for trustworthy characteristics; transparency, accountability, privacy, fairness, bias, and environmental impact are assessed and documented

33

Measure 3 and 4

Measure 3: mechanisms tracking identified AI risks over time are in place

Measure 4: Feedback about efficacy of measurement

34

NIST AI RMF: Manage

Manage 1: risks are prioritized, responded to, and managed

Manage 2: strategies to maximize AI benefits and minimize negative impacts are planned

Manage 3: risks and benefits from 3rd parties are managed

Manage 4: risk treatments, including response and recovery, are documented and monitored

35

NIST AI RMF Model Profile (LLMs)

Guidelines to address unique risks:

  • Generation of misleading or harmful content

  • Privacy concerns

  • Lack of explainability

  • Alignment issues (social norms and values)

36

Classes of Privacy

  • Data/Information Privacy: collection and handling of sensitive personal information

  • Bodily Privacy: includes drug testing, strip search

  • Territorial Privacy: limits on the ability to intrude into another individual’s space

  • Communications Privacy: includes postal mail, email, telephone, etc.

37

Tensions Society has with Privacy

  • National Security: tension between security and privacy

  • Public Health: tension between public health and privacy

  • Data “in return for free services”: tension between privacy and commercial considerations

  • Employee data: tension between corporate and employee considerations

  • Free speech/free press and right to prevent disclosure of private facts and/or protection against libel

38

AI New and/or Exacerbated Privacy Challenges

Sanitizing models (how do you ensure they don't contain personal data), deepfakes, hallucinations, agentic AI (how do you reconcile purpose-limitation principles with general-purpose AI agents)

39

Transparency

People should be adequately informed about the data that might be collected about them and how it might be used

40

Autonomy

Each individual should have some control over the collection and use of their information

41

FIPPS

  1. Notice/awareness: Disclose data collection and use practices

  2. Choice/consent: opt-in/out for different practices

  3. Access/participation: User should be able to review and correct their info

  4. Security/Integrity: Ensure info is accurate and secure

  5. Enforcement/Redress: Mechanisms for handling violations

42

2 Approaches to Legal Privacy Landscape

  1. Sectoral approach: seen in the US; different federal laws for different sectors and groups (finance, health, children) + state and local laws

  2. Comprehensive laws: applied to all sectors of activity; seen in Europe

43

EU GDPR

More stringent requirements and significantly higher penalties (€20M or 4% of global revenue, whichever is higher); adopted privacy-by-design principles

44

GDPR Data Protection Principles

  • Lawfulness, fairness and transparency: processing must be lawful, fair, and transparent to data subject

  • Purpose limitation: data processing limited to legitimate purposes explicitly communicated to data subject

  • Data minimization: only as much data as necessary for purposes communicated to data subject

  • Accuracy: data has to be accurate and up to date

  • Storage Limitation: only for as long as necessary for specified purpose

  • Integrity and confidentiality: ensure appropriate security, confidentiality, and integrity

  • Accountability: data controller responsible for being able to demonstrate compliance

45

Data Controller

The person who determines the purposes and means of processing personal data

46

Data Processors

Process data on behalf of the controller

47

Data Protection by Design - Data Minimization

Only collect what you need, only for the specified purpose, and only for as long as you need it (de-identification when possible)

48

Privacy by Default

All default settings should be privacy protective (resulting in an “opt-in” regime)

49

GDPR 6 Lawful Grounds for Processing Personal Data

  • Data subject’s consent for specific purpose

  • Data is necessary for a contract

  • Necessary for controller to comply with a legal obligation

  • Necessary to protect vital interests of a natural person

  • Necessary for the public interest

  • Legitimate interest pursued by the controller or a third party

50

GDPR Data Subject Rights

Right to:

  • be informed

  • access data collected

  • erasure

  • restrict processing

  • data portability

  • object

  • in relation to automated decision making and profiling

51

CCPA

  • Transparency of data practices

  • Ability to request access, transfer, deletion and limit usage of sensitive personal data

  • No discrimination

  • Ability to opt out, prior consent from people under 16 and parental consent if under 13

52

PII

Any information that can be used to distinguish or trace an individual’s identity and any other info that is linkable to an individual

53

Non-PII

Data that cannot be tied to an individual

54

What to do in Practice (AI in Privacy)

  • Set up processes and identify roles and responsibilities

  • Carefully design and understand what you collect and why

  • Identify applicable laws

  • Understand users

  • Ensure product/service is consistent with the commitments you make, including 3rd-party libraries and partners

  • Document everything

55

Threat Modeling

Can we develop a methodology to help organizations systematically identify possible threats and mitigate these threats?

56

Threat Modeling Approach

Use a taxonomy of possible threats, a way of modeling systems, and systematically look for threats and available mitigation strategies

57

LINDDUN Taxonomy of Privacy Threats

Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, Non-compliance

58

Privacy Enhancing Technologies

An increasingly broad collection of techniques designed to help protect people’s privacy; Ex: Differential Privacy, Onion Routing/TOR, Privacy Assistants

59

Differential Privacy

The inclusion or exclusion of a single individual should not significantly impact the outcome, thus safeguarding individual privacy
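
Formally (the standard ε-differential-privacy definition, with F the randomized mechanism and D, D′ any two datasets differing in one individual's record): Pr[F(D) ∈ S] ≤ e^ε · Pr[F(D′) ∈ S] for every set of outputs S.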

60

Differential Privacy: Epsilon

A smaller epsilon indicates stronger privacy protection; however, it makes the accuracy of F worse
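
A minimal sketch of this tradeoff using the Laplace mechanism for a counting query; the dataset and the laplace_count helper are illustrative:

    import numpy as np

    def laplace_count(data, predicate, epsilon):
        # Counting query: adding/removing one person changes the count
        # by at most 1 (sensitivity 1), so Laplace noise with scale
        # 1/epsilon gives epsilon-differential privacy.
        true_count = sum(1 for record in data if predicate(record))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    ages = [34, 29, 41, 52, 38, 27, 45]          # toy dataset
    for eps in (1.0, 0.1):                       # smaller eps -> more noise
        print(eps, laplace_count(ages, lambda a: a > 30, eps))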

61

EU AI Act

Gradual introduction over a 36-month period:

  • Ban on prohibited practices (within 6 months)

  • Codes of practice (within 9 months)

  • General-purpose AI rules and governance (within 12 months)

  • Obligations for high-risk systems (within 36 months)

62

EU AI Act Scope

All providers, distributors, or deployers of AI systems and models in the EU, or those who market their AI system or model within the EU. Does not impact research activities “without commercial objectives”

63

EU AI Act Risk Categories

  • Unacceptable Risk AI: prohibited (incl. export)

  • High-risk AI: requires comprehensive impact assessment and CE marking, incl. compliance w/ transparency, security, and quality requirements

  • Limited/Low-risk AI: must meet transparency obligations

  • Minimal-risk AI: must adhere to code of conduct

  • GenAI models: special rules based on open source or not, and computing power

64

Unacceptable Risk

  • manipulative techniques causing significant harm

  • exploiting vulnerabilities related to age, etc. causing significant harm

  • biometric systems inferring sensitive attributes (race) unless lawfully acquired

  • social scoring causing detrimental treatment of those people

  • assessing the risk of committing criminal offenses, profiling personality traits

  • compiling facial recognition databases

  • inferring emotions in workplaces

  • real time remote biometric identification in public (except searching for missing persons, preventing threats, or identifying suspects)

  • Remote Biometric Identification:

    • police must complete a fundamental rights impact assessment and register in the EU database

    • requires authorization from a judicial authority

65

Unacceptable Risk Treatment/Steps

Prohibited from being placed on the market

66

High Risk Treatment/Steps

Declaration of conformity, registration in the EU database, and CE Marking (for high-risk use cases)

67

Limited Risk Treatment/Steps

Ex: artistic deepfakes or chatbots; Obligation to inform users that content was generated by AI

68

Minimal Risk Treatment/Steps

Ex: AI in video games, anti-spam filters; Voluntary application of codes of conduct

69

Artistic Deep Fakes Scenario (category and obligations)

Limited Risk; obligation to inform the viewer that the image of Einstein was generated using AI (transparency)

70

10 Compliance Steps for High-Risk Systems

  1. Risk Management: risk identification and mitigation

  2. Data Governance: high quality training data, unbiased data

  3. Technical Documentation

  4. Traceability: keeping records across the entire lifecycle

  5. Human oversight: incl. explainability

  6. Accuracy, Robustness, and Security: across the entire lifecycle

  7. Quality Management System should be documented

  8. EU Declaration of Conformity - submit copies to national authorities

  9. CE Marking: indicating compliance with applicable EU laws

  10. Registration in EU database prior to deployment

71

High Risk Example Steps

  • Step 1: Risk determination

  • Step 2: Initial Compliance Process, unless the organization can prove the system isn't high risk, i.e., that it…

    • does not create significant risk to health, safety, or rights of individuals

    • does not substantially influence outcome of decision making

72

Gen AI Models EU AI Act

Base level: Copyright Directive, detailed documentation on training data, and, if not open source, technical documentation

Systemic models: Copyright Directive, detailed documentation on training data, technical documentation, and, if not open source, other obligations (red teaming, energy consumption, etc.)

73

Regulatory Sandboxes

Each national authority is to deploy a “regulatory sandbox” in which organizations can develop, train, evaluate, and validate their systems prior to deployment, working with national authorities

74

Differences between EU AI Act and NIST RMF

  • The EU AI Act is law; the NIST AI RMF is a framework

  • The EU AI Act comes with obligations and penalties; the NIST AI RMF is voluntary

75

EU AI Act and NIST AI RMF Commonalities

  • Both aim to identify and mitigate risks

  • documentation of process and decisions

  • rely on similar ethical principles

  • room for interpretation; expect interpretations to evolve

77

What is Transparency about?

Informing people about practices they would likely want to know about: What is being done? Why is it being done (purpose)? How is it executed (type of training, type of reasoning)?

78

Examples of Transparency taking usability into account

Layered FAQs, ability to ask questions, succinct summaries (ex: labels), just-in-time notifications

79

AI Transparency Key Issues

How was the ML model built? How reliable is it? Does it contain any potential bias? Which organization developed it and what guarantees do they provide? Is it possible to understand what the AI/ML model does (interpretability)? Is the AI capable of explaining its decisions/actions/recommendations? How can I correct/improve/control the AI/ML?

80

Opaqueness (Transparency)

Insufficient disclosure, lack of awareness, lack of understandability, no support for auditing

81

Selective Transparency

Disclosing some practices but not all of them

82

Manipulation and Deception (Transparency)

Persuasive computing/nudging to the detriment of a user, AI pretending to be a human, deepfakes

83

Interpretability

Comprehending how a machine learning model works as a whole, relies on being able to inspect the model

84

Explainability

Used to refer to the ability to explain specific decisions/recommendations/outputs; may be provided by other predictive models; does not rely on being able to inspect the model; generally more realistic for deep learning models

85

Example of Interpretability

"After processing the convulational layers, the activation for the “dog” output in the softmax layer was higher than any other class”

86

Example of Explainability

“It has four legs, fur, a tail, floppy ears, and a long snout; it is smaller than a wolf and is lying in a dog bed, so I think it is a dog.”

The idea is to build an explanation module that can produce such explanations.

87

Model-Based XAI

Models such as decision trees are viewed as inherently interpretable/explainable; Ex: learning decision trees, learning linear regression models
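
A minimal sketch of model-based XAI, assuming scikit-learn is available; the features and labels are toy examples, and the printed tree rules themselves serve as the explanation:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data: [legs, fur, floppy_ears] -> label
    X = [[4, 1, 1], [4, 1, 1], [2, 0, 0], [4, 1, 0]]
    y = ["dog", "dog", "bird", "cat"]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    # The fitted rules are directly inspectable:
    print(export_text(tree, feature_names=["legs", "fur", "floppy_ears"]))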

88

Post-hoc XAI

Ex: sensitivity analysis to identify which parameters have the most weight under different conditions
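
A rough sketch of one such post-hoc analysis (permutation-style sensitivity), treating the model as a black box; model_fn, X, and y are assumed placeholders:

    import numpy as np

    def permutation_sensitivity(model_fn, X, y, n_repeats=10, seed=0):
        # Shuffle one feature at a time and measure the accuracy drop;
        # a larger drop means the model leans more on that feature.
        rng = np.random.default_rng(seed)
        baseline = np.mean(model_fn(X) == y)
        drops = []
        for j in range(X.shape[1]):
            d = []
            for _ in range(n_repeats):
                Xp = X.copy()
                rng.shuffle(Xp[:, j])      # break feature j's link to y
                d.append(baseline - np.mean(model_fn(Xp) == y))
            drops.append(float(np.mean(d)))
        return drops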

89

Steps in DARPA’s Depicition of XAI

Training Data → New Machine Learning Process → Explainable Model + Explanation Interface ↔ User

90

Bias

Prejudice in favor of or against one thing, person, or group compared with another - usually in a way considered to be unfair. In ML, refers to systematic errors or inaccuracies that discriminate against certain groups. Ex: training data is ___ (not representative of the general population), inaccuracy in the data (ex: replicating human ___, poor algorithm)

91

Fairness

Absence of unjust treatment, no generally agreed upon definition or metric of fairness

92

Disparate Treatment

Deliberately favoring some people based on gender, race, national origin, or age. Aimed to capture intentional discrimination or “procedural fairness”

93

Disparate Impact

Regardless of intent (disproportionately adverse effect on a protected group); Ex: should hire from protected groups at at least 80% of the rate of the majority group

94

Fairness through Unawareness

Do not train on sensitive features (ex: gender, race, etc.); but lots of proxy attributes remain (zip code is correlated with race/ethnicity); examples: Amazon same-day delivery, redlining
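
A toy illustration of the proxy problem: even after the sensitive column is dropped, a correlated feature can leak it (all data here is synthetic):

    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)              # sensitive attribute
    zip_code = group + rng.normal(0, 0.2, size=1000)   # correlated proxy
    # Training "without" the sensitive column still sees it via the proxy:
    print(np.corrcoef(zip_code, group)[0, 1])          # roughly 0.93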

95

Algorithmic Fairness requires a…..

Fairness-relevant statistic (or metric) to be (approximately) equalized across different groups (Ex: Demographic Parity)

96

Demographic Parity

Equalizing the acceptance rate (ex: loan, college admission, job offer): for any two groups a and b, P(D = 1 | A = a) = P(D = 1 | A = b), where D is the decision and A the group attribute. Ex: If you approve loans for 80% of White applicants, you should also approve loans for 80% of Black applicants
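
A minimal sketch of checking demographic parity on toy data (the decisions and groups are illustrative):

    import numpy as np

    decisions = np.array([1, 1, 0, 1, 1, 0, 0, 1])        # 1 = approve
    group     = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    rate_a = decisions[group == "a"].mean()   # P(D=1 | A=a) = 0.75
    rate_b = decisions[group == "b"].mean()   # P(D=1 | A=b) = 0.50
    print(abs(rate_a - rate_b))               # parity wants this near 0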

97

Demographic Parity Problems

Focuses only on equalizing the acceptance rate; it doesn't take into account ability to repay the loan, and it does not look at whether someone needs the loan. You could be rejecting many people who need the loan and would likely be able to repay it (losing a lot of good business). Ex: In group A, 3 out of 4 people who receive the loan repay it, but in group B, only 1 out of 4.

98

When Demographic Parity makes sense

When we want to remedy a situation; Ex: we want to see more women in STEM, or more minorities owning their homes

99

Disparate Impact (metric)

A variation of demographic parity: instead of requiring equality, we require that the difference not exceed a given threshold. By doing so, we may allow more people who should qualify for loans to get them, while ensuring that the disparity does not exceed a threshold
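
A sketch of this thresholded check using the four-fifths (80%) rule mentioned in the Disparate Impact card; the rates are illustrative:

    def disparate_impact_ratio(rate_protected, rate_majority):
        # Four-fifths rule: flag ratios below 0.8 as disparate impact.
        return rate_protected / rate_majority

    ratio = disparate_impact_ratio(0.30, 0.50)   # illustrative rates
    print(ratio, ratio >= 0.8)                   # 0.6 False -> fails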

100

Which error rates could we equalize?

The false positive rate - people who are granted the loan but are unlikely to repay it

The false negative rate - people who are denied the loan but would have been likely to repay it
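
A sketch computing both error rates per group on toy arrays; equalizing them across groups is the idea behind equalized-odds-style criteria:

    import numpy as np

    def error_rates(y_true, y_pred):
        # y_true: 1 = would repay; y_pred: 1 = granted the loan
        fp = np.sum((y_pred == 1) & (y_true == 0))   # granted, won't repay
        fn = np.sum((y_pred == 0) & (y_true == 1))   # denied, would repay
        fpr = fp / max(np.sum(y_true == 0), 1)
        fnr = fn / max(np.sum(y_true == 1), 1)
        return fpr, fnr

    y_true_a = np.array([1, 1, 0, 0]); y_pred_a = np.array([1, 0, 1, 0])
    y_true_b = np.array([1, 0, 1, 0]); y_pred_b = np.array([1, 0, 0, 0])
    print(error_rates(y_true_a, y_pred_a))   # group A: (0.5, 0.5)
    print(error_rates(y_true_b, y_pred_b))   # group B: (0.0, 0.5)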