What is AI?
Computers and machines that can reason, learn, and act in a way that would normally require human intelligence
Examples of AI Use Cases
Recognition (image), Event detection (fraud detection), Forecasting (revenue), Personalization, Interaction support (chatbots), Goal-driven optimization (supply chain management), Recommendations (health)
AI Safety Risks
Safety and controllability of AI are risks; example: the Uber self-driving car accident
AI Risks
Safety
Civil Liberty/Civil Rights
Social Risks
Privacy Risks
Economic Risks
Environmental Risks
AI Risks: Civil Liberty and Civil Rights
AI should be fair and equitable (free of bias); example: Facial Recognition Systems
AI Risks: Societal Risks
Deepfakes, misinformation; can we count on companies to regulate themselves?
AI Risks: Privacy Risks
Issues of transparency, controllability, and privacy. Need for solutions that ensure people retain control over the collection and use of their data; example: Amazon Echo
AI Risks: Economic Risks
Will AI accelerate inequality, create new inequalities, help reduce inequalities?
AI Risks: Environmental Risks
Large amounts of electricity use
Governance Definition
Process of making and enforcing decisions; can take the form of laws and regulations, or of processes and policies implemented within an organization
AI Governance Definition
A set of principles, organizational mechanisms, processes, practices, methodologies, and technologies to help organizations identify and mitigate potential risks entailed by AI solutions. Can be mandated by laws and regulations and/or informed by best practices
OSTP Blueprint for an AI Bill of Rights
Safe and Effective Systems
Protection against Algorithmic Discrimination
Data Privacy
Notice and Explanation
Human Alternatives, Consideration and Fallback
What should AI Governance be? (An Orchestration Process)
Culture of trust and collaboration
Systematic and consistent
Documented process
Identifying risks and documenting mitigation
AI Governance is about entire Lifecycle
Conception
Design
Implementation
Deployment
A never-ending and highly iterative process (Agile), one that needs to be reconciled with practical corporate considerations: finite time and resources
Open Letter Requesting 6 Month Moratorium
AI systems are becoming human-competitive at general tasks; calls on AI labs to pause the training of systems more powerful than GPT-4 for at least 6 months
Basic Ethical Principles
Fairness
Transparency
Justice
Accountability
Agency
Asilomar Conference on “Beneficial AI”
Argues for more AI research to focus on economic, ethical, social, and legal issues (not just technical)
Asilomar Conference - Ethics and Values
Safety: safe and secure
Failure Transparency: If AI system causes harm, it should be possible to ascertain why
Judicial Transparency: Any involvement in judicial decision making should provide satisfactory explanation auditable by a competent human authority
Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions
Value Alignment: Highly autonomous AI systems should be designed with goals and behaviors that align with human values
Human Values: AI systems should be compatible with human dignity, rights, freedoms, and cultural diversity
Personal Privacy: People should have the right to access, manage, and control the data they generate
Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty
Shared Benefit: should benefit as many people as possible
Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity
Human Control: humans should choose how and whether to delegate decisions to AI systems to accomplish human chosen objectives
Non-subversion: AI should respect and improve, rather than subvert, the social and civic processes
AI Arms Race: should be avoided
Ethical and Rights-based Approaches to Principles for AI
Privacy
Accountability
Safety and Security
Transparency
Explainability
Fairness and non-discrimination
Human control/agency
Professional Responsibility
Respect of human values
NIST AI Risk Management Framework
Intended for voluntary use to improve ability to incorporate trustworthiness considerations into design, development, use, and evaluation of AI products, services, and systems
NIST AI RMF: Map
Context is recognized and risks related to context are identified
NIST AI RMF: Measure
Identified risks are assessed, analyzed, or tracked
NIST AI RMF: Govern
A culture of risk management is cultivated and present
NIST AI RMF: Manage
Risks are prioritized and acted upon based on projected impact
NIST AI RMF: Govern 1
Policies, processes, procedures, and practices are in place, transparent, and implemented effectively.
Includes:
legal and regulatory requirements
policies, processes, and procedures
Policies aligned with the organization's tolerance for risk
risk management processes align with risk priority
Organizational roles and responsibilities
AI Systems inventory (documentation, incident response plan)
Processes are in place for decommissioning/phasing out AI systems
NIST AI RMF: Govern 2
Teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks
NIST AI RMF: Govern Continued
Govern 3: workforce diversity, equity, inclusion, and accessibility
Govern 4: culture that considers and communicates AI risk
Govern 5: Robust engagement with AI actors
Govern 6: policies address 3rd party software
NIST AI RMF: Map 1
Risk analysis requires understanding the context of use/deployment
Intended purpose and context-specific laws are understood and documented; considers types of users, expectations, and impacts
NIST AI RMF: Map 2 and 3
Map 2: Categorization of the AI system is performed; limits and how output may be overseen by humans are documented
Map 3: Expected benefits and costs compared with appropriate benchmarks are understood, encouraging human centered design, transparency and documentation
Map Continued
Map 4: Risks/benefits mapped, including 3rd-party data
Map 5: impacts to individuals, groups, communities are characterized
NIST AI RMF: Measure 1
Appropriate methods and metrics are identified and applied
AI metrics and effectiveness are regularly assessed, including reports of errors; personnel beyond front-line developers are involved in regular assessments
Measure 2
AI systems are evaluated for trustworthy characteristics; transparency, accountability, privacy, fairness, bias, and environmental impact are assessed and documented
Measure 3 and 4
Measure 3: mechanisms tracking identified AI risks over time are in place
Measure 4: Feedback about efficacy of measurement
NIST AI RMF: Manage
Manage 1: risks are prioritized, responded to, and managed
Manage 2: strategies to maximize AI benefits and minimize negative impacts are planned
Manage 3: risks and benefits from 3rd parties are managed
Manage 4: risk treatments, including response and recovery, are documented and monitored
NIST AI RMF Model Profile (LLMs)
Guidelines to address unique risks:
Generation of misleading or harmful content
Privacy concerns
Lack of explainability
Alignment issues (social norms and values)
Classes of Privacy
Data/Information Privacy: collection and handling of sensitive personal information
Bodily Privacy: includes drug testing, strip search
Territorial Privacy: limits on the ability to intrude into another individual’s space
Communications Privacy: includes postal mail, email, telephone, etc.
Tensions Society has with Privacy
National Security: tension between security and privacy
Public Health: tension between public health and privacy
Data “in return for free services”: tension between privacy and commercial considerations
Employee data: tension between corporate and employee considerations
Free speech/free press: tension with the right to prevent disclosure of private facts and/or protection against libel
AI New and/or Exacerbated Privacy Challenges
Sanitizing models (how do you ensure they don't contain personal data), Deepfakes, Hallucinations, Agentic AI (how do you reconcile purpose limitation principles with general-purpose AI agents)
Transparency
People should be adequately informed about the data that might be collected about them and how it might be used
Autonomy
Each individual should have some control over the collection and use of their information
FIPPS
Notice/awareness: Disclose data collection and use practices
Choice/consent: opt-in/out for different practices
Access/participation: User should be able to review and correct their info
Security/Integrity: Ensure info is accurate and secure
Enforcement/Redress: Mechanisms for handling violations
2 Approaches to Legal Privacy Landscape
Sectoral approach: seen in the US; different federal laws for different sectors and groups (finance, health, children) + state and local laws
Comprehensive Laws: applied to all sectors of activity, seen in Europe
EU GDPR
More stringent requirements and significantly higher penalties (€20M or 4% of global revenue, whichever is higher); adopted privacy-by-design principles
GDPR Data Protection Principles
Lawfulness, fairness and transparency: processing must be lawful, fair, and transparent to data subject
Purpose limitation: data processing limited to legitimate purposes explicitly communicated to data subject
Data minimization: only as much data as necessary for purposes communicated to data subject
Accuracy: data has to be accurate and up to date
Storage Limitation: only for as long as necessary for specified purpose
Integrity and confidentiality: ensure appropriate security, confidentiality, and integrity
Accountability: data controller responsible for being able to demonstrate compliance
Data Controller
Person who determines the purposes and means of processing personal data
Data Processors
Process data on behalf of the controller
Data Protection by Design - Data Minimization
Only collect what you need, only for a specified purpose, and only for as long as you need it (de-identification when possible)
Privacy by Default
All default settings should be privacy protective (resulting in an “opt-in” regime)
GDPR 6 Lawful Grounds for Processing Personal Data
Data subject’s consent for specific purpose
Data is necessary for a contract
Necessary for controller to comply with a legal obligation
Necessary to protect vital interests of a natural person
Necessary for the public interest
Legitimate interest pursued by the controller or a third party
GDPR Data Subject Rights
Right to:
be informed
access data collected
erasure
restrict processing
data portability
object
rights in relation to automated decision making and profiling
CCPA
Transparency of data practices
Ability to request access, transfer, deletion and limit usage of sensitive personal data
No discrimination
Ability to opt out; prior consent from people under 16 and parental consent if under 13
PII
Any information that can be used to distinguish or trace an individual’s identity and any other info that is linkable to an individual
Non-PII
Data that cannot be tied to an individual
What to do in Practice (AI in Privacy)
Set up processes and identify roles and responsibilities
Carefully design and understand what you collect and why
identify applicable laws
understand users
ensure product/service is consistent with commitments you make, including 3rd-party libraries and partners
Document everything
Threat Modeling
Can we develop a methodology to help organizations systematically identify possible threats and mitigate these threats?
Threat Modeling Approach
Use a taxonomy of possible threats and a way of modeling systems; systematically look for threats and available mitigation strategies
LINDDUN Taxonomy of Privacy Threats
Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, Non-compliance
Privacy Enhancing Technologies
An increasingly broad collection of techniques designed to help protect people's privacy; Ex: Differential Privacy, Onion Routing/Tor, Privacy Assistants
Differential Privacy
Inclusion or exclusion of a single individual should not significantly impact the outcome, thus safeguarding individual privacy
Differential Privacy: Epsilon
A smaller epsilon indicates stronger privacy protection, but makes the accuracy of the computed function F worse
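As an illustration of the epsilon trade-off, here is a minimal sketch of the Laplace mechanism, one standard way to make a numeric query differentially private (the counting query, sensitivity, and epsilon values are illustrative assumptions, not from the cards):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Noise scale is sensitivity / epsilon: a smaller epsilon means more
    # noise, i.e., stronger privacy but worse accuracy of the answer.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Counting query: adding or removing one individual changes the count
# by at most 1, so its sensitivity is 1.
true_count = 42
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps))
```

With eps = 0.1 the released count is very noisy; with eps = 10 it stays close to 42, matching the card's point that smaller epsilon strengthens privacy at the cost of accuracy.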
EU AI Act
Gradual introduction over a 36-month period
Ban on prohibited practices (within 6 months)
codes of practice (within 9 months)
general purpose ai rules and governance (within 12 months)
Obligations for high-risk systems within 36 months
EU AI Act Scope
All providers, distributors, or deployers of AI systems and models in the EU, or those who market their AI system or model within the EU. Does not impact research activities “without commercial objectives”
EU AI Act Risk Categories
Unacceptable Risk AI: prohibited (incl. export)
High-risk AI: requires comprehensive impact assessment, CE marking, incl. compliance with transparency, security, and quality requirements
Limited/Low-risk AI: must meet transparency obligations
Minimal-risk AI: must adhere to code of conduct
GenAI models: special rules based on open source or not, and computing power
Unacceptable Risk
manipulative techniques causing significant harm
exploiting vulnerabilities related to age, etc. causing significant harm
biometric systems inferring sensitive attributes (e.g., race), unless lawfully acquired
social scoring causing detrimental treatment of those people
assessing the risk of individuals committing criminal offenses, profiling personality traits
compiling facial recognition databases
inferring emotions in workplaces
real time remote biometric identification in public (except searching for missing persons, preventing threats, or identifying suspects)
Remote Biometric Identifications
Police must complete a fundamental rights impact assessment and register in the EU database
Requires authorization from a judicial authority
Unacceptable Risk Treatment/Steps
Prohibition to place on the market
High Risk Treatment/Steps
Declaration of conformity, registration in the EU database, and CE Marking (for high-risk use cases)
Limited Risk Treatment/Steps
Ex: artistic deepfakes or chatbots; Obligation to inform users that content was generated by AI
Minimal Risk Treatment/Steps
Ex: AI in video games, anti-spam filters; Voluntary application of codes of conduct
Artistic Deep Fakes Scenario (category and obligations)
Limited risk; transparency obligation: inform the viewer that the image of Einstein was generated using AI
10 Compliance Steps for High-Risk Systems
Risk Management: risk identification and mitigation
Data Governance: high quality training data, unbiased data
Technical Documentation
Traceability: keeping records across the entire lifecycle
Human oversight: incl. explainability
Accuracy, Robustness, and Security: across the entire lifecycle
Quality Management System should be documented
EU Declaration of Conformity - submit copies to national authorities
CE Marking: indicating compliance with applicable EU laws
Registration in EU database prior to deployment
High Risk Example Steps
Step 1: Risk determination
Step 2: Initial compliance process, unless the organization can prove the system isn't high risk, i.e., it:
does not create significant risk to health, safety, or rights of individuals
does not substantially influence the outcome of decision making
Gen AI Models EU AI Act
Base level: Copyright Directive compliance, detailed documentation on training data, and, if not open source, technical documentation
Systemic models: Copyright Directive compliance, detailed documentation on training data, technical documentation, and, if not open source, other obligations (red teaming, energy consumption, etc.)
Regulatory Sandboxes
Each national authority is to deploy a “regulatory sandbox” where organizations can develop, train, evaluate, and validate their systems prior to deployment, working with national authorities
Differences between EU AI Act and NIST RMF
The EU AI Act is law; the NIST AI RMF is a voluntary framework
The EU AI Act comes with obligations and penalties; the NIST AI RMF is a “risk profile”
EU AI Act and NIST AI RMF Commonalities
Both aim to identify and mitigate risks
documentation of process and decisions
rely on similar ethical principles
room for interpretation; expect interpretations to evolve
What is Transparency about?
Informing people about practices they would likely want to know about: What is being done? Why is it being done (purpose)? How is it executed (type of training, type of reasoning)?
Examples of Transparency taking usability into account
Layered FAQs, ability to ask questions, succinct summaries (ex: labels), just-in-time notifications
AI Transparency Key Issues
How was the ML model built? How reliable is it? Does it contain any potential bias? Which organization developed it and what guarantees do they provide? Is it possible to understand what the AI/ML model does (interpretability)? Is the AI capable of explaining its decisions/actions/recommendations? How can I correct/improve/control the AI/ML?
Opaqueness (Transparency)
Insufficient disclosure, lack of awareness, lack of understandability, no support for auditing
Selective Transparency
Disclosing some practices but not all of them
Manipulation and Deception (Transparency)
Persuasive computing/nudging to the detriment of a user, AI pretending to be a human, deepfakes
Interpretability
Comprehending how a machine learning model works as a whole, relies on being able to inspect the model
Explainability
Used to refer to the ability to explain specific decisions/recommendations/outputs; may be provided by other predictive models; does not rely on being able to inspect the model; generally more realistic for deep learning models
Example of Interpretability
"After processing the convulational layers, the activation for the “dog” output in the softmax layer was higher than any other class”
Example of Explainability
“It has four legs, fur, a tail, floppy ears, and a long snout, it is smaller than a wolf and is lying in a dog bed, so I think it is a dog”
One approach: build an explanation module that can explain another model's decisions
Model-Based XAI
Models such as decision trees and linear regression are viewed as inherently interpretable/explainable; Ex: learning decision trees, learning linear regression models
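A minimal sketch of why shallow trees count as interpretable: the fitted model can be printed as explicit if/then rules (uses scikit-learn's iris toy dataset; the depth choice is an assumption for readability):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A depth-2 tree is small enough to read directly as decision rules,
# which is what "interpretable by inspection" means here.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)
print(export_text(tree, feature_names=list(iris.feature_names)))
```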
Post-hoc XAI
Ex: sensitivity analysis to identify which parameters have the most weight under different conditions
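A rough sketch of one such post-hoc technique: perturb each input feature of a black-box model and measure how much the prediction moves (the `model_predict` callable, `delta` step, and toy weights are hypothetical placeholders):

```python
def sensitivity_analysis(model_predict, x, delta=0.1):
    # Post-hoc: treats the model as a black box; no inspection of internals.
    base = model_predict(x)
    scores = {}
    for i in range(len(x)):
        x_perturbed = list(x)
        x_perturbed[i] += delta
        # Larger score = output more sensitive to feature i.
        scores[i] = abs(model_predict(x_perturbed) - base) / delta
    return scores

# Toy usage with a stand-in "model": a linear scoring function.
weights = [0.5, -2.0, 0.1]
predict = lambda x: sum(w * v for w, v in zip(weights, x))
print(sensitivity_analysis(predict, [1.0, 1.0, 1.0]))  # feature 1 dominates
```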
Steps in DARPA’s Depiction of XAI
Training Data → New Machine Learning Process → Explainable Model + Explanation Interface ↔ User
Bias
Prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair. In ML, refers to systematic errors or inaccuracies that discriminate against certain groups. Ex: training data is biased (not representative of the general population); inaccuracy in the data (ex: replicating human biases); a poor algorithm
Fairness
Absence of unjust treatment, no generally agreed upon definition or metric of fairness
Disparate Treatment
Deliberately favoring some people based on gender, race, national origin, or age. Aimed to capture intentional discrimination or “procedural fairness”
Disparate Impact
Regardless of intent (disproportionately adverse effect on a protected group). Ex: should hire protected groups at a rate at least 80% that of the majority group
Fairness through Unawareness
Don't train on sensitive features (ex: gender, race, etc.); however, many proxy attributes remain (zip code is correlated with race/ethnicity). Examples: Amazon same-day delivery, redlining
Algorithmic Fairness requires a…
Fairness-relevant statistic (or metric) to be (approximately) equalized across different groups (Ex: Demographic Parity)
Demographic Parity
Equalizing acceptance rates (ex: loan, college admission, job offer): for any two groups a and b, P(D = 1 | a) = P(D = 1 | b), where D is the decision. Ex: If you approve loans for 80% of White applicants, you should also approve loans for 80% of Black applicants
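A minimal sketch of checking demographic parity on binary decisions (the arrays are made-up toy data, not from the cards):

```python
import numpy as np

def acceptance_rate(decisions, groups, g):
    # P(D = 1 | group g): share of group g with a positive decision.
    return decisions[groups == g].mean()

decisions = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # 1 = loan approved
groups    = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rate_a = acceptance_rate(decisions, groups, "a")
rate_b = acceptance_rate(decisions, groups, "b")
# Demographic parity holds when these rates are (approximately) equal.
print(rate_a, rate_b)  # 0.75 vs 0.5 -> parity is violated here
```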
Demographic Parity Problems
Focuses only on equalizing the acceptance rate: doesn't take into account the ability to repay the loan, and doesn't look at whether someone needs the loan. You could be rejecting many people who need the loan and would likely be able to repay it (losing a lot of good business). Ex: In group A, 3 out of 4 people who receive the loan repay it, but in group B, only 1 out of 4
When Demographic Parity makes sense
We want to remedy a situation. Ex: we want to see more women in STEM; we want to see more minorities own their homes
Disparate Impact (metric)
A variation of demographic parity: instead of requiring equality, we require that the difference not exceed a given threshold. By doing so, we may allow more people who should qualify for loans to get them while ensuring the disparity does not exceed the threshold
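Continuing the sketch above, the disparate-impact variant compares the ratio of acceptance rates against a threshold such as the 80% rule mentioned earlier (the function name and example rates are illustrative):

```python
def passes_disparate_impact(rate_protected, rate_majority, threshold=0.8):
    # The "80% rule": the protected group's acceptance rate must be at
    # least `threshold` times the majority group's acceptance rate.
    return rate_protected / rate_majority >= threshold

print(passes_disparate_impact(0.50, 0.75))  # 0.667 < 0.8 -> False
print(passes_disparate_impact(0.65, 0.75))  # 0.867 >= 0.8 -> True
```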
Which error rates could we equalize?
The false positive rate - people who are granted the loan but are unlikely to repay it
The false negative rate - people who are denied the loan but would have been likely to repay it
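A sketch of computing these two error rates per group so they can be compared across groups (equalizing them is the idea behind metrics such as equalized odds; the toy labels and groups are assumptions):

```python
import numpy as np

def error_rates(y_true, y_pred):
    # FPR: granted the loan (pred 1) but would not repay (true 0).
    # FNR: denied the loan (pred 0) but would have repaid (true 1).
    fpr = np.mean(y_pred[y_true == 0] == 1)
    fnr = np.mean(y_pred[y_true == 1] == 0)
    return fpr, fnr

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = would repay
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])  # 1 = loan granted
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Fairness here means similar FPR/FNR across the groups.
for g in ("a", "b"):
    m = groups == g
    print(g, error_rates(y_true[m], y_pred[m]))
```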