What do we mean by "AI for not bad"?
to use AI for one’s goals while also mitigating its risks
What is the distinction with AI for good?
AI for good is using AI to create a positive social impact
What are the consequences of ethical risks (happen at scale, reputational, regulatory, and legal)?
costly in terms of money, time, and resources to address, plus loss of reputation and consumer trust
Why are corporate codes of conduct and current regulations not adequate to cover AI ethical risks?
they were not written with AI in mind and do not account for the ways AI can behave badly
What are the big three AI ethical challenges?
bias, lack of explainability, and privacy
What is the difference between content and structure for developing AI ethics programs?
content is determining what is good or bad (what are the risks we’re trying to avoid? what does the organization see as good or bad?); structure refers to the formal mechanisms for identifying and mitigating ethical risks (how do we operationalize ethical risk mitigation?)
Why are misperceptions about the nature of ethics a major obstacle to organizational buy-in to developing AI ethics programs?
they often put a stop to fruitful discussions and are an impediment to genuine organizational buy-in
What are these misperceptions?
ethics is subjective because people disagree about what is right or wrong; science delivers truth, and since ethics isn’t a science it doesn’t deliver truth; and ethics requires an authority figure to say what’s right or wrong, otherwise it’s subjective
Why is it not advisable to focus on consumer ethical beliefs (perceptions) as the basis for the organization’s AI ethics program?
consumer beliefs are too coarse-grained for fine-grained problems, and consumers often have not yet thought through these problems
What is responsible AI?
a governance framework that documents how a specific organization is addressing the challenges around AI from both an ethical and legal point of view
Why is there a movement towards the necessity of responsible AI practices?
we need this as AI scales to control risks and bias
We saw examples of Responsible AI frameworks for many organizations. What are the common (and I would say, essential) characteristics of these frameworks?
fairness, interpretability, privacy, security, reliability (could also include transparency, accountability)
What is fairness?
impartial and just treatment or behavior without favoritism or discrimination
What is a major challenge to achieving fairness?
hard to determine what fairness actually means
What is equity vs equality?
equity = equal access to the same opportunity while equality = being treated the same
What is disparate treatment vs. disparate impact?
disparate treatment = liability could be imposed if there is an explicit classification based on the protected attribute or if there was an intent/motive to discriminate
disparate impact = even if the policy is neutral on its face, if there is a disproportionately adverse impact on minority groups, liability will be imposed
What are the three steps needed to define and quantify fairness in building fair ML systems?
what is the right definition of fair outcome for the specific use case? who selects and defines what is a fair outcome and for whom? what are the metrics we use?
Distinguish between individual fairness approaches and group fairness approaches.
group fairness = achieve the same outcomes across different demographics
individual fairness = treating similar individuals similarly
Describe the two individual fairness approaches in your slides (aware and unaware approaches) and their relative strengths and limitations.
aware approach = relies on how you define similarity between applicants, since two similar individuals should be treated similarly; runs the risk of introducing new fairness problems if your similarity metric misses important info, and it is hard to determine an appropriate metric function; this approach is aware of protected attributes
unaware approach = the algorithm is blinded to identifiable factors and attributes protected by law, such as gender, race, and sexuality; growing criticism, since success reflects not only talent/merit but also the resources and opportunities one had access to, legal scholars have concluded this form of individual fairness doesn’t promote fair outcomes, and from a computational perspective other factors can serve as hidden proxies for protected attributes; works only when inequality is not an issue (highly controlled, homogeneous environments)
demographic (statistical) parity
the rate of positive outcomes should be consistent across demographic groups; ensures minority populations are represented; majority populations may be underrepresented
equality of false negatives
ensures the false negative rate is consistent across groups; reduces harm from missed diagnoses; does not address false positive errors
equal opportunity
ensures the false positive rate is consistent across groups; fairness among those who truly need treatment; does not address false negative errors
equality as equalized odds
same true positive and false positive rates across groups; balances fairness better than simple parity and reduces discrimination from unequal mistakes; more complex, can reduce overall accuracy, and is costly
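The group fairness criteria above can be computed directly from labels, predictions, and group membership. Below is a minimal plain-Python sketch using the widely used formalizations (demographic parity as equal selection rates, equal opportunity as equal true positive rates, equalized odds as equal TPR and FPR); note it follows the common TPR-based definition of equal opportunity, which differs from the FPR phrasing in these notes. The helper names are illustrative, not from the source.

```python
from typing import Sequence


def rate(flags) -> float:
    """Fraction of True values; 0.0 for an empty sequence."""
    flags = list(flags)
    return sum(flags) / len(flags) if flags else 0.0


def group_rates(y_true: Sequence[int], y_pred: Sequence[int], groups: Sequence[str]):
    """Per-group selection rate, true positive rate, and false positive rate."""
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        out[g] = {
            "selection_rate": rate(y_pred[i] == 1 for i in idx),
            "tpr": rate(y_pred[i] == 1 for i in idx if y_true[i] == 1),
            "fpr": rate(y_pred[i] == 1 for i in idx if y_true[i] == 0),
        }
    return out


def fairness_gaps(y_true, y_pred, groups):
    """Max absolute between-group gap for each criterion (0.0 = perfectly fair)."""
    r = group_rates(y_true, y_pred, groups)

    def gap(key):
        vals = [v[key] for v in r.values()]
        return max(vals) - min(vals)

    return {
        "demographic_parity_gap": gap("selection_rate"),   # equal selection rates
        "equal_opportunity_gap": gap("tpr"),               # equal true positive rates
        "equalized_odds_gap": max(gap("tpr"), gap("fpr")),  # equal TPR and FPR
    }
```

For example, `fairness_gaps([1,0,1,1,0,1,0,0], [1,0,1,0,0,1,1,0], ["a"]*4 + ["b"]*4)` shows zero demographic-parity gap but nonzero equal-opportunity and equalized-odds gaps, illustrating why the criteria cannot all be satisfied at once.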
Why is fairness complex?
not one-size-fits-all; no set answers; cost and benefit decisions (there are tradeoffs, and certain fairness requirements cannot be satisfied simultaneously); a continuous process
What is AI Bias?
occurs when an algorithm produces results that are systematically prejudiced against a specific group(s)
Why is it important? (AI bias)
AI algorithms are increasingly used to distribute goods and services, can have a significant impact on opportunities, health, and quality of life
What are the sources of AI bias?
problems with training data & problems with testing and how you think about the use case
How can we identify and measure bias?
rely on definitions of fairness and related computational measures of bias based on these definitions
What are the major challenges in measuring bias?
metrics of fairness, timing of identifying bias, allocative and representational harms, legal issues, choosing risk mitigation strategies
What are some suggested guidelines in terms of structure that can be helpful? (AI bias)
input data; decisions/outputs (e.g., adjusting thresholds for outputs); identifying bias (often hampered by a lack of demographic data against which to compare outputs)
What are representational harms vs. allocative harms? Which one is easier to identify through computational AI fairness measures and which is not?
representational harms = algorithmically filtered depictions that are discriminatory
allocative harms = opportunities or resources unfairly assigned by the algorithm; computational measures capture allocative harms (unjust distribution of goods and services) well, while representational harms are harder to identify through computational fairness measures
What are some bias mitigation strategies (in terms of input data, adjusting thresholds, identifying bias)?
input data — get more data, get better proxies, examine if one size fits all is right, alter inputs/adjust weights, use bias corrected synthetic data
decisions/outputs — use a binary yes or no
identifying bias — unclear, synthetic data may be an option
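One of the output-side mitigations listed above, adjusting decision thresholds, can be sketched as choosing a per-group score threshold so that each group's selection rate approaches a target. This is an illustrative post-processing sketch, not a method prescribed by the notes; the function name and brute-force search over observed scores are assumptions.

```python
def per_group_thresholds(scores, groups, target_rate):
    """For each group, pick the score threshold (select scores >= t)
    whose selection rate is closest to target_rate."""
    thresholds = {}
    for g in set(groups):
        s = sorted(sc for sc, gi in zip(scores, groups) if gi == g)
        # Baseline candidate: select nobody (threshold above all scores).
        best_t, best_err = float("inf"), abs(0.0 - target_rate)
        for t in s:  # each observed score is a candidate threshold
            sel = sum(1 for x in s if x >= t) / len(s)
            err = abs(sel - target_rate)
            if err < best_err:
                best_t, best_err = t, err
        thresholds[g] = best_t
    return thresholds
```

With scores `[0.9, 0.8, 0.3, 0.1]` for group "a" and `[0.6, 0.5, 0.4, 0.2]` for group "b" and a 50% target, the groups get different cutoffs (0.8 vs 0.5), equalizing selection rates that a single global threshold would not.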
What is AI explainability?
the ability to understand and explain the outputs of a machine learning model in human terms
What are its objectives? Why is it important? Who are the various stakeholders that need explanations?
explain to justify = justify outcome
explain to control = prevent things from going wrong
explain to discover = help humans learn
explain to improve = continuously improve the model
importance = enables monitoring and accountability, both during production and out in the world, to ensure fairness, support debiasing, and mitigate model drift
lack of explanation may lead to ethical, regulatory, or legal risk
all system builders, end user decision makers, end consumers, regulatory bodies
What are people explanations and what are machine explanation? How do they differ?
people explanations = why people made the decisions they did
machine explanation = how the model arrived at its outputs given the inputs
What are the various types of machine explanations?
global explanation = understanding the logic of a model as a whole
local explanation = understanding the reasons behind a specific prediction
intrinsic (direct) = models are interpretable due to their simple nature
post-hoc = interpreting a black box model like a neural network by applying interpretability methods after training the model or while in production
model specific = limited to specific model or groups of models
model agnostic = works on any model; does not look inside the black box of the algorithm but works with input-output pairs
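A model-agnostic local explanation of the kind described above can be sketched as a simple perturbation method: treat the model as a black box, replace one feature at a time with a baseline value, and attribute the change in output to that feature. This is the idea behind occlusion-style attribution; tools like LIME and SHAP are more principled variants. The `local_attributions` helper below is an illustrative assumption, not from the source.

```python
def local_attributions(predict, x, baseline):
    """Model-agnostic local explanation: attribute the prediction for x
    to each feature by swapping it for a baseline value and measuring
    the drop in output (uses input-output pairs only)."""
    base_score = predict(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]  # "occlude" feature i
        attributions.append(base_score - predict(perturbed))
    return attributions
```

For a toy black box `predict = lambda x: 2*x[0] + 0.5*x[1]`, the call `local_attributions(predict, [3, 4], [0, 0])` returns `[6.0, 2.0]`, correctly crediting the first feature with most of the prediction.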
Who determines what explanations are needed for a specific use case?
end user, developer, executive, regulator
Under what conditions are explanations not needed and under what conditions are they needed?
not needed:
when model does not directly deal with decisions about how anyone should be treated
when a people explanation of why you want to use a black box, combined with informed consent, justifies its use
needed:
when expressing respect is ethically required
when people need to know how to get better results
when people need to know how to approach and make a decision
when outputs are strange
when you need to justify treatment
What makes for good explanations?
objectives — respect, enabling well-informed decisions
characteristics — truth, ease/efficiency/effectiveness of use, intelligibility
How can we use explanations to build the appropriate level of trust (avoid overreliance and avoid algorithmic aversion)? How can we appropriately calibrate trust?
helps users understand when to trust systems vs their own judgment
explain in the moment and overall
articulate data sources
explain what is important
account for situational stakes
decide if and how to show model confidence to manage AI influence
What are example-based explanations? How are they useful?
similar past examples or representative cases that influence or resemble the current prediction (e.g., loan approval, hiring, medical diagnosis)
useful because concrete precedents are often easier for users to understand than abstract feature weights
What are confidence displays?
AI level of certainty in prediction, recommendation, or output
helps users to know when to trust vs question system
keeps a human in the loop for exceptions
helps prioritize uncertain cases for human intervention
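The human-in-the-loop routing described above can be sketched as a simple confidence gate: predictions at or above a threshold are auto-handled, while the rest are queued for human review, most uncertain first. The function name, tuple format, and threshold value are illustrative assumptions.

```python
def route_by_confidence(predictions, threshold=0.8):
    """Split (item_id, label, confidence) outputs into an auto-handled
    list and a human-review queue, prioritizing the least confident."""
    auto, review = [], []
    for item_id, label, confidence in predictions:
        if confidence >= threshold:
            auto.append((item_id, label))
        else:
            review.append((item_id, label, confidence))
    # Most uncertain cases first: highest priority for human intervention.
    review.sort(key=lambda r: r[2])
    return auto, review
```

For example, routing `[(1, "approve", 0.95), (2, "deny", 0.55), (3, "approve", 0.70)]` auto-handles item 1 and queues items 2 and 3 for review, with the 0.55-confidence case first.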
What is privacy?
the claim of groups, individuals, and institutions to determine for themselves when, how, and to what extent info about them is communicated to others
What are the three aspects of privacy?
ethics, regulatory compliance, cybersecurity
The privacy act of 1974 centers around “notice and consent.” What issues around consent make this not an effective privacy protection regulation?
privacy policies are too long and numerous for people to actually read, so consent is not meaningful; worsened by IoT devices, smartphones, and AI chatbots
What are the privacy concern issues that are unique to AI and AI privacy issues?
AI is data hungry
magnifies ability to use personal info (raising analysis of personal info to new levels of power and speed)
personal info can be revealed or inferred through predictions
data persistence (data existing longer than the humans that created it)
data repurposing (data being used beyond its originally imagined purpose)
data spillovers (data collected on people who are not the target)
What are the four elements that organizations can combine to define their level of privacy? What does each mean?
transparency = knowing what info is being collected, what is done with it, what decisions contribute to, who it has been shared with or sold to
data control = have the ability to collect, edit, or delete info about oneself and opt in or opt out of being treated in a certain way
opt in/out by default = are users automatically opted in to collection of their info, or is the default opt-out?
full services = orgs may need to adjust level of services provided based on the amount and type of data a person shares
Describe the five levels of privacy for organizations
blindfolded and handcuffed = not knowledgeable and not in control (passive)
handcuffed = knowledgeable but not in control (passive)
pressured = knowledgeable and some degree of control
slightly curtailed = knowledgeable and data has not been collected/used without their consent
free and grateful = full independent service offered by orgs; users are knowledgeable and decide what data to consent to sharing
What do we mean by surveillance capitalism, attention economy, and intention economy? Why are these important with respect to AI privacy issues?
surveillance capitalism = orgs collect large amounts of data, monitor behavior, and turn data into products to predict and influence future behavior
attention economy = human attention is scarce and valuable so platforms need to capture and keep it
intention economy = AI systems predict and influence what users want
important = risk is not just about having data but how it is used — reducing autonomy, prediction, influence, manipulation, targeting
What are the main components of structure?
roles and responsibilities
policies
processes and procedures
What is the objective of AI governance?
how to create, scale, and maintain an AI ethical risk governance structure in your organization to systematically and comprehensively identify and manage the ethical, reputational, regulatory, and legal risks of AI
What is the typical first step in creating AI governance?
articulating ethical principles for AI development and deployment
Describe the problems with current ethics statements that prevent them from being helpful.
they lump together content & structure, ethical and nonethical values, and instrumental & noninstrumental values, and they describe overly abstract values
What is the recommended 4-step approach to enable an organization to develop relevant and actionable ethical value statements?
state your values by thinking about your ethical nightmares
explain why you value what you do in a way that connects to your organization’s mission or purpose
connect your values to what you take to be ethically impermissible
articulate how you will realize your ethical goals or avoid your ethical nightmares
What are the advantages of this approach to developing AI ethical values?
Defines values and strategies in a way that enables action (helps determine KPIs) and connects to what is ethically off limits
Can perform gap analysis based on the values you have specified to see where your company is and where it needs to be
If you involve stakeholders across the organization, you create awareness and buy in
By articulating what is ethically impermissible and why, you enable people to think about ethically tough cases
Can be used for branding and public relations
How do we create an AI Governance Structure? Describe the 7-step approach outlined in the chapter and the importance of executive leadership and ownership of this structure.
Articulate clear AI ethical standards
Create organizational awareness of AI ethical standards
Provide teams with the tools and processes to identify and mitigate AI ethical risk
Expert oversight (beyond data scientists)
Assign role specific responsibilities for accountability
AI ethical risk program with KPIs
Executive ownership of the AI ethical risk program → gives authority and resources, drives organization wide adoption, aligns incentives, ensures accountability, signals ethics is strategic not optional
What is ethical case law? How do we create it? Why is this important and useful to have in deciding on AI ethical questions? Why should we create this a priori?
Ethical case law = like legal case precedents — prior cases where the org faced an ethical dilemma and how it resolved it well, prior cases from other similar orgs, fictional cases and their resolution
One way → make a statement and see how much consensus there is
Develops a skill to tackle tough ethical questions well
Deciding a priori means the resolution is not compromised just because the right thing to do in the specific live situation is painful
Why is it critical to create AI ethical risk organizational awareness? (AI applications are procured across the organization [e.g., HR, marketing, etc.] by people who are not aware of the potential for, and are unable to assess, AI ethical risk)
AI is procured and used by many departments and units within an organization
Personnel in these units need to be aware of sources of AI ethical risk and be able to assess it
What do product development teams need to identify and mitigate AI ethical risk?
Concrete tools and processes
Why shouldn’t the identification and mitigation of AI ethical risk be solely the responsibility of data scientists and product developers?
AI applications are procured and deployed across the organization by people who are not aware of the potential for and are unable to assess AI ethical risk
Discuss the function, membership, and jurisdiction of an AI ethics committee (AIEC).
Function → to play an oversight role in systematically and comprehensively identifying and mitigating the ethical risks of AI products that are developed in house or that are procured from third party vendors
Membership → data scientist, product design expertise, ethics adjacent experts, ethicist, subject matter experts, 1 member that is unaffiliated with the org
Jurisdiction → review AI applications across the org, especially higher risk use cases procured or deployed in functions such as HR, marketing, and other business areas
What are the risks of not having an AI ethics committee?
Increased risk of not identifying AI ethical risks
Risks of identifying AI ethical risks only in deployment, when it is costly to address
Inconsistencies in ethical standards across departments
More opportunities for conflict between short term career goals, short term profit and the long term welfare of the org
Discuss what is involved in creating an accountability structure in the organization
Assign role-specific responsibilities aimed at identifying and mitigating AI ethical risks
Incentives to fulfill responsibilities and disincentives for those who do not fulfill responsibilities
Recognize those who adhere to AI ethical standards
Make this part of the annual evaluation process and incentives
To monitor how well we are doing on our AI Ethics program, it is helpful to have KPIs on (a) the extent to which the organization is adopting or complying with these standards; and (b) the extent to which meeting these standards sufficiently mitigates AI ethical risk. Understand both these aspects and what kinds of KPIs are involved in assessing each.
Measure the extent to which the org is adopting or complying with standards → part of compliance and can be measured by compliance teams
Measure the extent to which meeting those standards sufficiently mitigates risk → are we actually achieving our ethical goals and avoiding our ethical nightmares & what are the KPIs for fairness, respect, and privacy
Why may tools provided to developers not be effective in identifying and mitigating AI ethical risk?
Tools need to be used to be effective
If teams lack buy-in, they are not interested in using the tools
If the tools do not fit their workflow
If there is no organizational incentive to use the tools
If the team does not possess the requisite concepts, knowledge, and training to use the tools
Why is it important for development teams to take a practical approach to ethics (rather than viewing ethics from the lens of ethical theories)?
The goal is to identify and mitigate AI ethical risk in real use cases → development teams need actionable guidance focused on what could go wrong, who could be wronged, and how to reduce those risks
Differentiate between harming someone and wronging someone. Why is it more productive to look at AI ethics from the perspective of wronging someone?
Harming someone = negatively affecting people’s psychological states or well-being
Wronging someone = whether we are defaulting on our obligations to them, whether we are violating their rights, or stopping them from receiving something they deserve (what is ethically permissible, what rights might be violated, what obligations might be defaulted on)
More productive → helps identify and mitigate ethical risk by focusing on how people may be wronged by what you create, how you create it, what people do with it, what impacts it has, and what you do about those impacts
Why is it important to include people with training in ethics in identifying and mitigating ethical risk?
They identify ethical problems much faster
Skills is needed to navigate projects with complex ethical questions and to help others navigate
Help with value sensitive design in which products are designed in light of the values of people who will be affected directly or indirectly by the deployment of the product
Understand the five categories that help us identify and mitigate ethical risk through asking questions about what are the ethical risks by virtue of (a) What you create; (b) How you create it; (c) What people do with it; (d) What impacts it has; (e) What you do about those impacts.
What you create → what are the ethical risks, should we do this, how can we develop and deploy it in a way that mitigates the risk
How you create it → does the way the product is created give rise to ethical risk
What people do with it → what ethical risks could result from what ignorant and malicious users may do with it, what features should we (not) include to mitigate these risks, and what ethical best practices do we need to articulate for use of the product
What impacts it has → how AI behaves in the wild, how do we continue to train it with more data/kind of data
What you do about those impacts → how do we continue to create this product in a way that mitigates risks we did not foresee and now need to account for, what kinds of people are using our products in ways we did not foresee and now need to account for, and do we pull the product or modify it
Be aware of the various types of tools available
Checklists/lists of questions
AI ethics statements and AI ethical case law
5 ethical levels of privacy
Decision tree to determine whether explainability is important
Premortem analysis
Ethical red teaming
Playing devil’s advocate
Stakeholder interviews and analyses
Metrics of fairness
What is “Ethics by Design”?
The intentional embedding of ethical and human use principles into the process of designing, developing, and delivering software and services
What is the main balancing act for AI regulation?
Balance regulation to safeguard against AI negative consequences while not stifling innovation
Why is there a need to regulate AI?
Mitigate risks, ensure accountability, protect fundamental rights, foster trust and innovation
What is the spectrum of approaches to regulation?
No regulation → self regulation → regulation redesign → new regulation → moratorium
Hard law, soft law, sectoral regulation, self regulation
Describe the EU AI Act.
EU AI Act = risk-based tiers (unacceptable → high → limited → minimal); strict oversight of high-risk systems
Unacceptable = banned
High risk = have to be registered in an EU database → assessed before being put on the market and also throughout their lifecycle
Generative AI = must comply with transparency requirements → disclose what content is AI-generated, design the model to prevent it from generating illegal content, publish summaries of copyrighted data used for training
Limited risk = transparency requirements that allow users to make informed decisions; users must be made aware they are interacting with AI, and after interacting can decide if they want to continue
Minimal risk = largely unregulated (e.g., spam filters, AI-enabled video games)
Describe the AI Bill of Rights
AI bill of rights = 5 principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public
Safe and effective systems
Algorithmic discrimination protections
Data privacy
Notice and explanation
Human alternatives, consideration, and fallback
Applies to AI systems that impact our rights, opportunities, or access to critical resources or services
What are some important intellectual property rights questions that AI gives rise to?
Who owns AI creations
Who is responsible
What issues does awarding personhood status to AI create?
AI lacks consciousness and intent, which undermines meaningful accountability