What are the three key questions to ask about algorithmic decisions?
Is the decision fair?
Who made the decision? Who is responsible, and where does accountability rest?
How was the decision made? Can we understand the decision process, and how transparent is it?
What is the relationship between ethics, FAT, and product adoption?
Ethics → Trust-enhancing factors (FAT: Fairness, Accountability, Transparency) → Product Adoption
What percentage of Americans do not trust autonomous vehicles?
78% of Americans do not trust AVs (American Automobile Association, Nature 2017)
What are examples of domains where algorithmic decision-making occurs?
Predicting employees' success (hiring), predicting academic performance, predictive policing and judicial decisions (recidivism), predicting driving outcomes, predicting sports judgments
What does COMPAS stand for?
Correctional Offender Management Profiling for Alternative Sanctions
What is the purpose of COMPAS?
To predict how likely a defendant is to commit a felony or misdemeanor once released from prison
Who developed COMPAS and when has it been in use?
Developed by the company Northpointe (now "equivant"); in use since 2000, with predictions made for more than 1 million offenders
What is the COMPAS risk score range?
Scores from 1-10 (10 = highest risk score)
Why is COMPAS controversial regarding transparency?
The algorithm is proprietary and thus a trade secret, providing little transparency over the decision-making process
What were the key findings of ProPublica's 2016 investigation of COMPAS?
Only 20% of people predicted to commit violent crimes actually did so
When the full range of crimes, including misdemeanors, was taken into account, the algorithm was only somewhat more accurate than a coin flip
Overall: 61% of those deemed likely to re-offend were arrested for subsequent crimes within two years
What is Northpointe's definition of fairness?
Scores map to equal probability of actual re-offending among both blacks and whites (race does not matter): within each risk category, the proportion of defendants who reoffend is approximately the same regardless of race.
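A minimal sketch of how this notion (calibration, also called predictive parity) could be checked, assuming a pandas DataFrame with hypothetical columns decile_score, race, and a boolean reoffended; these names are illustrative, not the real COMPAS schema:

```python
import pandas as pd

def calibration_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Observed re-offending rate per (risk score, race) cell."""
    return (
        df.groupby(["decile_score", "race"])["reoffended"]
          .mean()            # share of each cell that actually reoffended
          .unstack("race")   # one row per score, one column per race
    )

# Under Northpointe's definition, each row of the resulting table
# (one row per risk score) should look roughly the same in every column.
```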
What is ProPublica's definition of fairness?
Keep false positive and false negative rates equal between races: among defendants who ultimately did not reoffend, the same proportion should be wrongly labeled high risk, and among those who did reoffend, the same proportion should be wrongly labeled low risk.
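The complementary check for ProPublica's notion (error-rate balance), a sketch under the same assumed columns plus a hypothetical boolean high_risk prediction:

```python
import pandas as pd

def error_rates_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """FPR and FNR per race; assumes boolean high_risk and reoffended columns."""
    def rates(g: pd.DataFrame) -> pd.Series:
        # FPR: among actual non-reoffenders, share labeled high risk
        fpr = (g["high_risk"] & ~g["reoffended"]).sum() / (~g["reoffended"]).sum()
        # FNR: among actual reoffenders, share labeled low risk
        fnr = (~g["high_risk"] & g["reoffended"]).sum() / g["reoffended"].sum()
        return pd.Series({"FPR": fpr, "FNR": fnr})
    return df.groupby("race").apply(rates)

# Under ProPublica's definition, the FPR and FNR columns should each be
# roughly equal across the rows (races).
```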
Why can't algorithmic systems implement multiple conceptualizations of fairness?
It is mathematically impossible to satisfy both definitions of fairness simultaneously whenever the groups have different base rates and prediction is imperfect (e.g., in the Broward County data, black defendants recidivated at 51% vs. 39% for white defendants).
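One way to make the conflict precise is an identity from Chouldechova (2017), rewritten here from the standard confusion-matrix definitions:

```latex
% For a binary risk label applied to a group with base rate p,
% the error rates and positive predictive value (PPV) are linked by
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\left(1-\mathrm{FNR}\right)
```

If PPV is equal across groups (Northpointe's notion) while the base rates p differ (0.51 vs. 0.39), then FPR and FNR cannot both be equal across groups (ProPublica's notion) unless prediction is perfect.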
What were the false positive rates by race in COMPAS?
False positive rate for blacks: 44.9%; False positive rate for whites: 23.5%
What were the false negative rates by race in COMPAS?
False negative rate for blacks: 28.0%; False negative rate for whites: 47.7%
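These rates follow directly from each group's confusion matrix. The counts below are illustrative values chosen to reproduce the reported percentages (up to rounding), not necessarily ProPublica's exact published table:

```python
# FPR = FP / (FP + TN): share of non-reoffenders labeled high risk.
# FNR = FN / (FN + TP): share of reoffenders labeled low risk.
black = {"FP": 805, "TN": 990, "FN": 532, "TP": 1369}  # assumed counts
white = {"FP": 349, "TN": 1139, "FN": 461, "TP": 505}  # assumed counts

for name, m in [("black", black), ("white", white)]:
    fpr = m["FP"] / (m["FP"] + m["TN"])
    fnr = m["FN"] / (m["FN"] + m["TP"])
    print(f"{name}: FPR={fpr:.1%}, FNR={fnr:.1%}")
# black: FPR=44.8%, FNR=28.0%   (matches the card up to rounding)
# white: FPR=23.5%, FNR=47.7%
```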
What is the Broken Window Theory (1982) and how does it relate to bias in data?
Neighborhoods with visible civil disorder → more police presence → more arrests → more recorded crime, a feedback loop that biases the data the algorithms learn from
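A toy sketch of that feedback loop, with all numbers assumed for illustration: two neighborhoods have identical true crime rates, patrols are sent where past recorded arrests are highest, and only patrolled crime gets recorded.

```python
# Two neighborhoods with the same underlying offending; a small initial
# disparity in *recorded* arrests steers patrols, and patrols create records.
true_crime = [100, 100]   # identical underlying offenses per year (assumed)
recorded = [12, 10]       # small initial disparity in recorded arrests (assumed)

for year in range(5):
    hot = 0 if recorded[0] >= recorded[1] else 1  # patrol the "hot spot"
    recorded[hot] += true_crime[hot]              # only patrolled crime is seen

print(recorded)  # [512, 10]: the data now "shows" neighborhood 0 is high-crime
```

The recorded data ends up reflecting where police looked, not where crime happened, which is exactly the bias that then feeds predictive-policing models.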