4 Fairness, Accountability & Transparency in AI Systems

1

What are the three key questions to ask about algorithmic decisions?

  • Is the decision fair?

  • Who made the decision? Who is responsible? Where does accountability rest?

  • How was the decision made? Can we understand the decision process? How transparent is the process?

2

What is the relationship between ethics, FAT, and product adoption?

Ethics → Trust-enhancing Factors (FAT) → Product Adoption

3

What percentage of Americans do not trust autonomous vehicles?

78% of Americans do not trust autonomous vehicles (AVs) (American Automobile Association survey, cited in Nature, 2017)

4

What are examples of domains where algorithmic decision-making occurs?

Predicting employee success (hiring), predicting academic performance, predictive policing and judicial decisions (recidivism), predicting driving outcomes, predicting sports judgments

5

What does COMPAS stand for?

Correctional Offender Management Profiling for Alternative Sanctions

6

What is the purpose of COMPAS?

To predict how likely a defendant is to commit a felony or misdemeanor once released from prison

7

Who developed COMPAS and when has it been in use?

Developed by the company Northpointe (now "equivant"); in use since 2000, with predictions made for more than 1 million offenders

8

What is the COMPAS risk score range?

Scores range from 1 to 10 (10 = highest risk)

9

Why is COMPAS controversial regarding transparency?

The algorithm is proprietary and thus a trade secret, so there is little transparency into the decision-making process

10

What were the key findings of ProPublica's 2016 investigation of COMPAS?

  • Only 20% of people predicted to commit violent crimes actually did so

  • For misdemeanors, the algorithm was just above 50% correct

  • Overall: 61% of those deemed likely to re-offend were arrested for subsequent crimes within two years

11

What is Northpointe's definition of fairness?

Scores map to equal probabilities of actual re-offending for both black and white defendants (calibration): within each risk category, the proportion of defendants who reoffend is approximately the same regardless of race.
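
A minimal numeric sketch of this calibration criterion (all data below is simulated for illustration; it is not the COMPAS dataset): when the observed outcome depends only on the score, the per-bucket reoffending rates line up across groups.

```python
import numpy as np

# Toy stand-ins for the real COMPAS columns (hypothetical data, not the
# actual Broward County records).
rng = np.random.default_rng(0)
n = 5000
group = rng.choice(["black", "white"], size=n)
score = rng.integers(1, 11, size=n)           # risk score, 1-10
reoffended = rng.random(n) < score / 12.0     # outcome depends on score only

# Calibration check: P(reoffend | score bucket) should match across groups.
for bucket in range(1, 11):
    rates = {
        g: reoffended[(group == g) & (score == bucket)].mean()
        for g in ("black", "white")
    }
    print(bucket, rates)
```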

12

What is ProPublica's definition of fairness?

Keep false positive and false negative rates equal between races (error-rate balance), focusing on misclassified defendants, e.g., those who ultimately did not reoffend but were labeled high risk.

13

Why can't algorithmic systems implement multiple conceptualizations of fairness?

It is mathematically impossible to satisfy both definitions of fairness simultaneously (except in degenerate cases such as a perfect predictor) when base rates differ between groups: in the Broward County data, black defendants recidivated at 51% vs. 39% for white defendants.
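
One standard way to make this impossibility precise (following Chouldechova, 2017; the algebra below is a sketch, not part of the original material):

```latex
% For any classifier, the error rates, positive predictive value (PPV),
% and base rate p (prevalence of reoffending) are linked by:
\[
  \mathrm{FPR} = \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot (1-\mathrm{FNR})
\]
% If the score is equally calibrated across groups (equal PPV) and the
% base rates differ (p = 0.51 vs. p = 0.39 in Broward County), the factor
% p/(1-p) is about 1.04 vs. 0.64, so the FPRs cannot also be equal: equal
% FNR forces the higher-base-rate group's FPR up by a factor of ~1.6.
```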

14

What were the false positive rates by race in COMPAS?

False positive rate for blacks: 44.9%; False positive rate for whites: 23.5%

15

What were the false negative rates by race in COMPAS?

False negative rate for blacks: 28.0%; False negative rate for whites: 47.7%
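
A minimal sketch of how these per-group error rates are computed (numpy; the variable names and the `score >= 5` cutoff are illustrative assumptions, with ProPublica's published Broward County dataset as the real input):

```python
import numpy as np

def error_rates(pred_high_risk, reoffended, group):
    """FPR = FP/(FP+TN) and FNR = FN/(FN+TP), computed per group."""
    out = {}
    for g in np.unique(group):
        m = group == g
        fp = np.sum(pred_high_risk[m] & ~reoffended[m])
        tn = np.sum(~pred_high_risk[m] & ~reoffended[m])
        fn = np.sum(~pred_high_risk[m] & reoffended[m])
        tp = np.sum(pred_high_risk[m] & reoffended[m])
        out[g] = {"FPR": fp / (fp + tn), "FNR": fn / (fn + tp)}
    return out

# Hypothetical usage with boolean numpy arrays, e.g.:
#   error_rates(score >= 5, reoffended, race)
# On the Broward County data this kind of computation yields the figures in
# cards 14-15: FPR 44.9% / FNR 28.0% (black) vs. FPR 23.5% / FNR 47.7% (white).
```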

16

What is the Broken Window Theory (1982) and how does it relate to bias in data?

Visible civil disorder in a neighborhood → more police presence → more arrests → data shows more crime → even more policing. This feedback loop biases the arrest data that predictive systems learn from.
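
A toy simulation of that feedback loop (entirely hypothetical numbers, not from the original material): when patrol allocation follows past arrest counts and arrests scale with patrol presence, a historical bias in the data never corrects itself, even though the true crime rates are identical.

```python
import numpy as np

# Two neighborhoods with IDENTICAL true crime rates but a historically
# biased 60/40 split in recorded arrests.
true_crime_rate = np.array([0.10, 0.10])
arrests = np.array([60.0, 40.0])

for year in range(10):
    police_share = arrests / arrests.sum()             # patrols follow past arrests
    arrests += 1000 * true_crime_rate * police_share   # arrests follow patrols

print(arrests / arrests.sum())  # still ~[0.6, 0.4]: the bias is self-confirming
```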
