Data Science Quiz 2


18 Terms

1. Mean Residual Difference

Average difference in residuals (errors) between the protected and unprotected groups; ideally 0.
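The card above can be sketched in a few lines. This is a minimal illustration, not a library function; the data, group labels, and helper name are made up.

```python
# Minimal sketch: mean residual difference between two groups.
def mean_residual_by_group(y_true, y_pred, group):
    """Average residual (y_true - y_pred) within each group."""
    residuals = {}
    for yt, yp, g in zip(y_true, y_pred, group):
        residuals.setdefault(g, []).append(yt - yp)
    return {g: sum(r) / len(r) for g, r in residuals.items()}

# "A" plays the protected group, "B" the unprotected one (hypothetical data).
means = mean_residual_by_group(
    y_true=[10, 12, 9, 11],
    y_pred=[9, 11, 11, 13],
    group=["A", "A", "B", "B"],
)
difference = means["A"] - means["B"]  # ideally 0
```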

2. R² by Group

Checks whether the model fits one group better than another. A lower R² for one group suggests possible bias in performance.
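A minimal sketch of computing R² separately per group, using the standard definition R² = 1 − SS_res / SS_tot; the helper name and data are hypothetical.

```python
def r2_by_group(y_true, y_pred, group):
    """R^2 = 1 - SS_res / SS_tot, computed separately within each group."""
    buckets = {}
    for yt, yp, g in zip(y_true, y_pred, group):
        buckets.setdefault(g, []).append((yt, yp))
    scores = {}
    for g, pairs in buckets.items():
        mean_y = sum(yt for yt, _ in pairs) / len(pairs)
        ss_res = sum((yt - yp) ** 2 for yt, yp in pairs)
        ss_tot = sum((yt - mean_y) ** 2 for yt, _ in pairs)
        scores[g] = 1 - ss_res / ss_tot
    return scores

# Group A is fit perfectly; group B is fit badly (hypothetical data).
scores = r2_by_group(
    y_true=[1, 2, 3, 1, 2, 3],
    y_pred=[1, 2, 3, 3, 2, 1],
    group=["A", "A", "A", "B", "B", "B"],
)
```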

3. Statistical Parity in Predictions

Compares average predicted values across groups; ideally the group means are equal.

4. Group Mean Error/RMSE

Compares average prediction errors. Identifies whether one group gets consistently lower/higher predictions. 
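Cards 3 and 4 share the same mechanics, so one sketch covers both: the per-group mean prediction (statistical parity) and the per-group RMSE (group error). The helper name and data are hypothetical.

```python
import math

def group_prediction_stats(y_true, y_pred, group):
    """Mean prediction (statistical parity) and RMSE (group error) per group."""
    buckets = {}
    for yt, yp, g in zip(y_true, y_pred, group):
        buckets.setdefault(g, []).append((yt, yp))
    stats = {}
    for g, pairs in buckets.items():
        mean_pred = sum(yp for _, yp in pairs) / len(pairs)
        rmse = math.sqrt(sum((yt - yp) ** 2 for yt, yp in pairs) / len(pairs))
        stats[g] = {"mean_pred": mean_pred, "rmse": rmse}
    return stats

# Hypothetical data: group B gets higher predictions and larger errors.
stats = group_prediction_stats(
    y_true=[10, 12, 9, 11],
    y_pred=[9, 11, 11, 13],
    group=["A", "A", "B", "B"],
)
```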

5. Demographic Parity

Equal probability of positive prediction across groups.
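A minimal sketch of checking demographic parity, assuming binary (0/1) predictions; the helper name and data are made up.

```python
def positive_rate_by_group(y_pred, group):
    """P(prediction = 1) within each group; equal rates = demographic parity."""
    counts = {}
    for yp, g in zip(y_pred, group):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (yp == 1))
    return {g: pos / n for g, (n, pos) in counts.items()}

# Hypothetical data: group B receives positive predictions more often.
rates = positive_rate_by_group(
    y_pred=[1, 0, 1, 1],
    group=["A", "A", "B", "B"],
)
```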

6. Equal Opportunity

Equal True Positive Rate (TPR) across groups.

7. Equalized Odds

Equal TPR and FPR across groups.
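Cards 6 and 7 both reduce to comparing confusion-matrix rates per group: equal opportunity compares TPR only, equalized odds compares TPR and FPR. A minimal sketch, assuming binary labels and that each group contains both actual positives and actual negatives; the helper name and data are hypothetical.

```python
def tpr_fpr_by_group(y_true, y_pred, group):
    """True and false positive rates per group."""
    tallies = {}  # group -> [TP, FN, FP, TN]
    for yt, yp, g in zip(y_true, y_pred, group):
        t = tallies.setdefault(g, [0, 0, 0, 0])
        if yt == 1:
            t[0 if yp == 1 else 1] += 1  # TP or FN
        else:
            t[2 if yp == 1 else 3] += 1  # FP or TN
    return {
        g: {"tpr": tp / (tp + fn), "fpr": fp / (fp + tn)}
        for g, (tp, fn, fp, tn) in tallies.items()
    }

# Hypothetical data: the model is both more sensitive and more precise on B.
rates = tpr_fpr_by_group(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 1, 1, 0, 0],
    group=["A", "A", "A", "A", "B", "B", "B", "B"],
)
```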

8. Predictive Parity

Equal Positive Predictive Value (precision) across groups.
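A minimal sketch of predictive parity: precision (TP / predicted positives) computed per group, assuming binary labels. The helper name and data are hypothetical.

```python
def precision_by_group(y_true, y_pred, group):
    """Positive predictive value among predicted positives, per group."""
    counts = {}
    for yt, yp, g in zip(y_true, y_pred, group):
        if yp == 1:
            tp, total = counts.get(g, (0, 0))
            counts[g] = (tp + (yt == 1), total + 1)
    return {g: tp / total for g, (tp, total) in counts.items()}

# Hypothetical data: half of A's positive predictions are wrong.
precision = precision_by_group(
    y_true=[1, 0, 1, 1, 0],
    y_pred=[1, 1, 1, 1, 0],
    group=["A", "A", "B", "B", "B"],
)
```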

9. Calibration

Predicted scores mean the same thing for all groups: for the same predicted score, actual outcome rates should match. No group should have a lower actual chance of the outcome at an equal score.
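A minimal calibration check: bin predictions by score and compare actual outcome rates across groups within each bin. The binning rule (round to one decimal), helper name, and data are all assumptions for illustration.

```python
def outcome_rate_by_score(scores, outcomes, group):
    """Actual positive rate per (group, predicted-score) cell.

    Scores are rounded to one decimal so similar predictions share a bin.
    Calibration holds when, within the same bin, rates match across groups.
    """
    cells = {}
    for s, y, g in zip(scores, outcomes, group):
        key = (g, round(s, 1))
        n, pos = cells.get(key, (0, 0))
        cells[key] = (n + 1, pos + y)
    return {key: pos / n for key, (n, pos) in cells.items()}

# Hypothetical data: at a ~0.7 score, group A's actual outcome rate is lower.
rates = outcome_rate_by_score(
    scores=[0.7, 0.72, 0.69, 0.71],
    outcomes=[1, 0, 1, 1],
    group=["A", "A", "B", "B"],
)
```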

10. Demographic Groups

Categories of people defined by sensitive attributes/variables in the dataset.

11. Sensitive Attributes

The attributes related to fairness and equity. Any attributes that could be a basis for bias or discrimination.

12. What happens when we measure a fairness metric?

We compare the model's performance outcomes across different demographic groups.

13. Performance metrics: purpose, focus, question, level, used in

Purpose: measure how well the model predicts
Focus: overall accuracy (etc.)
Question: is the model good?
Level: global
Used in: model evaluation and training

14. Fairness metrics: purpose, focus, question, level, used in

Purpose: measure how fairly the model treats groups
Focus: equality of performance across sensitive groups
Question: is the model fair?
Level: group-level
Used in: ethical analysis, bias detection, and compliance

15. SHAP

SHapley Additive exPlanations

16. LIME

Local Interpretable Model-Agnostic Explanations

17. What are SHAP and LIME?

Explainability tools; both fall under the evaluation category of model interpretability.

18. Why do we care about local explanation/how the model reasons?

Fairness/Accountability: explain the decision for one person.
Identify anomalies: see if there are outliers where the model behaves differently.
Personalization/User experience: give guidance to one person.
Regulatory/Ethical: rules may require explanations down to the individual.

In short: trust, debugging bias, and regulatory transparency.