* 2 sets of related/non-independent data * Eg – siblings, repeated measurements from the same person
**Matched data**
* Extension of paired data to more than two groups
When are parametric methods used?
* More powerful to detect genuine differences * Can only be used for continuous, normally distributed outcomes
When are non-parametric methods used?
* Less powerful to detect genuine differences * Can be used for any numerical outcome, including non-normal continuous or discrete outcomes
What are non-parametric methods based on?
Ranks instead of actual data values
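A quick illustration of this idea (a sketch using SciPy's rankdata; the values are invented):

```python
from scipy.stats import rankdata

# Invented, skewed sample values and the ranks a non-parametric test would use instead
values = [2.1, 2.3, 2.4, 2.6, 14.9]   # one extreme value
ranks = rankdata(values)               # [1. 2. 3. 4. 5.]

# The extreme value 14.9 only contributes its rank (5), so it cannot dominate
# the test statistic the way it would in a method based on the raw values.
print(ranks)
```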
Parametric tests used for comparing numerical outcomes between the following groups
* Comparing 2 unpaired groups – unpaired t test
* Comparing 2 paired groups – paired t test
* Comparing 3+ unmatched groups – one-way ANOVA
* Comparing 3+ matched groups – repeated-measures ANOVA
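A minimal sketch of how these four parametric comparisons could be run in Python, assuming SciPy and statsmodels are available (all data values are invented):

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Invented outcome values for three small groups/conditions of 4 subjects each
group_a = [5.1, 4.8, 5.6, 5.0]
group_b = [6.2, 5.9, 6.5, 6.1]
group_c = [7.0, 6.8, 7.3, 7.1]

# 2 unpaired groups -> unpaired (independent-samples) t test
print(stats.ttest_ind(group_a, group_b))

# 2 paired groups (same subjects measured twice) -> paired t test
print(stats.ttest_rel(group_a, group_b))

# 3+ unmatched groups -> one-way ANOVA
print(stats.f_oneway(group_a, group_b, group_c))

# 3+ matched groups (same subjects under every condition) -> repeated-measures ANOVA
long_data = pd.DataFrame({
    "subject": [1, 2, 3, 4] * 3,
    "condition": ["a"] * 4 + ["b"] * 4 + ["c"] * 4,
    "outcome": group_a + group_b + group_c,
})
print(AnovaRM(long_data, depvar="outcome", subject="subject", within=["condition"]).fit())
```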
Non-parametric tests used for comparing numerical outcomes between the following groups
* Comparing 2 unpaired groups – Mann-Whitney U/Wilcoxon rank sum test
* Comparing 2 paired groups – sign test/Wilcoxon signed rank test
* Comparing 3+ unmatched groups – Kruskal-Wallis test
* Comparing 3+ matched groups – Friedman test
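As a sketch, these four non-parametric comparisons are also available in SciPy (the data below are invented):

```python
from scipy import stats

# Invented outcome values for three small groups/conditions of 5 subjects each
group_a = [3.1, 4.2, 2.8, 3.9, 4.5]
group_b = [5.0, 4.8, 5.6, 4.4, 5.9]
group_c = [6.2, 5.8, 6.7, 6.0, 6.4]

# 2 unpaired groups -> Mann-Whitney U / Wilcoxon rank sum test
print(stats.mannwhitneyu(group_a, group_b))

# 2 paired groups (same subjects measured twice) -> Wilcoxon signed rank test
print(stats.wilcoxon(group_a, group_b))

# 3+ unmatched groups -> Kruskal-Wallis test
print(stats.kruskal(group_a, group_b, group_c))

# 3+ matched groups (same subjects under every condition) -> Friedman test
print(stats.friedmanchisquare(group_a, group_b, group_c))
```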
Non-parametric methods used for comparing categorical outcomes between the following groups
* Comparing 2 unpaired groups – chi-squared (χ²) test
* Comparing 2 paired groups – McNemar's test
* Comparing 3+ unmatched groups – chi-squared or ordinal chi-squared (trend) test
* Comparing 3+ matched groups – Cochran's Q test
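A sketch of these categorical comparisons in Python, assuming SciPy for the chi-squared test and statsmodels for McNemar's and Cochran's Q (all tables are invented):

```python
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar, cochrans_q

# 2 unpaired groups, binary outcome -> chi-squared test on a 2x2 table
unpaired_table = np.array([[30, 10],
                           [20, 25]])
chi2, p, dof, expected = chi2_contingency(unpaired_table)
print("chi-squared:", chi2, "p =", p)

# 2 paired groups (same subjects classified twice) -> McNemar's test
paired_table = np.array([[40,  5],
                         [15, 40]])
print(mcnemar(paired_table, exact=True))

# 3+ matched groups, binary outcome -> Cochran's Q test
# rows = subjects, columns = the repeated binary assessments
repeated = np.array([[1, 1, 0],
                     [1, 0, 0],
                     [1, 1, 1],
                     [0, 1, 0],
                     [1, 1, 0]])
print(cochrans_q(repeated))
```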
When do you need to assess agreement?
When you are comparing outcomes measured by multiple assessors/clinical methods
The aim is always to obtain the same value from the different sets of outcome measurements
How could measurements be strongly associated but have low agreement?
If 2 assessors collect data, one may consistently score higher than the other – the two sets of scores are then strongly associated but rarely identical
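A tiny invented illustration of this: two assessors whose scores are perfectly correlated yet never identical.

```python
import numpy as np

# Invented scores: assessor B consistently scores 10 points higher than assessor A
assessor_a = np.array([20, 35, 50, 65, 80])
assessor_b = assessor_a + 10

# Strong association: the Pearson correlation is exactly 1
r = np.corrcoef(assessor_a, assessor_b)[0, 1]

# Poor agreement: the two assessors never report the same value
proportion_identical = np.mean(assessor_a == assessor_b)

print(f"correlation = {r:.2f}, proportion of identical scores = {proportion_identical:.2f}")
```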
Inter-observer variability
= accuracy/assessment of agreement between different assessors/methods
Intra-observer variability
= assessment of agreement for the same assessor/method on repeated occasions
What is Bland-Altman used to assess?
Limits of agreement between numerical outcomes
How do you assess limits of agreement?
1. Calculate the difference between the 2 sets of measurements
2. Calculate the mean of the 2 measurements
3. Plot the difference against the mean
4. Limits of agreement = mean difference +/- 2 SD of the differences
This gives a range of agreement against which to judge whether differences are clinically important or not
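A minimal sketch of those four steps in Python (measurement values invented; NumPy and Matplotlib assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented paired measurements of the same quantity by two methods
method_1 = np.array([10.2, 11.5,  9.8, 12.1, 10.9, 11.7, 10.4, 12.5])
method_2 = np.array([10.6, 11.1, 10.3, 12.6, 10.5, 12.1, 10.9, 12.0])

diff = method_1 - method_2          # step 1: difference between the 2 measurements
mean = (method_1 + method_2) / 2    # step 2: mean of the 2 measurements

mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)
loa_low = mean_diff - 2 * sd_diff   # step 4: limits of agreement
loa_high = mean_diff + 2 * sd_diff

# step 3: plot the difference against the mean, with the limits of agreement
plt.scatter(mean, diff)
plt.axhline(mean_diff)
plt.axhline(loa_low, linestyle="--")
plt.axhline(loa_high, linestyle="--")
plt.xlabel("Mean of the two measurements")
plt.ylabel("Difference between the two measurements")
plt.show()

print(f"limits of agreement: {loa_low:.2f} to {loa_high:.2f}")
```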
Method to assess agreement between categorical outcomes – Kappa/weighted kappa statistics
Method to assess agreement between numerical outcomes – Limits of agreement (Bland-Altman)
Use of Kappa statistics
Based on comparing the observed proportion of agreement with the proportion of agreement that would be expected to occur by chance – used for binary/nominal outcomes
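A small sketch of that comparison for an invented 2x2 agreement table, using the usual formula kappa = (p_observed - p_expected) / (1 - p_expected):

```python
import numpy as np

# Invented agreement table: two assessors rate the same 100 cases as yes/no
# rows = assessor 1, columns = assessor 2
table = np.array([[40, 10],
                  [ 5, 45]])
n = table.sum()

# Observed proportion of agreement: cases on the diagonal
p_observed = np.trace(table) / n

# Proportion of agreement expected by chance, from the marginal totals
row_totals = table.sum(axis=1)
col_totals = table.sum(axis=0)
p_expected = np.sum(row_totals * col_totals) / n**2

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"observed = {p_observed:.2f}, expected by chance = {p_expected:.2f}, kappa = {kappa:.2f}")
```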
For what type of outcome is weighted kappa more appropriate?
ordinal outcomes
Takes partial misclassifications into account (near-misses between adjacent categories are penalised less than large disagreements)
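For example, scikit-learn's cohen_kappa_score takes a weights argument for ordinal ratings (the ratings below are invented):

```python
from sklearn.metrics import cohen_kappa_score

# Invented ordinal ratings (1 = mild, 2 = moderate, 3 = severe) from two assessors
rater_1 = [1, 2, 2, 3, 1, 3, 2, 1, 3, 2]
rater_2 = [1, 2, 3, 3, 2, 3, 2, 1, 2, 2]

# Unweighted kappa treats every disagreement as equally serious
print(cohen_kappa_score(rater_1, rater_2))

# Linearly weighted kappa penalises a 1-vs-3 disagreement more than a 1-vs-2 one
print(cohen_kappa_score(rater_1, rater_2, weights="linear"))
```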
What type of agreement does 1 suggest in kappa statistics?
perfect agreement
What type of agreement does 0 suggest in kappa statistics?
No agreement beyond that expected by chance
What type of data should kappa values be presented with?