1. Construct validity - how well are variables measured?
2. Internal validity - if the goal is to find a causal relationship, can the relationship actually be interpreted as causal?
3. External validity - is the sample random? Can we generalize?
4. Statistical validity
Significance - determined by the p-value
Suitability - checking the assumptions and execution of the test
Relevance - effect size
Accuracy - confidence interval
Assumptions of Pearson's r
1. Sample is random
2. Both variables are interval/ratio measurement level
3. Relationship is linear
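A minimal Python sketch (hypothetical data, using scipy.stats.pearsonr) of Pearson's r, tying it to the statistical-validity criteria above: r as the effect size (relevance), the p-value (significance), and a 95% confidence interval via the Fisher z-transform (accuracy):

```python
# Minimal sketch, hypothetical data: hours studied vs. exam score
import numpy as np
from scipy import stats

hours = np.array([2, 4, 5, 7, 8, 10, 11, 13])
score = np.array([52, 55, 60, 64, 70, 71, 75, 80])

r, p = stats.pearsonr(hours, score)   # r = effect size (relevance), p = significance

# 95% confidence interval for rho via the Fisher z-transform (accuracy)
z = np.arctanh(r)
se = 1 / np.sqrt(len(hours) - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"r = {r:.2f}, p = {p:.4f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```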
If assumption 2 is not met:
a. Variables are ordinal
Spearman correlation = correlation that measures the strength and direction of the relationship between 2 ordinal variables (the variables can be measured at the ordinal level, or made ordinal using rank scores, which also straightens out a non-linear relationship); see the sketch below
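A minimal sketch with hypothetical 5-point ordinal scores, using scipy.stats.spearmanr:

```python
# Minimal sketch, hypothetical 5-point ordinal scores (1 = very low ... 5 = very high)
import numpy as np
from scipy import stats

satisfaction = np.array([1, 2, 2, 3, 4, 4, 5, 5])
loyalty      = np.array([1, 1, 2, 3, 3, 4, 4, 5])

rho, p = stats.spearmanr(satisfaction, loyalty)   # Pearson's r computed on the rank scores
print(f"rho = {rho:.2f}, p = {p:.4f}")
```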
b. Variables are categorical (nominal)
Chi-square test of independence = test that determines whether the distribution of 2 categorical variables depends on one another
Frequencies = counts in each category (cell) of the table
Contingency table = table of frequencies used in the Chi-square test
• need to look at row percentages to compare the groups
Chi-square test - compares the observed frequencies to the expected frequencies (those expected if the variables were independent)
If the variables are independent / have no relationship -> the same proportions would hold regardless of category (similar row percentages); see the sketch below
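A sketch of the Chi-square test of independence on a made-up contingency table; scipy.stats.chi2_contingency returns the statistic, p-value, degrees of freedom, and expected frequencies, and the row percentages are computed alongside:

```python
# Minimal sketch, hypothetical contingency table: group (rows) x preference (columns)
import numpy as np
from scipy import stats

observed = np.array([[30, 20, 10],     # made-up counts
                     [25, 25, 10]])

chi2, p, dof, expected = stats.chi2_contingency(observed)

# Row percentages: if the variables are independent, every row shows similar proportions
row_pct = observed / observed.sum(axis=1, keepdims=True) * 100

print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
print("expected frequencies:\n", expected)
print("row percentages:\n", np.round(row_pct, 1))
```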
If assumption 3 is not met (relationship is not linear): convert the scores to ranks and use Spearman; see the sketch below
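A sketch of how rank scores straighten out a monotonic but non-linear relationship (made-up data, y = 2^x): Pearson's r stays well below 1, while Spearman's rho on the ranks equals 1:

```python
# Minimal sketch, hypothetical monotonic but non-linear relationship (y = 2**x)
import numpy as np
from scipy import stats

x = np.arange(1, 11)
y = 2.0 ** x

r, _ = stats.pearsonr(x, y)      # well below 1: the relationship is not linear
rho, _ = stats.spearmanr(x, y)   # exactly 1: the ranks increase together perfectly

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```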
Suitability
Suitability - check whether the test suits the data (is the data at the correct measurement level? Is the relationship linear when a correlation is tested? Etc.)
Statistical hypotheses
H0: ρ = 0; HA: ρ > 0 -> expects a positive relationship -> one-sided hypothesis
One-sided hypothesis - the direction is assumed from the get-go
(+) theory-driven
(-) if the relationship turns out to be in the opposite direction, you cannot reject H0
HA: ρ ≠ 0 -> two-sided hypothesis
Two-sided hypothesis - simply says there is a relationship between the variables, but does not specify its direction
(+) looks at all possibilities (positive and negative)
(-) does not match the directional expectation from theory
(-) less likely to reject H0 and adopt the new theory (the critical region is split over both tails)
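A sketch contrasting one-sided and two-sided tests of H0: ρ = 0 on made-up data; it assumes SciPy >= 1.9, where pearsonr takes an alternative argument:

```python
# Minimal sketch, hypothetical data; assumes SciPy >= 1.9 for the `alternative` argument
import numpy as np
from scipy import stats

x = np.array([2, 4, 5, 7, 8, 10, 11, 13])
y = np.array([52, 55, 60, 64, 70, 71, 75, 80])

r, p_two = stats.pearsonr(x, y, alternative='two-sided')   # HA: rho != 0
_, p_one = stats.pearsonr(x, y, alternative='greater')     # HA: rho > 0 (direction chosen up front)

# For a positive sample r, the one-sided p-value is half the two-sided one,
# so H0 is easier to reject, but only if the assumed direction was right.
print(f"r = {r:.2f}, two-sided p = {p_two:.4f}, one-sided p = {p_one:.4f}")
```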