A comprehensive set of flashcards covering key concepts from Lecture 5 on interpreting statistical test results.
Significance Testing
testing to see if differences between groups or associations between variables are statistically significant → due to something other than pure coincidence/randomness. also called hypothesis testing
hypothesis testing steps
develop a hypothesis about an association between 2+ variables OR a difference between 2+ groups
test the hypothesis using the appropriate inferential statistical test
use the P VALUE and ALPHA (for this class, alpha is always 5%) to determine the statistical significance of the results
interpret the results to determine
if the results are statistically significant
what the effect size is (if it's an association)
what the effect direction is (the sign of the association, or which group did better)
if the results are clinically/practically significant (a worked sketch follows this card)
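A minimal Python sketch of the "test, then compare p to alpha" steps (the lecture itself uses SPSS; the data, group names, and test choice here are made up for illustration):

```python
from scipy import stats

# hypothetical scores for two groups (made-up numbers, not from the lecture)
group_a = [72, 75, 81, 68, 77, 83, 79, 74]
group_b = [65, 70, 66, 72, 69, 74, 71, 68]

alpha = 0.05  # class convention: alpha is always 5%

# step 2: run an appropriate inferential test (independent-samples t-test here)
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# step 3: compare the p-value to alpha to judge statistical significance
print(f"p = {p_value:.4f}")
if p_value < alpha:
    print("statistically significant: reject the null hypothesis")
else:
    print("not statistically significant: fail to reject the null hypothesis")
```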
Null Hypothesis (H0)
The hypothesis that states there is no effect or no difference/association; it is supposed to be tested and possibly rejected.
prediction that research results are due to chance or randomness
Alternative Hypothesis (H1)
The hypothesis that states there is an effect or a difference; it is what researchers typically aim to support.
results are REAL and NOT due to chance
ex there is a difference between groups, or there is an association between two variables
aka a research hypothesis
P-value
The probability of obtaining a result at least as extreme as the one observed, under the assumption that the null hypothesis is true
it’s called Sig in SPSS
tells us the probability of obtaining the test result by chance, i.e., assuming the null hypothesis is true
Alpha Level (α)
The threshold for significance of observed event or test result
used as the threshold to determine whether the null hypothesis should be rejected or accepted (fail to reject)
if alpha is 0.01, that means you are 99% confident
probability of making type 1 error. if alpha is 0.01, researcher is likely to commit type 1 errors 1% of the time
errors in hypothesis testing
type 1 is when you reject a null hypothesis when you shouldn't. this is like sending an innocent person to prison. the null hypothesis is true, but you reject it.
type 2 is when you accept a null hypothesis when you shouldn’t. this is like letting a guilty man walk free. the null hypothesis is false, but you accept it.
inverse relationship - as probability of making type 1 error increases, probability of making type 2 error decreases
Effect Size
A quantitative measure of the magnitude of a phenomenon; it helps to determine the practical significance of research results.
Statistical Power
The probability of correctly rejecting a false null hypothesis; or, the ability to detect an effect if there is one.
Confidence Interval (CI)
A range of values derived from a data set that is believed to contain the true value of a population parameter with a specified level of confidence.
confidence level = (1 - alpha) × 100%; e.g., alpha = 0.05 gives a 95% confidence level
significance testing in association hypothesis
use table 5.5
1. determine if you have statistically significant association by comparing computed p-value to selected alpha level
2. determine effect size of association using table 5.5. may be small, medium, or large
3. determine direction of association using the sign in front of the correlation coefficient. is the direction negative (inversely proportional) or positive (directly proportional)?
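A hedged Python sketch of these three steps for an association, on made-up data. The small/medium/large cutoffs below are Cohen's common conventions, used only as a stand-in because table 5.5 is not reproduced here:

```python
from scipy import stats

# hypothetical paired measurements (made-up numbers, not from the lecture)
hours_studied = [2, 4, 5, 7, 8, 10, 12, 14]
exam_score    = [55, 60, 66, 70, 74, 80, 85, 88]

alpha = 0.05
r, p_value = stats.pearsonr(hours_studied, exam_score)

# 1. statistically significant association?
significant = p_value < alpha

# 2. effect size (Cohen's cutoffs used here in place of table 5.5)
size = "large" if abs(r) >= 0.5 else "medium" if abs(r) >= 0.3 else "small"

# 3. direction from the sign of the correlation coefficient
direction = "positive (directly proportional)" if r > 0 else "negative (inversely proportional)"

print(f"r = {r:.2f}, p = {p_value:.4f}")
print(f"significant: {significant}, effect size: {size}, direction: {direction}")
```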
interpreting a 95% confidence interval
we accept (fail to reject, FTR) the null hypothesis if the confidence interval includes zero: (-, +)
if the upper and lower limits of the confidence interval are both positive or both negative, i.e., the interval does not include zero, then you reject the null hypothesis: (-,-) or (+,+)
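A minimal sketch of that decision rule, assuming a 95% CI for the difference between two group means (a Welch-style interval is one common choice; all numbers are made up):

```python
import numpy as np
from scipy import stats

# hypothetical group scores (made-up numbers, not from the lecture)
group_a = np.array([72, 75, 81, 68, 77, 83, 79, 74], dtype=float)
group_b = np.array([65, 70, 66, 72, 69, 74, 71, 68], dtype=float)

diff = group_a.mean() - group_b.mean()
va = group_a.var(ddof=1) / len(group_a)
vb = group_b.var(ddof=1) / len(group_b)
se = np.sqrt(va + vb)

# Welch-Satterthwaite degrees of freedom (one common approximation)
df = (va + vb) ** 2 / (va ** 2 / (len(group_a) - 1) + vb ** 2 / (len(group_b) - 1))

t_crit = stats.t.ppf(0.975, df)  # 95% confidence level = (1 - alpha) with alpha = 0.05
lower, upper = diff - t_crit * se, diff + t_crit * se

print(f"95% CI for the mean difference: ({lower:.2f}, {upper:.2f})")
if lower <= 0 <= upper:
    print("CI includes zero -> accept (fail to reject) the null hypothesis")
else:
    print("CI excludes zero -> reject the null hypothesis")
```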
significance testing in difference hypothesis
determine if there is statistically significant difference by comparing p value to alpha level
determine direction of difference or direction of effect by identifying the group with the highest mean, mean rank, or frequency
examine clinical or practical significance of difference
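A short Python sketch of these three steps for a difference hypothesis with a scaled DV (made-up data; for an ordinal DV you would compare mean ranks, for a nominal DV frequencies):

```python
import numpy as np
from scipy import stats

# hypothetical post-test scores for two teaching methods (not from the lecture)
method_a = np.array([78, 82, 75, 88, 84, 79, 81, 86], dtype=float)
method_b = np.array([70, 74, 68, 77, 72, 75, 69, 73], dtype=float)

alpha = 0.05
t_stat, p_value = stats.ttest_ind(method_a, method_b)

# 1. statistically significant difference?
print(f"p = {p_value:.4f}, significant = {p_value < alpha}")

# 2. direction of the difference: which group has the higher mean ("did better")?
better = "method A" if method_a.mean() > method_b.mean() else "method B"
print(f"means: A = {method_a.mean():.1f}, B = {method_b.mean():.1f} -> {better} did better")

# 3. clinical/practical significance is a judgment about whether a difference
#    of this size matters in real life; the test itself does not answer it
print(f"raw mean difference = {method_a.mean() - method_b.mean():.1f} points")
```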
sample size and statistical significance
positive relationship
the larger the sample, the more likely that the results will be statistically significant — any association or difference can be significant if sample sizes are large enough
bigger the sample, less likely that research results are due to chance
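A small demonstration of this point using made-up summary statistics: the same 1-point mean difference is tested twice, once with small and once with large samples.

```python
from scipy.stats import ttest_ind_from_stats

# identical means and SDs; only the sample size changes
for n in (20, 2000):
    t_stat, p_value = ttest_ind_from_stats(mean1=50.0, std1=10.0, nobs1=n,
                                            mean2=49.0, std2=10.0, nobs2=n)
    print(f"n = {n:4d} per group -> p = {p_value:.4f}")

# with n = 20 per group the 1-point difference is not significant at alpha = 0.05;
# with n = 2000 per group the identical difference is highly significant
```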
statistical power
ability to reject null hypothesis when you should
ability to find a true difference or association that actually exists
ability to claim to have statistically significant results
aka ability to make correct decision on whether to accept or reject
how to increase statistical power
using a higher alpha value, which makes it easier to reject the null and so raises power
but a higher alpha also raises the chance of a type 1 error, so this is not ideal (a lower alpha cuts type 1 errors but raises type 2 errors and lowers power)
having a large sample size
the larger the sample, the higher the power
also allows you to minimize both type 1 and type 2 errors (a power calculation sketch follows this card)
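A hedged sketch of a power calculation using statsmodels (an assumed tool choice; the lecture does not prescribe one), with a medium effect size of 0.5 assumed:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5  # assumed medium effect (Cohen's d)

# larger samples -> higher power
for n in (20, 50, 100):
    power = analysis.power(effect_size=effect_size, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group, alpha = 0.05 -> power = {power:.2f}")

# lowering alpha (fewer type 1 errors) lowers power (more type 2 errors)
for a in (0.05, 0.01):
    power = analysis.power(effect_size=effect_size, nobs1=50, alpha=a)
    print(f"n =  50 per group, alpha = {a:.2f} -> power = {power:.2f}")

# sample size per group needed to reach the conventional 80% power
n_needed = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
print(f"about {n_needed:.0f} participants per group for 80% power")
```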
dangers of significance
statistical significance does not mean real-life importance
just because something is not statistically significant does not mean it is not clinically or practically significant
if a result is practically important, it means that something should be done or you should be concerned. this is subjective.
Cross-tabulation
values of one variable form the rows, values of the other variable form the columns
each cell shows frequency of corresponding row and column variables
helps w understanding differences and associations with categorical variables
different from frequency distribution in that this deals with 2 variables, where frequency distribution only deals with one
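A small pandas sketch of a cross-tab on made-up categorical data, with a chi-square test of the association added as one common follow-up (not necessarily how the lecture runs it):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# hypothetical categorical data (made-up, not from the lecture)
df = pd.DataFrame({
    "smoker":  ["yes", "no", "yes", "no", "no", "yes", "no", "no", "yes", "no"],
    "disease": ["yes", "no", "yes", "no", "yes", "no", "no", "no", "yes", "no"],
})

# rows = values of one variable, columns = values of the other,
# each cell = frequency of that row/column combination
table = pd.crosstab(df["smoker"], df["disease"])
print(table)

# chi-square test of the cross-tab: are the two categorical variables associated?
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p = {p_value:.4f}")
```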
Effect Direction for difference
compare group means (for scaled DV) or mean ranks (for ordinal DV) or frequencies (for nominal DV) → which group did “better?”
Clinical Significance
The practical importance of a treatment effect, indicating whether it has real-world relevance.