nominal
categorical data used for naming or labeling without numerical value
ordinal
data that can be ranked in order but the distance between values is unknown
interval
ordered data with equal spacing between points but no true zero
ratio
data with equal spacing and a true zero allowing for relative comparisons
discrete
variables that can only take specific, separate values like whole numbers
continuous
variables that can take an infinite number of possible values within a range, like time
descriptive statistics
numerical summaries used to characterize and simplify sample data
inferential statistics
procedures used to draw conclusions about a population from a sample
statistics
summary numbers describing a sample, usually shown with Latin letters like M or s
parameters
summary numbers describing a population, shown with Greek letters like µ or σ
mode
the most frequently occurring score in a data set
median
the middle score in a rank-ordered distribution, or the 50th percentile
mean
the arithmetic average, calculated by dividing the sum of scores by the total count
range
the difference between the highest and lowest scores in a distribution
variance
the average squared deviation of scores from the mean
standard deviation
the average amount scores vary from the mean, expressed in the original units
z-scores
standard scores used to compare values from different scales by showing distance from the mean in standard deviations
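With a small made-up sample, the measures of center and spread above (and z-scores) can be sketched using Python's standard statistics module; note that statistics.variance and statistics.stdev use the sample (N - 1) formulas:

```python
import statistics as st

scores = [4, 8, 6, 5, 8, 9]        # hypothetical sample scores

mode = st.mode(scores)             # most frequently occurring score
median = st.median(scores)         # middle score (50th percentile)
mean = st.mean(scores)             # sum of scores divided by the count
rng = max(scores) - min(scores)    # highest minus lowest score
variance = st.variance(scores)     # average squared deviation (sample, N - 1)
sd = st.stdev(scores)              # square root of the variance

# z-score: each score's distance from the mean in standard-deviation units
z = [(x - mean) / sd for x in scores]
```

Because z-scores are unit-free, they let scores from different scales be compared directly.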
probability
the likelihood or chance that a specific event will occur
p
the mathematical abbreviation for probability expressed as a proportion between 0 and 1
inferential statistics
procedures used to determine if findings are reliable or due to chance
null hypothesis
the prediction that there is no effect or no difference between groups
alternative hypothesis
the prediction that a specific effect or difference exists
reject the null
concluding that an effect is unlikely to be due to chance alone
fail to reject the null
concluding there is not enough evidence to claim an effect exists
nondirectional hypothesis
a prediction that a difference exists without specifying which group will be higher (two-tailed test)
directional hypothesis
a prediction that specifies which group will be higher or lower (one-tailed test)
type i error
mistakenly rejecting a true null hypothesis (false positive)
type ii error
failing to reject a false null hypothesis (false negative)
alpha
the probability of making a Type I error, usually set at .05
critical value
the threshold a test statistic must exceed to reject the null hypothesis
statistically significant
a result unlikely to have occurred by chance based on the alpha level
statistical power
the ability to correctly reject a false null hypothesis and detect a real effect
effect size
a measure of the magnitude of a difference independent of sample size
sample size
the number of observations in a study; larger samples increase power by reducing error
measurement reliability
the consistency of a measure; higher reliability reduces noise and increases power
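One way to see why larger samples increase power: the estimated standard error of the mean, s/√N, shrinks as N grows (the standard deviation here is a made-up value for illustration):

```python
import math

s = 10.0   # hypothetical sample standard deviation

# estimated standard error of the mean at several sample sizes
standard_errors = {n: s / math.sqrt(n) for n in (25, 100, 400)}
# quadrupling N halves the standard error, so the same mean difference
# produces a larger test statistic and a better chance of detecting a real effect
```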
single-sample t-test
compares a sample mean to a specific hypothesized population value when the population standard deviation is unknown
dependent samples t-test
compares means from the same group tested twice or matched pairs (also called repeated measures)
independent samples t-test
compares means from two separate, unrelated groups to see if they differ significantly
degrees of freedom (df)
the number of values in a calculation that are free to vary; typically N - 1 for one sample or (N1 + N2) - 2 for two samples
t-distribution
a family of probability distributions that look like the normal curve but have "fatter tails" for smaller sample sizes
homogeneity of variance
the assumption that the amount of variability is roughly equal across the groups being compared
standard error of the difference
the denominator in a t-test representing the estimated standard deviation of the difference between means
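As a sketch with invented scores, the single-sample t-test reduces to a few lines: t is the sample mean's distance from the hypothesized population value, divided by the estimated standard error:

```python
import math
import statistics as st

sample = [102, 98, 110, 105, 95, 108, 100, 104]   # hypothetical scores
mu = 100                                          # hypothesized population mean

m = st.mean(sample)                # sample mean
s = st.stdev(sample)               # sample standard deviation (N - 1)
se = s / math.sqrt(len(sample))    # estimated standard error of the mean
t = (m - mu) / se                  # single-sample t statistic
df = len(sample) - 1               # degrees of freedom: N - 1
# compare t against the critical value for this df at the chosen alpha
```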
two-tailed test
used when a researcher predicts a difference in either direction (greater than or less than)
one-tailed test
used when a researcher predicts a specific direction for the effect (e.g., "Group A will be higher than Group B")
sampling distribution of t
the theoretical distribution of all possible t-values if the null hypothesis were true
robustness
the ability of a statistical test to remain accurate even if certain assumptions (like normality) are slightly violated
one-way anova
a test used to compare means across three or more levels of a single independent variable
f-ratio
the test statistic for anova calculated by dividing the variance between groups by the variance within groups
between-groups variance
measure of how much the group means differ from each other (due to the IV)
within-groups variance
measure of the spread of scores within each group (due to chance or error)
post-hoc tests
follow-up tests conducted after a significant anova to find which specific groups differ from each other
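With made-up data for three groups, the f-ratio can be computed by hand: between-groups variance (mean square) divided by within-groups variance:

```python
import statistics as st

groups = [[2, 3, 4], [6, 7, 8], [4, 5, 6]]   # hypothetical scores, three groups

all_scores = [x for g in groups for x in g]
grand_mean = st.mean(all_scores)
k, n_total = len(groups), len(all_scores)

# between-groups: weighted squared distance of each group mean from the grand mean
ss_between = sum(len(g) * (st.mean(g) - grand_mean) ** 2 for g in groups)
# within-groups: squared spread of scores around their own group mean
ss_within = sum((x - st.mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)       # df between = k - 1
ms_within = ss_within / (n_total - k)   # df within = N - k
f_ratio = ms_between / ms_within
```

A single anova like this avoids the type i error inflation of running three separate t-tests.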
factorial anova
an extension of anova used when there are two or more independent variables (factors)
main effect
the separate effect of one independent variable on the dependent variable, ignoring other variables
interaction
occurs when the effect of one independent variable depends on the level of another independent variable
f-distribution
a right-skewed distribution of all possible f-ratios; it varies based on numerator and denominator degrees of freedom
anova table
a standard way to report sums of squares (ss), degrees of freedom (df), mean squares (ms), and f-ratios
mixed design anova
a study design that includes both between-subjects and within-subjects independent variables
type i error inflation
the increased risk of a false positive that occurs when running multiple t-tests instead of one anova
correlational design
a research method where two or more variables are measured without manipulation to describe their relationship
bivariate correlation
a measure of the association between exactly two variables ranging from -1.00 to 1.00
positive correlation
a relationship where both variables increase or decrease together
negative correlation
a relationship where one variable increases as the other decreases
scatterplot
a graph where each point represents an individual, used to visualize the relationship between variables
effect size
the strength of an association; in correlation, the coefficient r itself is the effect size
coefficient of determination (r²)
the proportion of variability in the criterion variable explained by the predictor variable
restriction of range
when a sample lacks the full range of scores present in the population, artificially lowering the correlation
outlier
an extreme score that can disproportionately strengthen or weaken a correlation coefficient
curvilinear relationship
a relationship where the data follows a curve rather than a straight line, making Pearson's r misleading
pearson's r
a correlation coefficient used when both variables are measured on an interval or ratio scale
spearman's r
a correlation coefficient used when both variables consist of ordinal (ranked) data
point-biserial r
used when one variable is dichotomous (two categories) and the other is interval or ratio
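With two invented score lists, pearson's r follows from the standard sum-of-products formula, and squaring it gives the coefficient of determination:

```python
import math

x = [1, 2, 3, 4, 5]   # hypothetical scores on one interval variable
y = [2, 4, 5, 4, 5]   # hypothetical scores on the other

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sp = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))   # sum of products
ss_x = sum((xi - mx) ** 2 for xi in x)
ss_y = sum((yi - my) ** 2 for yi in y)

r = sp / math.sqrt(ss_x * ss_y)   # pearson's r
r_squared = r ** 2                # proportion of variability explained
```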
linear regression
a statistical method for finding the best-fitting line to predict a criterion variable from a predictor variable
predictor variable
the variable used to make a prediction (the X variable)
criterion variable
the variable being predicted or explained (the Y variable)
multiple regression
a technique using two or more predictor variables to explain a single criterion variable and control for third variables
beta (β)
a standardized coefficient in regression that shows the relationship between a predictor and criterion while holding other predictors constant
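A minimal least-squares sketch with invented data: the slope and intercept of the best-fitting line for predicting a criterion (Y) from a predictor (X):

```python
x = [1, 2, 3, 4, 5]   # hypothetical predictor (X) scores
y = [2, 4, 5, 4, 5]   # hypothetical criterion (Y) scores

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# least-squares slope: sum of products over the sum of squares for X
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx   # the line passes through (mean of X, mean of Y)

def predict(xi):
    # predicted criterion score from the best-fitting line
    return intercept + slope * xi
```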
parametric test
statistical tests that assume data follows a specific distribution (like the normal distribution) and estimate population parameters
non-parametric test
"distribution-free" tests used when data is skewed, nominal, or ordinal and doesn't meet parametric assumptions
chi-square (χ²) test of independence
used to see if two categorical (nominal) variables are related, like major and political party
chi-square (χ²) goodness-of-fit
compares observed frequencies in categories to what is expected by chance
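The goodness-of-fit statistic is just observed-versus-expected frequencies summed across categories; the counts here are invented for illustration:

```python
observed = [25, 15, 20]   # hypothetical counts in three categories
expected = [20, 20, 20]   # frequencies expected by chance (equal split)

# chi-square: sum of (observed - expected)^2 / expected over all categories
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1    # degrees of freedom: categories minus one
# compare chi_square to the critical value for this df at the chosen alpha
```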
spearman's rho
a non-parametric correlation used for ordinal (ranked) data
mann-whitney u test
the non-parametric alternative to the independent-samples t-test for comparing two groups
wilcoxon t test
the non-parametric alternative to the related-samples t-test for dependent groups
kruskal-wallis h test
the non-parametric alternative to a one-way anova for three or more independent groups
friedman chi-square
the non-parametric alternative to a repeated measures anova for three or more conditions
mcnemar test
a non-parametric test for nominal data in a repeated measures design
non-robust
describes assumptions that must be strictly met for the statistical result to be valid
normally distributed
the assumption that the population follows a bell-shaped curve
random sample
a group where every member of the population had an equal chance of being chosen
independence of observations
the assumption that one person's data does not influence another's
standard error
the estimated standard deviation of a sampling distribution (the denominator in many tests)
apa format
the standard style for reporting statistics (e.g., t(df) = value, p < .05)
latin letters
italicized symbols (like M or s) used to represent sample statistics
greek letters
symbols (like µ or σ) used to represent population parameters
standard scores
another name for z-scores, used to compare different scales
coefficient of determination
the r² value showing how much one variable explains another
principle of least squares
the math used to find the line that minimizes the squared distances to all data points
predictor variable
the independent variable (X) used to forecast an outcome
criterion variable
the dependent variable (Y) being predicted in a regression