Multiple comparisons
Conducting more than one statistical comparison in a study, which increases the risk of Type I error
Per comparison approach
Treats the Type I error rate separately for each individual test
Familywise approach
Considers the probability of making at least one Type I error across all comparisons combined
Familywise error rate
The overall probability of making one or more Type I errors across a set of comparisons
Post-hoc comparison
Comparisons decided after examining the data, often involving all possible pairs and requiring error correction
A priori comparison
Comparisons planned before data collection based on hypotheses, usually fewer and more focused
Linear contrasts
A method using weighted combinations of group means to compare one group or set of groups with another
Psi
The symbol used to represent the value of a linear contrast
Orthogonal contrasts
Independent contrasts that do not overlap in information and whose weights satisfy specific conditions
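The contrast ideas above can be sketched numerically. The weights and group means below are hypothetical, chosen only to illustrate the two conditions: each contrast's weights sum to zero, and orthogonal contrasts have weight products that sum to zero (assuming equal group sizes).

```python
# Hypothetical weights and group means, for illustration only.

def is_valid_contrast(weights):
    """A contrast's weights must sum to zero."""
    return abs(sum(weights)) < 1e-9

def are_orthogonal(c1, c2):
    """Orthogonality (equal group sizes assumed): the products of
    corresponding weights sum to zero."""
    return abs(sum(a * b for a, b in zip(c1, c2))) < 1e-9

c1 = [1, -0.5, -0.5]   # group 1 vs. the average of groups 2 and 3
c2 = [0, 1, -1]        # group 2 vs. group 3

group_means = [5.0, 8.0, 7.0]
psi = sum(w * m for w, m in zip(c1, group_means))   # value of contrast c1

print(is_valid_contrast(c1), are_orthogonal(c1, c2), psi)  # True True -2.5
```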
Bonferroni t
A correction method that divides the familywise alpha by the number of comparisons to control error rate
Dunn test
Another name for the Bonferroni correction used for multiple comparisons
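The per-comparison and familywise error-rate ideas above reduce to two one-line formulas; the alpha level and number of comparisons below are illustrative, not from the source.

```python
# Illustrative alpha and comparison count.

def bonferroni_alpha(familywise_alpha, n_comparisons):
    """Bonferroni (Dunn) correction: divide the familywise alpha
    by the number of comparisons to get the per-comparison alpha."""
    return familywise_alpha / n_comparisons

def familywise_error_rate(alpha, n_comparisons):
    """Probability of at least one Type I error across n independent
    comparisons, each tested at level alpha."""
    return 1 - (1 - alpha) ** n_comparisons

print(bonferroni_alpha(0.05, 5))                 # 0.01
print(round(familywise_error_rate(0.05, 5), 3))  # 0.226
```

With five uncorrected tests at alpha = .05, the chance of at least one false positive is already about 23%, which is why the familywise approach corrects the per-test alpha.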
Error rate per comparison
The probability of making a Type I error in a single test
Tukey’s test
A post-hoc test that compares all possible pairs of means while controlling the familywise error rate
Pairwise comparison
A comparison between two group means at a time
Keppel’s recommendation
A guideline suggesting no alpha adjustment if planned comparisons do not exceed degrees of freedom, and a modified Bonferroni adjustment if they do
ANOVA
A statistical test used to compare the means of three or more groups to see if at least one is significantly different
Power
The probability of correctly rejecting a false null hypothesis (i.e., detecting a true effect)
Error variance
The variability within groups that is not explained by the independent variable
F statistic
A ratio of between-group variance to within-group variance used to determine statistical significance in ANOVA
Within treatments variance
The variation of scores within each group, reflecting random error or individual differences
Independence of observations
The assumption that each participant’s data is not influenced by or related to another participant’s data
Eta squared
A measure of effect size that indicates the proportion of total variance explained by the independent variable
Group mean
The average score of all participants within a single group
Grand mean
The overall average score across all groups combined
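The ANOVA quantities defined above (grand mean, group means, between- and within-groups variance, the F statistic, and eta squared) can be computed by hand on a small data set. The group names and scores below are invented for illustration.

```python
# Invented groups and scores, purely for illustration.
groups = {
    "control": [4, 5, 6, 5],
    "drug_a":  [7, 8, 9, 8],
    "drug_b":  [6, 7, 7, 8],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)                    # overall mean
group_means = {name: sum(g) / len(g) for name, g in groups.items()}

# Between-groups and within-groups sums of squares.
ss_between = sum(len(g) * (group_means[name] - grand_mean) ** 2
                 for name, g in groups.items())
ss_within = sum((x - group_means[name]) ** 2
                for name, g in groups.items() for x in g)

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)

# F: between-groups variance over within-groups (error) variance.
f_stat = (ss_between / df_between) / (ss_within / df_within)
# Eta squared: proportion of total variance explained.
eta_squared = ss_between / (ss_between + ss_within)

print(round(f_stat, 2), round(eta_squared, 2))  # 14.0 0.76
```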
Square root transformation
A data transformation used to reduce positive skew by taking the square root of each value
Log10
A logarithmic transformation using base 10 to reduce skewness and stabilize variance
Inverse transformation
A transformation where each value is converted to its reciprocal (1/x) to reduce strong positive skew
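The three skew-reducing transformations above can be applied directly; the sample below is made up, with one extreme value to show how each transformation compresses the upper tail.

```python
import math

# Made-up positively skewed scores; each transformation shrinks
# large values more strongly than small ones.
scores = [1, 2, 2, 3, 4, 9, 25]

sqrt_scores = [math.sqrt(x) for x in scores]      # square root
log_scores = [math.log10(x) for x in scores]      # log base 10
inverse_scores = [1 / x for x in scores]          # reciprocal (1/x)

# The most extreme score, 25, after each transformation:
print(round(sqrt_scores[-1], 2),
      round(log_scores[-1], 2),
      round(inverse_scores[-1], 2))  # 5.0 1.4 0.04
```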
Omega squared
A less biased measure of effect size that estimates the proportion of variance in the population explained by the independent variable
Kruskal-Wallis test
A non-parametric alternative to ANOVA used when assumptions like normality are violated
Welch’s ANOVA
A version of ANOVA that does not assume equal variances between groups
False negative
Failing to detect a real effect (Type II error)
False positive
Incorrectly concluding there is an effect when there is none (Type I error)
Multiple regression
A statistical method used to predict a dependent variable using two or more independent variables.
Multiple R
The overall correlation between all independent variables together and the dependent variable.
F value
A statistic that tests whether the regression model significantly predicts the dependent variable better than chance.
Beta weights
Standardised coefficients that show the relative strength and direction of each predictor on the same scale.
B weights
Unstandardised coefficients that show how much the dependent variable changes for a one-unit increase in the predictor.
sr squared
The unique proportion of variance in the dependent variable explained by a single independent variable.
Sequential/hierarchical regression
A regression method where variables are entered in steps to assess the additional variance explained by each set.
Unstandardised
Values that are expressed in their original measurement units.
Weightings
Values that indicate how much each independent variable contributes to predicting the dependent variable.
Regression coefficients
Numerical values that describe the relationship between independent variables and the dependent variable.
Multicollinearity
A situation where independent variables are highly correlated with each other, making it difficult to isolate their individual effects.
Singularity
A condition where one independent variable is a perfect linear combination of another, preventing the model from being estimated.
Cook’s distance
A measure used to identify influential data points that disproportionately affect the regression results.
ANOVA
A statistical method used to test differences between means or to assess the overall significance of a regression model.
Dummy variable coding
A method of converting categorical variables into numerical form using binary values (e.g., 0 and 1).
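Dummy coding can be sketched in a few lines: a categorical variable with k levels becomes k - 1 binary columns, with one level left as the reference. The condition names below are hypothetical.

```python
# Hypothetical three-level condition variable.
conditions = ["control", "drug_a", "drug_b", "drug_a", "control"]

# Three categories need two dummy variables; "control" is the
# reference category, coded 0 on both.
dummy_a = [1 if c == "drug_a" else 0 for c in conditions]
dummy_b = [1 if c == "drug_b" else 0 for c in conditions]

print(dummy_a)  # [0, 1, 0, 1, 0]
print(dummy_b)  # [0, 0, 1, 0, 0]
```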
Singularity
Occurs when two variables are perfectly correlated, for instance when two variables measure exactly the same construct.
Magnitude
The strength or size of the relationship between two variables.
Form/direction
The pattern of a relationship between variables, such as positive, negative, or nonlinear.
Pearson’s correlation
A statistical test that measures the strength and direction of a linear relationship between two continuous variables.
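Pearson's r can be computed directly from its definition; the paired scores below are made up. (On Python 3.10+, `statistics.correlation` gives the same result.)

```python
import math

# Made-up paired scores.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# r = sum of cross-products of deviations, divided by the
# square root of the product of the sums of squared deviations.
r = (sum((a - mx) * (b - my) for a, b in zip(x, y))
     / math.sqrt(sum((a - mx) ** 2 for a in x)
                 * sum((b - my) ** 2 for b in y)))
print(round(r, 3))  # 0.775
```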
Dichotomous variable
A variable that has only two possible categories or values.
Spearman
A non-parametric correlation that measures the strength and direction of a relationship between two ranked or ordinal variables.
Correlation
A statistical measure that describes the strength and direction of a relationship between two variables.
Regression
A statistical method used to predict the value of one variable (outcome) based on one or more predictor variables.
Correlation coefficient
A number between -1 and +1 that indicates the strength and direction of a relationship between two variables.
Line of best fit
A straight line on a scatterplot that best represents the relationship between two variables by minimizing the sum of squared vertical distances from the data points.
Residuals
The difference between an observed value and the value predicted by a statistical model.
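The line of best fit and residuals can be sketched with ordinary least squares on made-up data; note that residuals around a least-squares line always sum to (effectively) zero.

```python
# Made-up scores; the least-squares line and the residuals around it.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Least-squares slope and intercept of the line of best fit.
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

predicted = [intercept + slope * a for a in x]
residuals = [obs - pred for obs, pred in zip(y, predicted)]

print(round(slope, 2), round(intercept, 2))    # 0.6 2.2
print(round(abs(sum(residuals)), 10))          # 0.0 (residuals sum to zero)
```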
Bivariate regression
A regression analysis that examines the relationship between one predictor variable and one outcome variable.
Multiple regression
A regression analysis that uses two or more predictor variables to predict an outcome variable.
Range restriction
When the range of scores in a sample is limited, which can reduce the strength of correlations.
Point biserial correlation
A correlation used when one variable is continuous and the other is dichotomous.
Univariate outlier
An extreme value that is unusual compared to the rest of the data on a single variable.
Bootstrapping
A statistical technique that repeatedly resamples the data to estimate the accuracy of a statistic, such as a mean or correlation.
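Bootstrapping the mean can be sketched with the standard library alone: resample the data with replacement many times and use the spread of the resampled means to estimate accuracy. The data, seed, and resample count below are illustrative.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
data = [3, 5, 4, 6, 8, 5, 7, 4]  # made-up sample

boot_means = []
for _ in range(1000):
    # Resample the data with replacement, same size as the original.
    resample = [random.choice(data) for _ in data]
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
# Approximate 95% percentile confidence interval for the mean.
ci_low, ci_high = boot_means[25], boot_means[974]
print(round(ci_low, 2), round(ci_high, 2))
```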
Multivariate outliers
Cases that have unusual combinations of scores across multiple variables.
General linear model
A broad statistical framework that includes analyses such as regression, ANOVA, and correlation, used to examine relationships between variables.
Preliminary data analysis
The process of checking and preparing data before running statistical analyses, such as screening for errors, outliers, and assumption violations.
Normality
The assumption that data follow a normal (bell-shaped) distribution.
Skewness
A measure of how symmetrical or asymmetrical a distribution is.
Kurtosis
A measure of how peaked or flat a distribution is compared to a normal distribution.
Median split
A method that divides a continuous variable into two groups based on whether scores fall above or below the median.
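A median split takes one line once the median is known; the scores below are made up.

```python
import statistics

scores = [10, 12, 15, 18, 20, 22, 25, 30]  # made-up continuous scores
median = statistics.median(scores)         # 19.0

# Scores above the median become "high", the rest "low".
groups = ["high" if s > median else "low" for s in scores]
print(median, groups)
```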
Logarithm
A mathematical transformation used to reduce positive skewness by shrinking large values more than smaller values.
Experimental research
A research approach in which the researcher manipulates one or more independent variables and controls conditions to examine their effect on a dependent variable.
Quasi-experimental research
A research approach that involves manipulation of an independent variable but lacks random assignment to conditions.
Non-experimental research
A research approach in which variables are observed as they naturally occur without manipulation by the researcher.
Correlational research
A research approach that examines the relationship between two or more variables without manipulating them.
Descriptive research
A research approach that aims to describe characteristics, behaviours, or phenomena as they exist.
Differential research
A research approach that compares pre-existing groups to examine differences on a particular variable.
Quasi-experimental designs
Research designs that include manipulation of an independent variable but do not use random assignment.
Correlational designs
Research designs used to measure the strength and direction of relationships between variables.
Cross-sectional research design
A design in which data are collected from different participants at one point in time.
Cross-sectional developmental design
A design that compares people of different ages at a single point in time to study development.
Longitudinal research design
A design in which the same participants are studied repeatedly over an extended period of time.
Cross-sectional longitudinal design
A design that combines cross-sectional and longitudinal approaches by following multiple age groups over time.
Longitudinal-sequential design
A design that studies several cohorts over time to separate age effects from cohort effects.
Developmental research designs
Research designs used to examine changes in behaviour or abilities across age or time.
Internal validity
The degree to which a study accurately answers the questions it was intended to answer.
Threat to internal validity
Any aspect of the research that raises doubts about the limits of the results or about their interpretation.
Extraneous variables
Variables other than the independent variable that may influence the dependent variable.
Confounding variables
Extraneous variables that systematically vary with the independent variable and make it difficult to determine causal relationships.
External validity
The extent to which the results of a research study can be generalised to people, settings, times, measures, and characteristics other than those used in that study.
Threat to external validity
Any characteristic of a study that limits the generality of the results.
Sensitisation
A change in participants’ behaviour or responses caused by prior exposure to testing or treatment.
Novelty effect
A change in behaviour that occurs because a situation or treatment is new or unusual.
Reactivity
Changes in participants’ behaviour that occur because they know they are being studied.
Multiple treatment interference
When exposure to one treatment affects participants’ responses to subsequent treatments.
Experimenter characteristics
Personal attributes or behaviours of the researcher that may influence participants’ responses or outcomes.
Between subjects designs
Designs in which different participants are assigned to each condition.
Within subjects designs
Designs in which the same participants take part in all conditions.