These flashcards cover key concepts related to experimental design, statistical methods, and data analysis as discussed in the lecture.
Within-participants design
A research design where the same participants are given ALL of the levels of the IV in the experiment.
Between-participants design
A research design in which different participants are randomly assigned to each condition of the experiment.
Paired-participants design
A design that involves matching participants in pairs based on certain characteristics, and each pair experiences different levels of the IV.
Parametric statistics
Statistical methods that assume the data follow a specific distribution, often the normal distribution.
Simulate-and-build-the-null-distribution approach
A non-parametric approach that allows for generating a null distribution through simulations rather than relying on theoretical assumptions.
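A minimal sketch of this approach for a two-condition mean difference, assuming NumPy is available; the data values and function name here are hypothetical, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_null_distribution(group_a, group_b, n_sims=10_000):
    """Build a null distribution of mean differences by shuffling group labels."""
    combined = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    null_diffs = np.empty(n_sims)
    for i in range(n_sims):
        shuffled = rng.permutation(combined)  # under the null, labels carry no information
        null_diffs[i] = shuffled[:n_a].mean() - shuffled[n_a:].mean()
    return null_diffs

# Hypothetical scores from two conditions of an experiment
a = np.array([12.1, 9.8, 11.4, 10.9, 13.2])
b = np.array([9.2, 8.7, 10.1, 9.9, 8.4])

observed = a.mean() - b.mean()
null = simulate_null_distribution(a, b)
# Two-tailed p-value: proportion of simulated differences at least as extreme as observed
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed difference = {observed:.2f}, p = {p_value:.3f}")
```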
Sources of variability
The different factors that can affect the results of an experiment, important for understanding the reliability and generality of the findings.
t statistic
The statistic we compute to measure and test the difference between the experimental conditions.
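For reference, one standard form of the statistic for two independent groups (the lecture's simulation-based approach may define the comparison differently) is:

$$ t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} $$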
Statistical significance
A determination of whether the results of an analysis reflect a true effect or if they might have occurred by chance.
Population variance vs Sample variance
Population variance is computed from data on the entire population, while sample variance estimates the population variance from a sample, hence the use of a slightly different formula.
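The two formulas differ only in the denominator; dividing by n − 1 corrects the sample's tendency to underestimate the spread of the population:

$$ \sigma^2 = \frac{\sum_i (x_i - \mu)^2}{N} \qquad\qquad s^2 = \frac{\sum_i (x_i - \bar{x})^2}{n - 1} $$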
Full/formal results statement
A structured phrase summarizing the outcomes of t-tests and ANOVAs, including statistics and significance. Ex: t(80) = 2.575, p = .057.
Shortcomings of null hypothesis significance testing
Critiques of this method include issues of binary decision-making and potential for misinterpretation.
File drawer effect
A bias in publication where studies with non-significant results are less likely to be published.
p-hacking
Manipulating data or the analytic methods to obtain significant p-values.
Effect size
A measure of the strength of the relationship between variables or the magnitude of an experimental effect.
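One common effect-size measure for a two-group comparison is Cohen's d (assuming that is the measure intended here): the mean difference scaled by the pooled standard deviation.

$$ d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}} $$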
Confidence intervals
Ranges of values derived from sample statistics that are likely to contain the true population parameter.
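A typical confidence interval for a sample mean takes the form below, where $t^{*}$ is the critical value for the chosen confidence level (e.g., 95%):

$$ \bar{X} \pm t^{*}\,\frac{s}{\sqrt{n}} $$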
ANOVA
Analysis of Variance, a statistical method for comparing three or more groups to see if at least one is different.
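A quick sketch of running a one-way ANOVA in Python, assuming SciPy is available; the three groups of scores are hypothetical:

```python
from scipy import stats

# Hypothetical scores from three conditions of an experiment
group1 = [4.1, 5.0, 4.6, 5.2, 4.8]
group2 = [5.9, 6.3, 5.7, 6.1, 6.4]
group3 = [4.9, 5.1, 5.3, 4.7, 5.0]

# One-way ANOVA: is at least one group mean different from the others?
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F(2, 12) = {f_stat:.2f}, p = {p_value:.4f}")
```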
Between-groups variance
The variation in scores between different groups in an experiment.
Within-groups variance
The variation in scores within the same group in an experiment.
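These two sources of variance combine into the ANOVA test statistic: the F ratio is large when the groups differ from each other more than the variation within groups would lead us to expect by chance.

$$ F = \frac{\text{between-groups variance}}{\text{within-groups variance}} = \frac{MS_{\text{between}}}{MS_{\text{within}}} $$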
Null hypothesis for ANOVAs
The assumption that there are no differences between the means of the groups being compared.
Shape of null distribution for ANOVA
Right-skewed rather than bell-shaped: under the null hypothesis, F values cannot be negative, cluster around 1, and trail off into a long right tail.
Rejecting the null hypothesis for ANOVA
Indicates at least one group mean significantly differs, but does not specify which.
Post-hoc test
A test conducted after an ANOVA to determine exactly which group means are significantly different.
Problems with post-hoc tests
An increased risk of Type I errors from making many comparisons; this can be addressed with corrections such as the Bonferroni correction.
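A Bonferroni correction, for example, divides the overall alpha level by the number of comparisons m, so each individual test is held to a stricter threshold:

$$ \alpha_{\text{per test}} = \frac{\alpha}{m}, \qquad \text{e.g. } \frac{.05}{3} \approx .017 $$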
Correlation
A statistical measure that indicates the extent to which two variables fluctuate together.
Correlational analysis
A method used to assess the strength and direction of relationships between two variables.
Causal relationships in correlation
Correlation alone cannot establish which is the cause: A could cause B, B could cause A, or a third variable C could cause both.
Correlation coefficient
A numerical value ranging from -1 to 1 that expresses the degree of linear relationship between two variables.
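The most common version is Pearson's r (assuming that is the coefficient used in the lecture):

$$ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}} $$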
Effect size for correlation
Usually measured with the squared correlation coefficient (r²), which indicates the proportion of variance in one variable accounted for by the other.
Modeling the null hypothesis for correlations
Assumes no relationship exists between the variables under study.
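A minimal sketch of modeling this null by simulation: shuffling one variable breaks any real pairing, so the resulting correlations show what chance alone produces. NumPy is assumed, and the paired data values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired measurements
x = np.array([1.2, 2.4, 3.1, 4.0, 5.3, 6.1, 7.2, 8.0])
y = np.array([2.1, 2.0, 3.6, 3.9, 5.8, 5.5, 7.9, 7.4])

observed_r = np.corrcoef(x, y)[0, 1]

# Null distribution: shuffle y so any relationship with x is accidental
null_rs = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                    for _ in range(10_000)])

# Two-tailed p-value: how often a shuffled r is as extreme as the observed one
p_value = np.mean(np.abs(null_rs) >= abs(observed_r))
print(f"r = {observed_r:.2f}, p = {p_value:.4f}")
```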
Replication
Repeating a study to confirm results, improving reliability.
Meta-analysis
A statistical technique for combining the results of multiple studies to derive a broader conclusion.
HARKing
Hypothesizing After the Results are Known: presenting a hypothesis formed after seeing the results as if it had been made in advance.
Pre-registration
The practice of publicly registering an experiment's methodology before data collection to reduce bias.