Experimental Design
The plan for how conditions are arranged and participants are assigned in an experiment, which allows researchers to assess the effects of different conditions on participants' behavior.
Independent Variable
A variable that is manipulated in experimental research.
Dependent Variable
The response being measured in a study.
Random Assignment
Participants are randomly assigned to different levels of the independent variable, ensuring each participant has an equal chance of being in any experimental condition.
Matched Groups
Participants are paired based on specific characteristics relevant to the study, and then one member of each pair is randomly assigned to an experimental condition.
Counterbalancing
Distributing order effects across conditions to avoid biases, particularly useful in repeated-measures designs.
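A minimal sketch in Python (the condition labels and participant IDs are hypothetical) of one way to counterbalance: spread every possible condition order evenly across participants.

    import itertools

    conditions = ["A", "B", "C"]  # hypothetical condition labels
    orders = list(itertools.permutations(conditions))  # all 6 possible orders
    participants = [f"P{i}" for i in range(1, 13)]  # 12 hypothetical participants

    # Cycle through the orders so each order is used equally often
    for participant, order in zip(participants, itertools.cycle(orders)):
        print(participant, order)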
Simple Random Assignment
Participants are placed in conditions in such a way that every participant has an equal probability of being placed in any experimental condition.
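A minimal sketch of simple random assignment in Python (participant IDs and condition labels are hypothetical):

    import random

    participants = list(range(1, 21))  # 20 hypothetical participant IDs
    conditions = ["treatment", "control"]

    random.shuffle(participants)  # put participants in a random order
    # Deal participants into conditions in turn, yielding equal group sizes
    groups = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}
    print(groups)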
Matched Random Assignment
Participants are ranked on a pretest measure and then matched in clusters or blocks of size k, where k is the number of conditions in the experiment.
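A minimal sketch of matched random assignment in Python, assuming hypothetical pretest scores and k = 3 conditions:

    import random

    conditions = ["control", "low dose", "high dose"]  # hypothetical; k = 3
    pretest = {"P1": 12, "P2": 19, "P3": 7, "P4": 15, "P5": 22,
               "P6": 9, "P7": 17, "P8": 11, "P9": 20}  # hypothetical scores

    # Rank participants on the pretest measure, highest first
    ranked = sorted(pretest, key=pretest.get, reverse=True)

    # Form blocks of size k, then randomly assign one member of each
    # block to each condition
    assignment = {}
    for i in range(0, len(ranked), len(conditions)):
        block = ranked[i:i + len(conditions)]
        random.shuffle(block)
        for participant, condition in zip(block, conditions):
            assignment[participant] = condition
    print(assignment)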
Repeated Measures Designs
A within-subjects design in which each participant is measured more than once; it requires fewer participants and yields more powerful statistical tests.
Experimental Control
Eliminating or holding constant extraneous factors that might affect the outcome of the study to ensure internal validity.
Systematic Variance
The portion of the total variance in participants' scores on the dependent variable(s) that is related in an orderly way to the variables under study; it is the sum of treatment variance and confound variance.
Treatment Variance
The variance in participants' scores on the dependent variable(s) that is due to the independent variable.
Confound Variance
The variance in participants' scores on the dependent variable(s) that is due to other, uncontrolled variables.
Error (Within-Groups) Variance
Variance in participants' scores on the dependent variable(s) that is due to unsystematic differences among participants rather than to the independent variable.
Measurement Error
Random variability introduced into participants' scores, contributing to error variance.
Three Components of Total Variance
Treatment variance, confound variance, and error variance.
Experimental Control
Standardizing procedures across participants and conditions to reduce unwanted variability and enhance the reliability of results.
Confounding Variables
Variables that may distort the interpretation of study results.
Random Assignment
Assigning participants to different levels of the independent variable randomly to ensure group equivalence.
Internal Validity
The degree to which a researcher draws accurate conclusions about the effects of the independent variable.
Threats to Internal Validity
Biased assignment of participants to conditions, differential attrition, pretest sensitization, history, and miscellaneous design confounds.
Experimenter Expectancy Effects
Researchers' expectations about how participants will respond, which can distort the results of an experiment.
Demand Characteristics
Aspects of the study that indicate to participants how they should behave, which can distort the results of an experiment.
Placebo Effects
Physiological or psychological changes that occur as a result of the mere belief that a change will occur.
Error Variance
Variability in data not accounted for by the independent variable, arising from measurement error, individual differences, and other uncontrollable factors.
Reducing Error Variance
Strategies such as standardization, reliable measurement, and large sample sizes can reduce error variance.
External Validity
The degree to which research findings can be generalized to other samples, settings, and procedures.
Balancing Internal and External Validity
Tightening experimental control raises internal validity but often makes a study more artificial, lowering external validity; researchers must weigh this trade-off against the goals of the study.
Enhancing External Validity Without Sacrificing Internal Validity
Thoughtful design, such as using realistic tasks and varied samples while preserving random assignment and experimental control, can enhance external validity without sacrificing internal validity.
Web-Based Experimental Research
Conducting experiments online, which offers advantages such as large samples and lower resource demands, but disadvantages such as less control over the nature of the sample and the study setting.
One-Way Designs
Experiments in which only one independent variable is manipulated.
Randomized Groups Design
Assigning participants to different conditions randomly.
Matched-Subjects Design
Assigning participants to different conditions based on matching characteristics.
Repeated Measures or Within-Subjects Design
Participants experience all levels of the independent variable.
Posttest-Only Designs
Measuring the dependent variable only after the experimental manipulation has occurred.
Pretest-Posttest Designs
Measuring the dependent variable both before and after the experimental manipulation occurs.
Main Effects
The impact of individual independent variables on the dependent variable.
Interactions
Whether the effects of one variable depend on the level of another in factorial designs.
Factorial designs
Designs that allow for the study of the combined effect of two or more independent variables simultaneously.
Two-way factorial designs
Factorial designs that include two independent variables or factors.
Higher-order factorial designs
Factorial designs with more than two independent variables or factors.
Split-plot designs
Factorial designs that involve a combination of within-subjects and between-subjects factors.
Main effects
The effect of a single independent variable, ignoring all other variables in the model.
Interactions
Occur when the effect of one independent variable differs across the levels of other independent variables.
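A small numeric sketch (all cell means are hypothetical) of how main effects and an interaction are read from a 2 x 2 table of means:

    # Hypothetical cell means for a 2 x 2 factorial design
    means = {("a1", "b1"): 10.0, ("a1", "b2"): 14.0,
             ("a2", "b1"): 11.0, ("a2", "b2"): 21.0}

    # Main effect of A: difference between the row (marginal) means
    a1 = (means[("a1", "b1")] + means[("a1", "b2")]) / 2
    a2 = (means[("a2", "b1")] + means[("a2", "b2")]) / 2
    print("Main effect of A:", a2 - a1)  # 4.0

    # Main effect of B: difference between the column (marginal) means
    b1 = (means[("a1", "b1")] + means[("a2", "b1")]) / 2
    b2 = (means[("a1", "b2")] + means[("a2", "b2")]) / 2
    print("Main effect of B:", b2 - b1)  # 7.0

    # Interaction: the effect of B differs across the levels of A
    b_at_a1 = means[("a1", "b2")] - means[("a1", "b1")]  # 4.0
    b_at_a2 = means[("a2", "b2")] - means[("a2", "b1")]  # 10.0
    print("Interaction contrast:", b_at_a2 - b_at_a1)  # 6.0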
Randomized group factorial design
An approach for assigning participants to conditions within factorial designs where participants are randomly assigned to different groups.
Matched factorial design
An approach for assigning participants to conditions within factorial designs where participants are matched based on certain characteristics.
Repeated measures factorial design
An approach for assigning participants to conditions within factorial designs where each participant experiences all levels of the independent variables.
Mixed factorial design
An approach for assigning participants to conditions within factorial designs that combines aspects of both between-group and within-group designs.
Individual effects
Variability in the dependent variable due to each independent variable separately.
Combined or interactive effects
Variability in the dependent variable due to the interaction of the independent variables.
Error variance
Variability in the dependent variable that cannot be explained by the independent variables.
Three-way factorial designs
Factorial designs that involve three independent variables.
Mixed/expericorr designs
Experimental designs that examine the combined effects of manipulated independent variables and measured participant variables.
Discrete participant variables
Participant variables that allow for natural groupings.
Continuous participant variables
Participant variables measured along a continuum; participants can be grouped on them using a median-split or extreme-groups procedure.
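A minimal sketch of a median split in Python (the scores are hypothetical):

    import statistics

    scores = [3, 8, 5, 10, 7, 2, 9, 6]  # hypothetical participant-variable scores
    cutoff = statistics.median(scores)

    high = [s for s in scores if s > cutoff]
    low = [s for s in scores if s <= cutoff]
    print("median:", cutoff, "high group:", high, "low group:", low)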
Causal inferences
Conclusions about the causal relationship between variables.
Confounding
When the effects of two or more variables cannot be separated.
Significance testing
A statistical method used to determine the probability that an observed effect would be obtained if the independent variable had no effect, that is, if error variance alone were operating.
Effect size
A measure of the magnitude of the effect of the independent variable on the dependent variable.
Confidence intervals
A range of values within which the true population parameter is likely to fall.
Exploratory data analysis
A method that encourages researchers to visually inspect and explore data before formal analysis.
Null hypothesis
The hypothesis that states the independent variable did not have an effect on the dependent variable.
Experimental hypothesis
The hypothesis that states the independent variable did have an effect on the dependent variable.
Type I error
When the null hypothesis is true, but the researcher rejects it.
Type II error
When the null hypothesis is false, but the researcher fails to reject it.
Alpha level
The probability of making a Type I error.
Statistical significance
A finding that has a low probability of occurring as a result of error variance alone.
Power
The probability of correctly rejecting the null hypothesis when it is false, equal to 1 minus the probability of a Type II error.
Small sample size
A study with few participants, which lowers statistical power and may limit the generalizability of the findings.
Power
The probability that a study will correctly reject the null hypothesis when the null hypothesis is false.
Power analysis
An analysis conducted to determine the number of participants needed in a study to achieve sufficient statistical power.
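A minimal sketch of a power analysis using statsmodels, assuming a medium effect (Cohen's d = 0.5), alpha = .05, and desired power = .80:

    from statsmodels.stats.power import TTestIndPower

    # Solve for the per-group sample size of an independent-groups t-test
    n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                              alpha=0.05, power=0.80)
    print(round(n_per_group))  # roughly 64 participants per group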
Type I error
When the null hypothesis is true, but the researcher incorrectly rejects the null hypothesis.
Type II error
When the null hypothesis is false, but the researcher fails to reject the null hypothesis.
Null hypothesis testing
A statistical approach that involves either rejecting or failing to reject the null hypothesis based on the observed data.
Effect size
A measure that quantifies the practical significance of the observed result, emphasizing the magnitude of the effect rather than just statistical significance.
Cohen's d
A common effect size measure that indicates the standardized difference between groups.
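A minimal sketch of computing Cohen's d from two hypothetical samples, using a pooled standard deviation:

    import statistics

    group1 = [5.1, 6.3, 5.8, 7.0, 6.1]  # hypothetical scores
    group2 = [4.2, 5.0, 4.8, 5.5, 4.6]

    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    n1, n2 = len(group1), len(group2)

    # Pooled standard deviation, weighting each group's variance by its df
    sd_pooled = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    d = (m1 - m2) / sd_pooled
    print(round(d, 2))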
Odds Ratio
A measure that assesses the ratio of the odds of an event occurring in one group to the odds of the event occurring in another group.
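A minimal sketch of computing an odds ratio from a hypothetical 2 x 2 frequency table:

    # Hypothetical counts: rows are groups, columns are event / no event
    treated_event, treated_no_event = 30, 70
    control_event, control_no_event = 15, 85

    odds_treated = treated_event / treated_no_event  # 0.43
    odds_control = control_event / control_no_event  # 0.18
    print(round(odds_treated / odds_control, 2))     # about 2.43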
Confidence intervals
A range of values within which the true population parameter is likely to fall, providing a more informative perspective than point estimates.
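A minimal sketch of a 95% confidence interval for a mean, using the t distribution from scipy (the data are hypothetical):

    import statistics
    from scipy import stats

    data = [12.1, 11.4, 13.0, 12.7, 11.9, 12.4, 13.2, 11.6]  # hypothetical
    n = len(data)
    mean = statistics.mean(data)
    sem = statistics.stdev(data) / n ** 0.5  # standard error of the mean

    # 95% CI: mean +/- critical t (with n - 1 df) times the SEM
    t_crit = stats.t.ppf(0.975, df=n - 1)
    print(mean - t_crit * sem, mean + t_crit * sem)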
t-test
A statistical test used to compare means between two groups in an experiment and determine if the observed differences are statistically significant.
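A minimal sketch of an independent-samples t-test with scipy (the scores are hypothetical); the alternative argument corresponds to the one-tailed versus two-tailed distinction defined below:

    from scipy import stats

    control = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3]    # hypothetical scores
    treatment = [3.8, 4.1, 3.6, 4.0, 3.7, 4.2]

    # Two-tailed (nondirectional) by default; pass alternative="greater"
    # or "less" for a one-tailed (directional) test
    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(t_stat, p_value)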
Equivalence of the experimental and control groups
Ensuring that the groups being compared in an experiment are similar in all relevant aspects except for the independent variable.
Confounds
Factors other than the independent variable that may influence the dependent variable and confound the results of the study.
Directional hypothesis
A hypothesis that states which of the two condition means is expected to be larger.
Nondirectional hypothesis
A hypothesis that merely states that the two means are expected to differ, without predicting which mean will be larger.
One-tailed test
A statistical test used when the researcher's prediction is directional.
Two-tailed test
A statistical test used when the researcher's prediction is nondirectional.
Multiple comparisons problem
Conducting multiple statistical tests increases the likelihood of Type I errors.
Bonferroni correction
An adjustment method that divides the desired alpha level by the number of tests conducted to control for the inflation of Type I error.
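A minimal sketch of the Bonferroni correction, applied by hand and checked against statsmodels (the p-values are hypothetical):

    from statsmodels.stats.multitest import multipletests

    p_values = [0.01, 0.04, 0.03, 0.20]  # hypothetical p-values from 4 tests
    alpha = 0.05

    # By hand: compare each p-value to alpha divided by the number of tests
    corrected_alpha = alpha / len(p_values)  # .0125
    print([p < corrected_alpha for p in p_values])

    # The same decision via statsmodels
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=alpha,
                                             method="bonferroni")
    print(reject, p_adjusted)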
Type I error
Incorrectly rejecting the null hypothesis when it is true; the probability of this error is set by the alpha level.
Rationale for ANOVA
The justification for analyzing multigroup designs with ANOVA rather than many separate t-tests: ANOVA compares variance between groups to variance within groups in a single analysis, avoiding inflation of Type I error.
F-test
A statistical test that compares the variance among conditions (between-groups variance) to the variance within conditions (within-groups, or error, variance).
Systematic variance
The variance between groups that is caused by the independent variable.
Error variance
The variance within groups that is not due to the independent variable.
Post hoc tests
Additional tests conducted after ANOVA to identify specific group differences contributing to the overall effect.
Total sum of squares (SStotal)
The total amount of variability in the data.
Sum of squares within-groups (SSwg)
The sum of the sums of squares for each of the experimental groups, representing the variability in responses that is not due to the independent variable.
Mean square within-groups (MSwg)
An estimate of the average within-groups, or error, variance.
Sum of squares between-groups (SSbg)
The degree to which the independent variable causes the group means to deviate from the grand mean.
Mean square between-groups (MSbg)
An estimate of the systematic differences among the groups that are due to the effect of the independent variable.
F-statistic
The ratio of MSbg to MSwg, used to determine the significance of the differences between groups.
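To tie these terms together, a minimal sketch in plain Python (the scores are hypothetical) that computes the sums of squares, mean squares, and F-statistic for a one-way design with three groups:

    # Hypothetical scores for three experimental groups
    groups = {"g1": [4, 5, 6, 5], "g2": [7, 8, 6, 7], "g3": [9, 10, 8, 9]}

    scores = [x for g in groups.values() for x in g]
    grand_mean = sum(scores) / len(scores)

    # SStotal: total variability of all scores around the grand mean
    ss_total = sum((x - grand_mean) ** 2 for x in scores)

    # SSwg: variability of scores around their own group means
    ss_wg = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                for g in groups.values())

    # SSbg: weighted deviation of each group mean from the grand mean
    ss_bg = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                for g in groups.values())

    ms_bg = ss_bg / (len(groups) - 1)            # systematic variance estimate
    ms_wg = ss_wg / (len(scores) - len(groups))  # error variance estimate
    print("F =", ms_bg / ms_wg)  # 24.0 here; note SStotal = SSbg + SSwg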
Follow-up tests
Additional tests conducted after ANOVA to determine precisely which means differ.