Flashcards covering key concepts in experimental design, variables, causal criteria, types of experimental designs, assignment methods, validity, and result interpretation based on lecture notes.
How do manipulated and measured variables differ in a study?
Manipulated variables are controlled by researchers who assign participants to particular levels, while measured variables are observed and recorded by researchers without intervention.
In an experiment, how are independent, dependent, and control variables defined?
The independent variable is manipulated (causal), the dependent variable is measured (outcome), and control variables are held constant on purpose.
What are the three causal criteria used to analyze an experiment's ability to support a causal claim?
Covariance (causal variable related to outcome), Temporal precedence (causal variable comes before outcome in time), and Internal validity (rules out alternative explanations).
How do control variables help an experimenter eliminate design confounds?
Potential design confounds can be identified in advance and held constant as control variables, which increases internal validity.
What is a within-groups/within-subjects experimental design?
Researchers expose all participants to all levels of the independent variable, either through repeated exposures over time or concurrently.
What is a between-subjects/between-groups/independent-groups experimental design?
Separate groups of participants are placed into different levels of the independent variable.
What are the advantages and disadvantages of between-groups designs?
Advantages: Participants are fresh and unaware of the manipulation. Disadvantages: Requires many participants, and pre-existing differences between the groups may exist.
What are the advantages and disadvantages of within-groups designs?
Advantages: Each participant serves in every condition, so the groups are equivalent; fewer participants are needed; and extraneous individual differences (e.g., personality, gender, ability) are controlled. Disadvantages: Order effects can occur, the design is not always possible or practical, and participant behavior can change across conditions (demand characteristics).
When might a between-groups design be used in research?
When comparing the effectiveness of two or more treatments, to minimize carryover effects, when participants might react differently to multiple treatments, or to ensure participants are exposed to only one condition.
When might a within-groups design be used in research?
When controlling for individual differences to reduce variability, in longitudinal studies, or to minimize the influence of confounding variables through repeated measures.
What is an example of a between-groups design?
A sample is split into groups, and each group is given a different condition.
What is an example of a within-groups design?
The entire sample receives the same condition(s) over time.
What are the advantages of a post-test only design?
It allows researchers to test for covariance by detecting differences in the dependent variable, establish temporal precedence because the independent variable comes first, and establish internal validity if conducted well.
What are the disadvantages of a post-test only design?
Lack of baseline data, potential for confounding variables to influence results after intervention, difficulty establishing causality, and no control group.
What are the advantages of a pretest-posttest design?
It ensures no selection effects in the study and enables researchers to track changes in performance over time.
What are the disadvantages of a pretest-posttest design?
Potential for testing effects (the pretest itself influencing posttest results), reliance on the assumption that the groups are equivalent, and the resulting threat to internal validity.
When is a post-test only independent-groups design used in research?
When participants can be randomly assigned to independent-variable groups and need to be tested on the dependent variable only once.
When is a pre-test/post-test independent-groups design used in research?
When participants are randomly assigned to at least two groups and are tested on the key dependent variable twice: once before and once after exposure to the independent variable.
What is an example of a post-test only design?
The notetaking study.
What is an example of a pre-test/post-test design?
A study on the effects of mindfulness training.
Regarding within-groups designs, when might they be used in research?
For longitudinal studies.
What is an example of a within-groups design?
Longitudinal studies.
What is random assignment of participants and what is its impact on internal validity?
Random assignment gives each participant an equal chance of being placed in any group, so individual differences are about the same in each group; it reduces selection bias, balances confounding variables, supports causal inferences, and enhances the generalizability of findings to the larger population.
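A minimal sketch of simple random assignment, assuming a hypothetical list of participant IDs and two illustrative condition labels; shuffling the pool before dealing participants out makes each person equally likely to land in either group.

```python
import random

def randomly_assign(participants, conditions=("treatment", "control")):
    """Shuffle participants, then deal them into conditions round-robin,
    so each person has the same chance of ending up in any group."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Hypothetical participant IDs for illustration only.
print(randomly_assign(range(1, 11)))
```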
What are the two types of restricted random assignment?
Control by holding constant (participants meet enrollment criteria and are randomly assigned) and Control by matching (participants are matched on a characteristic, then one from each matched pair is assigned to a group).
How does matching work, what is its role in internal validity, and when is it preferred over random assignment?
Matching groups participants on a control characteristic (e.g., intelligence test scores) to ensure the groups are comparable; it supports internal validity by identifying key variables that influence the outcome and reducing variability between groups; it is preferred over simple random assignment when group comparability on a specific characteristic must be guaranteed.
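A minimal sketch of control by matching, assuming hypothetical participants with intelligence test scores: participants are ranked on the matching variable, paired off, and one member of each pair is randomly assigned to each group.

```python
import random

def matched_assignment(scores):
    """scores: dict mapping participant ID to a matching variable
    (e.g., an intelligence test score). Rank participants on that
    variable, pair adjacent ones, then randomly split each pair."""
    ranked = sorted(scores, key=scores.get)
    group_a, group_b = [], []
    for first, second in zip(ranked[::2], ranked[1::2]):
        pair = [first, second]
        random.shuffle(pair)
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

# Hypothetical test scores for illustration only.
scores = {"P1": 98, "P2": 121, "P3": 105, "P4": 110, "P5": 99, "P6": 118}
print(matched_assignment(scores))
```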
What factors should be considered when evaluating if a study might benefit from random assignment or restricted random assignment?
Assess if the study aims to establish causal relationships, evaluate potential for selection bias, consider the diversity of the participant pool for generalizability, determine if interventions require balanced groups, analyze the feasibility of random assignment within the design, and review ethical considerations.
Why do researchers control for order effects when conducting within-subjects design experiments?
Order effects threaten internal validity in within-groups designs by acting as confounds: behavior at later levels of the independent variable might be caused by the sequence in which the conditions were experienced rather than by the experimental manipulation itself.
How are order effects controlled in within-subjects designs?
By using complete or partial counterbalancing, so that the order in which conditions are experienced varies across participants.
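A minimal sketch contrasting the two approaches, assuming three hypothetical condition labels: complete counterbalancing enumerates every possible presentation order, while the partial version here uses a Latin-square-style rotation as one common subset of orders.

```python
from itertools import permutations

conditions = ["A", "B", "C"]  # hypothetical condition labels

# Complete counterbalancing: every possible presentation order (n! orders).
complete_orders = list(permutations(conditions))

# Partial counterbalancing: a rotated subset (Latin-square style),
# so each condition appears once in each serial position.
partial_orders = [conditions[i:] + conditions[:i] for i in range(len(conditions))]

print(complete_orders)  # 6 orders
print(partial_orders)   # 3 orders
```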
What skill is required for analyzing experimental designs related to participant grouping?
The ability to classify research studies as either within-subjects or between-subjects designs.
What does it mean to interrogate the construct validity of a measured variable in an experiment?
It means to assess how well the measured variable actually captures the theoretical construct it is intended to measure.
How does one interrogate the construct validity of a manipulated variable, and what is the role of manipulation checks and theory testing?
Interrogating construct validity of a manipulated variable means assessing how well the manipulation represents the theoretical construct; manipulation checks verify if the manipulation had its intended effect, and theory testing helps establish construct validity by confirming if the variable behaves as predicted by theory.
What two aspects of external validity are typically interrogated for an experiment?
Generalization to other populations and generalization to other settings.
Why do experimenters usually prioritize internal validity over external validity when it is difficult to achieve both?
Experimenters prioritize internal validity because it ensures that the independent variable truly caused the change in the dependent variable, making the causal claim valid, even if the findings are not immediately generalizable.
What fundamental concepts related to results analysis should be understood for an experiment?
Effect size (d) and statistical significance.
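A minimal sketch, assuming two small hypothetical groups of scores, of how both concepts are computed: Cohen's d as the mean difference divided by a pooled standard deviation, and statistical significance as the p-value from an independent-samples t-test (here via scipy, which is an assumption about available libraries).

```python
from statistics import mean, stdev
from scipy import stats  # assumed available

treatment = [78, 85, 90, 72, 88, 95, 81]  # hypothetical scores
control = [70, 75, 80, 68, 74, 79, 72]

# Cohen's d: mean difference divided by the pooled standard deviation.
n1, n2 = len(treatment), len(control)
pooled_sd = (((n1 - 1) * stdev(treatment) ** 2 +
              (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)) ** 0.5
d = (mean(treatment) - mean(control)) / pooled_sd

# Statistical significance: independent-samples t-test.
t, p = stats.ttest_ind(treatment, control)

print(f"d = {d:.2f}, t = {t:.2f}, p = {p:.4f}")
```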