Experimental Design and Validity

Flashcards covering key concepts in experimental design, variables, causal criteria, types of experimental designs, assignment methods, validity, and result interpretation based on lecture notes.

34 Terms

1

How do manipulated and measured variables differ in a study?

Manipulated variables are controlled by researchers who assign participants to particular levels, while measured variables are observed and recorded by researchers without intervention.

2

In an experiment, how are independent, dependent, and control variables defined?

The independent variable is manipulated (causal), the dependent variable is measured (outcome), and control variables are held constant on purpose.

3

What are the three causal criteria used to analyze an experiment's ability to support a causal claim?

Covariance (causal variable related to outcome), Temporal precedence (causal variable comes before outcome in time), and Internal validity (rules out alternative explanations).

4

How do control variables help an experimenter eliminate design confounds?

Potential design confounds can be identified in advance and held constant as control variables, which removes them as alternative explanations and increases internal validity.

5

What is a within-groups/within-subjects experimental design?

Researchers expose all participants to all levels of the independent variable, either through repeated exposures over time or concurrently.

6

What is a between-subjects/between-groups/independent-groups experimental design?

Separate groups of participants are placed into different levels of the independent variable.

7

What are the advantages and disadvantages of between-groups designs?

Advantages: Participants come to the task fresh and are unaware of the manipulations in other conditions. Disadvantages: Requires many participants, and preexisting differences between the groups may exist.

8

What are the advantages and disadvantages of within-groups designs?

Advantages: Conditions are equivalent because each participant serves as their own control, fewer participants are required, and extraneous individual differences (e.g., personality, gender, ability) are controlled. Disadvantages: Order effects can occur, the design is not always possible or practical, and participant behavior can change once multiple conditions have been experienced (demand characteristics).

9

When might a between-groups design be used in research?

When comparing the effectiveness of two or more treatments, to minimize carryover effects, when participants might react differently to multiple treatments, or to ensure participants are exposed to only one condition.

10

When might a within-groups design be used in research?

When controlling for individual differences to reduce variability, in longitudinal studies, or to minimize the influence of confounding variables through repeated measures.

11

What is an example of a between-groups design?

A sample is split into groups, and each group is given a different condition.

12

What is an example of a within-groups design?

The entire sample is exposed to every condition, for example one condition after another over time.

13

What are the advantages of a post-test only design?

It allows researchers to test for covariance by detecting differences in the dependent variable, establish temporal precedence because the independent variable comes first, and establish internal validity if conducted well.

14

What are the disadvantages of a post-test only design?

Lack of baseline data, potential for confounding variables to influence results after intervention, difficulty establishing causality, and no control group.

15

What are the advantages of a pretest-posttest design?

It allows researchers to confirm that the groups were equivalent at the start (no selection effects) and to track changes in performance over time.

16

What are the disadvantages of a pretest-posttest design?

Potential for testing effects (the pretest influencing posttest results), reliance on the assumption that the groups are equivalent, and the resulting threat to internal validity.

17

How is a post-test only independent-groups design conducted?

Participants are randomly assigned to independent variable groups and are tested on a dependent variable once.

18

How is a pre-test/post-test independent-groups design conducted?

Participants are randomly assigned to at least two groups and are tested on the key dependent variable twice—once beforehand and once after exposure to the independent variable.

19

What is an example of a post-test only design?

The notetaking study.

20

What is an example of a pre-test/post-test design?

A study on the effects of mindfulness training.

21

Regarding within-groups designs, when might they be used in research?

For longitudinal studies.

22

What is an example of a within-groups design?

Longitudinal studies.

23

What is random assignment of participants and what is its impact on internal validity?

Random assignment ensures each participant has the same likelihood of being assigned to any given group, making individual differences about the same in each group; it reduces selection bias, balances confounding variables, facilitates causal inferences, and enhances the generalizability of findings to a larger population.
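
A minimal sketch of what simple random assignment could look like in practice; the participant IDs and the two group labels are hypothetical, not from the lecture notes:

```python
import random

def randomly_assign(participant_ids, groups=("treatment", "control")):
    """Shuffle the participants, then deal them into groups round-robin,
    so each participant has the same chance of ending up in any group."""
    ids = list(participant_ids)
    random.shuffle(ids)                          # chance alone determines the order
    assignment = {group: [] for group in groups}
    for i, pid in enumerate(ids):
        assignment[groups[i % len(groups)]].append(pid)
    return assignment

# Hypothetical example: eight participants split across two conditions.
print(randomly_assign(range(1, 9)))
```

Because the shuffle is driven by chance, preexisting individual differences should end up spread roughly evenly across the groups, which is the point of the technique.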

24

What are the two types of restricted random assignment?

Control by holding constant (only participants who meet the enrollment criteria are included, then randomly assigned) and control by matching (participants are matched on a characteristic, then one member of each matched pair is randomly assigned to each group).

25

How does matching work, what is its role in internal validity, and when is it preferred over random assignment?

Matching involves grouping participants on a control characteristic (e.g., intelligence test scores) and then assigning one member of each matched set to each group, ensuring the groups are comparable; it supports internal validity by equating key variables that influence the outcome and reducing variability; it is preferred over simple random assignment when researchers must guarantee group comparability on a specific characteristic.
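
A sketch of control by matching along those lines, using made-up intelligence-test scores: participants are sorted on the matching variable, paired with their nearest neighbor, and one member of each pair is randomly assigned to each condition.

```python
import random

# Hypothetical participants with an intelligence-test score (the matching variable).
participants = {"P1": 98, "P2": 112, "P3": 101, "P4": 125, "P5": 99, "P6": 118}

def matched_assignment(scores, conditions=("treatment", "control")):
    """Sort on the matching variable, form consecutive pairs,
    then randomly split each pair across the two conditions."""
    ordered = sorted(scores, key=scores.get)      # similar scores end up adjacent
    groups = {c: [] for c in conditions}
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        random.shuffle(pair)                      # chance decides who gets which condition
        for condition, pid in zip(conditions, pair):
            groups[condition].append(pid)
    return groups

print(matched_assignment(participants))
```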

26

What factors should be considered when evaluating if a study might benefit from random assignment or restricted random assignment?

Assess if the study aims to establish causal relationships, evaluate potential for selection bias, consider the diversity of the participant pool for generalizability, determine if interventions require balanced groups, analyze the feasibility of random assignment within the design, and review ethical considerations.

27

Why do researchers control for order effects when conducting within-subjects design experiments?

Order effects create internal validity problems in within-groups designs by acting as confounds: behavior at later levels of the independent variable might be caused by the sequence in which the conditions were experienced rather than by the experimental manipulation itself.

28

How are order effects controlled in within-subjects designs?

By counterbalancing the order of conditions: complete counterbalancing presents every possible order across participants, while partial counterbalancing uses only a subset of orders (e.g., a Latin square).
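
A short sketch contrasting the two approaches with three hypothetical conditions: complete counterbalancing enumerates every possible order, while partial counterbalancing uses only a subset, here a Latin-square-style rotation in which each condition appears once in every serial position.

```python
from itertools import permutations

conditions = ["A", "B", "C"]   # hypothetical levels of the independent variable

# Complete counterbalancing: every possible order of the conditions (3! = 6 orders).
complete = list(permutations(conditions))

# Partial counterbalancing: a subset of orders, here a rotation in which each
# condition appears once in each serial position (3 orders instead of 6).
partial = [conditions[i:] + conditions[:i] for i in range(len(conditions))]

print(complete)   # [('A', 'B', 'C'), ('A', 'C', 'B'), ...]
print(partial)    # [['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']]
```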

29

What skill is required for analyzing experimental designs related to participant grouping?

The ability to classify research studies as either within-subjects or between-subjects designs.

30

What does it mean to interrogate the construct validity of a measured variable in an experiment?

It means to assess how well the measured variable actually captures the theoretical construct it is intended to measure.

31

How does one interrogate the construct validity of a manipulated variable, and what is the role of manipulation checks and theory testing?

Interrogating the construct validity of a manipulated variable means assessing how well the manipulation represents the theoretical construct; manipulation checks verify whether the manipulation had its intended effect, and theory testing helps establish construct validity by confirming whether the variable behaves as predicted by theory.

32

What two aspects of external validity are typically interrogated for an experiment?

Generalization to other populations and generalization to other settings.

33

Why do experimenters usually prioritize internal validity over external validity when it is difficult to achieve both?

Experimenters prioritize internal validity because it ensures that the independent variable truly caused the change in the dependent variable, making the causal claim valid, even if the findings are not immediately generalizable.

34

What fundamental concepts related to results analysis should be understood for an experiment?

Effect size (Cohen's d) and statistical significance.
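
As a rough illustration (the scores and the scipy-based significance test are assumptions, not from the lecture), Cohen's d divides the difference between the group means by the pooled standard deviation, while a two-sample t-test gives the p-value used to judge statistical significance.

```python
from statistics import mean, stdev
from scipy import stats   # assumed available; used only for the significance test

# Hypothetical posttest scores for two independent groups.
treatment = [14, 17, 15, 18, 16, 19]
control   = [12, 13, 15, 11, 14, 12]

def cohens_d(g1, g2):
    """Effect size d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(g1), len(g2)
    pooled_sd = (((n1 - 1) * stdev(g1) ** 2 + (n2 - 1) * stdev(g2) ** 2)
                 / (n1 + n2 - 2)) ** 0.5
    return (mean(g1) - mean(g2)) / pooled_sd

t_stat, p_value = stats.ttest_ind(treatment, control)   # statistical significance
print(f"d = {cohens_d(treatment, control):.2f}, p = {p_value:.4f}")
```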