Experimental design
The way participants are allocated to the conditions in an experiment.
Independent groups design
Participants take part in only one condition of the experiment.
Strength of independent groups
No order effects such as practice or fatigue since each participant only completes one condition.
Limitation of independent groups
Participant variables may affect results because groups differ in abilities or characteristics.
Reducing participant variables in independent groups
Random allocation is used to distribute differences evenly.
Repeated measures design
Participants take part in all conditions of the experiment.
Strength of repeated measures
No participant variables because the same individuals take part in each condition.
Limitation of repeated measures
Order effects such as practice, boredom or fatigue may impact performance.
Order effects
Changes in performance caused by the order in which conditions are completed.
Practice effect
Improvement in performance due to experience gained in an earlier condition.
Fatigue effect
Poorer performance in later conditions due to tiredness.
Counterbalancing
A method to control order effects by varying the order in which participants complete conditions.
ABBA counterbalancing
Participants complete the conditions in one order and then again in the reverse order (A, B, then B, A), so practice and fatigue effects affect both conditions equally.
Latin square
A more complex counterbalancing technique ensuring each condition appears in each ordinal position equally often across participants.
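The cyclic construction behind a simple Latin square can be sketched in a few lines of Python (an illustrative helper, not part of the flashcards): participant i completes the conditions in the order given by row i, so across every block of n participants each condition appears once in each position.

```python
def latin_square_orders(conditions):
    """Return n condition orders in which each condition
    occupies each position exactly once (cyclic Latin square)."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

orders = latin_square_orders(["A", "B", "C"])
# Row 0: A B C; row 1: B C A; row 2: C A B.
# Reading down any column, every condition appears exactly once.
```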
Matched pairs design
Participants are matched on key variables relevant to the experiment (e.g. IQ, age).
Strength of matched pairs
Reduces participant variables because groups are similar on important characteristics.
Limitation of matched pairs
Time-consuming and difficult to find closely matched participants.
Matched pairs allocation
Members of each matched pair are placed in different conditions.
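The matched pairs procedure above can be sketched as follows (a minimal illustration with a hypothetical data shape, assuming IQ as the matching variable): rank participants on the matching variable, pair adjacent scores, then randomly assign one member of each pair to each condition.

```python
import random

def matched_pairs(participants, key, seed=None):
    """Pair participants with adjacent scores on the matching
    variable, then split each pair randomly across two conditions."""
    rng = random.Random(seed)
    ranked = sorted(participants, key=key)
    cond_1, cond_2 = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)  # chance decides which member goes where
        cond_1.append(pair[0])
        cond_2.append(pair[1])
    return cond_1, cond_2

# Hypothetical participants as (name, IQ) tuples.
people = [("P1", 102), ("P2", 98), ("P3", 120), ("P4", 119)]
g1, g2 = matched_pairs(people, key=lambda p: p[1])
# P1/P2 form one pair and P3/P4 the other; each pair is split
# between the two conditions.
```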
When to use independent groups
When avoiding order effects is important or repeated measures is impractical.
When to use repeated measures
When participant variables must be controlled and fewer participants are available.
When to use matched pairs
When individual differences could confound results but repeated measures is not possible.
Participant variables
Differences between participants such as motivation, intelligence or personality.
Controlling participant variables
Random allocation or matched pairs design can reduce their impact.
Demand characteristics in repeated measures
Increased risk because participants may guess the aim after completing both conditions.
Demand characteristics in independent groups
Less likely because each participant completes only one condition, giving less opportunity to guess the aim.
Design suitability
Choice depends on study aims, practical constraints and control of variables.
Independent groups disadvantage
Requires more participants; harder to recruit large samples.
Repeated measures advantage
Economical design as fewer participants are needed.
Repeated measures disadvantage
High likelihood of demand characteristics influencing behaviour.
Matched pairs advantage
No order effects and fewer participant differences than independent groups.
Matched pairs disadvantage
Matching process may be subjective and imperfect.
Random allocation
A method of assigning participants to conditions using chance to reduce bias.
Random allocation purpose
Ensures participant variables are evenly distributed across conditions.
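Random allocation can be sketched as a short procedure (illustrative names, not from the flashcards): shuffle the participant pool, then split it evenly, so chance rather than the researcher decides group membership.

```python
import random

def random_allocation(participants, seed=None):
    """Shuffle the pool and split it in half, so participant
    variables are distributed across conditions by chance."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Allocate 20 participants (numbered 1-20) to two conditions.
group_a, group_b = random_allocation(range(1, 21))
```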
Counterbalancing purpose
Ensures practice or fatigue effects are balanced across conditions, not favouring one condition.
Types of counterbalancing
ABBA, Latin square, or randomised order.
Order effects control
Needed only in repeated measures designs, since each participant completes more than one condition.
Participant variables control
Eliminated in repeated measures, since the same participants take part in every condition; reduced but not eliminated in matched pairs.
Control group
A group not receiving the IV manipulation, used for comparison.
Experimental group
A group receiving the IV manipulation.
Extraneous variables and design
Design must minimise variables that could influence the DV.
Experimental design validity
Validity increases when confounding variables are minimised through appropriate design choice.
Internal validity and design
Design enhances internal validity when it controls participant and order effects effectively.
Choosing the right design
Depends on balancing control, practicality, and minimising bias.
Strength of using designs strategically
Different designs allow researchers to tailor the study to reduce confounds.
Design-specific ethical issues
Repeated measures may cause boredom or fatigue; independent groups may risk unfair comparisons.