Chapter 6 Independent Group Designs
First: Two Research Traditions in Psychology
Correlational Research (Individual Differences Tradition).
Experimental Research (Experimental Psychology Tradition).
Random Sampling vs. Random Assignment
Why Psychologists Conduct Experiments
Test
Hypotheses from theories
Effectiveness of treatments and programs
Third goal of psychological research
Explanation
Examine the causes of behaviour
Multimethod approach
Seek convergent validity for research findings across methods
Experimental Research
Usually the strongest means to test causation
An experiment must include
Independent variable (IV)
Dependent variable (DV)
Independent variable
Manipulated (controlled) by experimenter
At least 2 conditions (levels)
“Treatment” and “control”
Internal Validity
Differences in performance (DV) can be attributed unambiguously to the effect of the independent variable (IV)
3 conditions for causal inference: covariation, time-order relationship, elimination of plausible alternative causes
Confounding variables
Control techniques to eliminate confounding
Hold conditions constant
Counterbalancing
Control Techniques
Balancing
Random assignment to conditions balances subject characteristics, on average.
Groups are equivalent prior to IV manipulation
All subject variables are balanced
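As a minimal sketch of how random assignment balances groups (the function name, participant IDs, and condition labels are illustrative, not from the chapter), one can shuffle the participant list and deal it into conditions:

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle participants, then deal them into conditions round-robin
    so each condition receives (nearly) equal group sizes."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

# 20 hypothetical participants, two conditions (independent groups:
# each participant appears in exactly one condition)
groups = randomly_assign(list(range(1, 21)), ["treatment", "control"], seed=42)
```

With a large enough sample, chance assignment evens out subject characteristics across the two groups on average.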
Independent Groups Designs
Different individuals participate in each condition of the experiment
No overlap of participants across conditions.
Three types
Randomized groups design
Matched groups design
Natural groups design
Randomized Groups Designs
Individuals are randomly assigned to conditions of the IV.
Logic of causal inference
If groups are equivalent at the beginning of an experiment (through balancing) and conditions are held constant, any differences among groups on the dependent variable are caused by the manipulated independent variable.
Additional Independent Group Designs
Matched Groups Design
Random assignment requires large samples to balance subject characteristics.
Sometimes only small samples are available.
In matched groups designs,
Researchers match participants on 1 or 2 individual differences (matching) variables,
Then randomly assign the members of each matched set to conditions.
Natural Group Designs
Natural group designs
The IV is an individual differences (subject) variable, e.g., age or gender.
Can’t randomly assign participants to these groups, so causal inference is limited.
Threats to Internal Validity
The ability to make causal inferences is threatened when
Intact groups of subjects are used
Extraneous variables are not controlled
Hold conditions constant
Selective subject loss occurs
Mechanical subject loss (e.g., equipment failure), which occurs at random, is not a problem
Demand characteristics and experimenter effects are not controlled.
Use placebo-control and double-blind procedures.
Analysis and Interpretation of Experimental Findings
Use statistical analysis to
Claim IV produced an effect on DV
Rule out the alternative explanation that chance produced any observed effect.
Replication
The best way to determine whether findings are reliable
Repeat the experiment and see if the same results are obtained
Analysis of Experimental Designs
Three steps
Check the data
Errors? Outliers?
Describe the results
Descriptive statistics such as means, standard deviations, effect size
Analyze the data
Inferential statistics
Descriptive Statistics
Mean (central tendency)
Standard deviation (variability)
Confirm what the data reveal
Use inferential statistics to determine whether the IV produced a reliable effect on the DV.
Rule out whether findings are due to chance (error variation).
Two types of inferential statistics
Null Hypothesis Significance Testing
Confidence intervals
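As a hedged sketch of the second approach (the scores are invented for illustration, and scipy is assumed to be available), a 95% confidence interval for the difference between two independent group means can be built from the pooled variance:

```python
import math
from scipy import stats

treatment = [12, 15, 14, 10, 13, 16, 11, 14]
control = [9, 11, 10, 8, 12, 10, 9, 11]

n1, n2 = len(treatment), len(control)
m1, m2 = sum(treatment) / n1, sum(control) / n2
v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)

# Pooled variance and standard error of the mean difference
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))

# 95% CI: difference +/- critical t * SE, with df = n1 + n2 - 2
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = ((m1 - m2) - t_crit * se, (m1 - m2) + t_crit * se)
```

If the interval excludes 0, the mean difference is reliable at the .05 level; the interval also conveys how precisely the difference is estimated.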
Null Hypothesis Significance Testing
Statistical procedure to determine whether the mean difference between conditions is greater than what might be expected due to chance (error variation)
Or more precisely, the probability of obtaining a difference at least as extreme as the one observed, assuming the null hypothesis is true.
p < .05, p < .01, p < .001, etc.
“Alpha level” (predetermined) vs. observed significance level (the p value)
Steps for Null Hypothesis Testing
(1) Assume the null hypothesis is true.
The population means for groups in the experiment are equal.
(2) Use sample means to estimate population means.
(3) Compute the appropriate inferential statistic.
t-test: test the difference between two sample means
F-test (ANOVA): test the difference among three or more sample means
(4) Identify the probability associated with the inferential statistic
p value printed in computer output or can be found in statistical tables.
(5) Compare the observed probability with the predetermined level of significance (alpha), which is usually .05
If the observed p value is less than .05, reject the null hypothesis of no difference
Conclude IV produced a reliable effect
If the observed p value is greater than .05, do not reject the null hypothesis of no difference
Conclude IV did not produce a reliable effect
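The five steps above can be sketched with an independent-samples t-test; the scores and the .05 alpha here are illustrative, and scipy is assumed to be available:

```python
from scipy import stats

treatment = [12, 15, 14, 10, 13, 16, 11, 14]
control = [9, 11, 10, 8, 12, 10, 9, 11]

alpha = 0.05  # predetermined level of significance

# Steps 1-4: assume equal population means, use sample means as estimates,
# compute the t statistic, and obtain its associated p value
t_stat, p_value = stats.ttest_ind(treatment, control)

# Step 5: compare the observed p with alpha
if p_value < alpha:
    decision = "reject the null hypothesis: IV produced a reliable effect"
else:
    decision = "do not reject the null hypothesis"
```

The t-test handles two conditions; with three or more conditions, the analogous call would be an F-test (e.g., one-way ANOVA).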
Effect size
Measure of strength of relationship between the IV and DV
Cohen’s d
d = (difference between treatment and control means) ÷ (average variability of all participants’ scores)
Guidelines for interpreting Cohen’s d:
small effect of IV: d = .20
medium effect of IV: d = .50
large effect of IV: d = .80
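A minimal sketch of the computation (the scores are invented for illustration): the mean difference is divided by the pooled standard deviation of all participants’ scores.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

d = cohens_d([12, 15, 14, 10, 13, 16, 11, 14], [9, 11, 10, 8, 12, 10, 9, 11])
# By the guidelines above, d values near .20, .50, and .80 would indicate
# small, medium, and large effects of the IV, respectively.
```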
Meta-analysis
Summarize effect sizes across many experiments that investigate same IV or DV.
Choose experiments based on their internal validity and other criteria.
Allows researchers to gain confidence in general psychological principles.
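A deliberately simplified sketch of the combining step (the studies and their d values are hypothetical, and real meta-analyses typically weight by inverse variance rather than raw sample size):

```python
def weighted_mean_d(effects):
    """Combine (d, n) pairs into a sample-size-weighted mean effect size.

    Simplification: real meta-analyses usually use inverse-variance weights.
    """
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

# Hypothetical effect sizes from five experiments on the same IV
studies = [(0.45, 30), (0.60, 50), (0.30, 25), (0.55, 40), (0.50, 35)]
overall_d = weighted_mean_d(studies)
```

A single overall effect size like this summarizes what the literature says about the IV more stably than any one experiment.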
External Validity
Questions of external validity
Would the same findings occur
In different settings?
In different conditions?
With different participants?