central limit theorem
the theorem that specifies the nature of the sampling distribution of the mean
central limit theorem definition
given a population with a mean μ and a variance σ², the sampling distribution of the mean will have a mean equal to μ and a variance equal to σ²/N.
The distribution will approach the normal distribution as N, the sample size, increases (this holds regardless of the shape of the population distribution)
if you increase the sample size,
the distribution of the sample means becomes more normal (bell-shaped)
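A quick way to see this is by simulation. The sketch below is my addition (not part of the original cards); it uses NumPy, an arbitrarily chosen skewed exponential population, and made-up values for N and the number of repetitions, then checks that the sample means have mean ≈ μ and variance ≈ σ²/N.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0        # an exponential(1) population has mean 1 and SD 1
N, reps = 30, 10_000        # sample size and number of repeated samples

# draw `reps` samples of size N and compute each sample's mean
sample_means = rng.exponential(scale=1.0, size=(reps, N)).mean(axis=1)

print(sample_means.mean(), mu)              # mean of the sampling distribution ~ mu
print(sample_means.var(), sigma**2 / N)     # its variance ~ sigma^2 / N
```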
5 factors that affect whether the t-value will be statistically significant
the difference between X̄ and μ
N (sample size)
S (standard deviation)
α
one-tailed vs two-tailed tests
the difference between X̄ and μ
the greater the difference, the more likely you'll have statistical significance
N
the greater the sample size, the more likely you'll have statistical significance
the critical value becomes smaller and the standard error (S/√N) shrinks, so t gets larger
S
the greater the standard deviation, the less likely you'll have statistical significance
α
the greater the alpha (the larger the rejection region), the more likely you'll have statistical significance
one-tailed vs two tailed tests
if you're using a one-tailed test and you're right about the direction, you're more likely to have statistical significance because the rejection region on the predicted side has twice the area it would under a two-tailed test
if you're using a one-tailed test and you're wrong about the direction, you'll never have statistical significance because the rejection region in that direction has an area of 0
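The sketch below (my addition, with hypothetical numbers and the helper name one_sample_t chosen for illustration) shows how these factors play out in a one-sample t: the statistic grows with the difference X̄ − μ and with N, shrinks as S grows, and is compared against a smaller critical value for a one-tailed test.

```python
from math import sqrt
from scipy import stats

def one_sample_t(xbar, mu, s, n):
    # t = (X-bar - mu) / (S / sqrt(N))
    return (xbar - mu) / (s / sqrt(n))

t = one_sample_t(xbar=105, mu=100, s=15, n=25)      # 5 / 3 = 1.67
crit_two = stats.t.ppf(1 - 0.05 / 2, df=25 - 1)     # two-tailed critical ~ 2.06
crit_one = stats.t.ppf(1 - 0.05, df=25 - 1)         # one-tailed critical ~ 1.71

print(t, crit_two, crit_one)                        # significant one-tailed, not two-tailed
print(one_sample_t(105, 100, 15, 100))              # larger N -> t = 3.33
```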
effect size
the difference between two population means divided by the standard deviation of either population
it is sometimes presented in raw score units (and sometimes in standard deviations)
effect size in terms of standard deviations
d̂ = (X̄ - μ) / S
guidelines for interpreting d^
trivial, small, medium, large effect size
< 0.2
trivial effect size
≥ 0.2 & < 0.5
small effect size
≥ 0.5 & < 0.8
medium effect size
≥ 0.8
large effect size
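A small helper (my own sketch, with made-up numbers) that computes d̂ = (X̄ − μ)/S and applies the guidelines above:

```python
def d_hat(xbar, mu, s):
    # effect size in standard deviation units, with the cutoffs from the cards
    d = abs(xbar - mu) / s
    if d < 0.2:
        label = "trivial"
    elif d < 0.5:
        label = "small"
    elif d < 0.8:
        label = "medium"
    else:
        label = "large"
    return d, label

print(d_hat(xbar=105, mu=100, s=15))   # ~(0.33, 'small')
```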
confidence interval
an interval, with limits at either end, having specified probability of including the parameter being estimated
Confidence Interval
X̄ ± t_df * (S/√N)
what makes a confidence interval wider?
smaller α (e.g., going from 0.05 to 0.01)
larger s
smaller N
what makes a confidence interval narrower?
bigger α
smaller s
bigger N
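A sketch of the interval X̄ ± t_df * (S/√N), using hypothetical numbers (my addition); SciPy supplies the two-tailed critical t for the chosen α, and the last comment ties back to the wider/narrower cards above.

```python
from math import sqrt
from scipy import stats

xbar, s, n, alpha = 105.0, 15.0, 25, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # ~2.064 for df = 24
margin = t_crit * s / sqrt(n)

print(xbar - margin, xbar + margin)             # ~ (98.8, 111.2)
# smaller alpha, larger s, or smaller n all make `margin` (and the interval) wider
```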
related samples
an experimental design in which the same subject is observed under more than one treatment
ex: comparing students' two sets of quiz scores (DV) after they used two different studying strategies
repeated measures
data in a study in which you have multiple measurements for the same participants
ex: measuring a person's cortisol level before and after a competition
matched samples
an experimental design in which subjects are paired (matched) on some relevant variable and each member of the pair is observed under a different treatment; the paired scores are analyzed as related samples
ex: asking a husband and wife to each provide a score on marital satisfaction
d̄
the mean of the difference scores (equivalently, the difference between the two means)
S_D
the standard deviation of the difference scores
difference scores (gain scores)
the set of scores representing the difference between the subjects' performance on two occasions
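A sketch of a related-samples t test on made-up pre/post scores (my addition): compute the difference scores, then t = d̄ / (S_D/√N) with df = N − 1, and confirm against SciPy's paired test.

```python
import numpy as np
from scipy import stats

pre  = np.array([12, 15, 11, 14, 13, 16, 12, 15])
post = np.array([14, 18, 13, 15, 16, 18, 13, 17])

d = post - pre                          # difference (gain) scores
dbar, s_d, n = d.mean(), d.std(ddof=1), len(d)
t = dbar / (s_d / np.sqrt(n))           # df = n - 1

print(t)
print(stats.ttest_rel(post, pre))       # same t, plus a p-value
```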
advantages of related samples
it avoids the problem of person-to-person variability
it controls for extraneous variables
it requires fewer participants than independent samples designs to have the same amount of power
disadvantages of related samples
order effect
carry-over effect
order effect
the effect on performance attributable to the order in which treatments were administered
ex: you want to know if stimulant A or B improves reaction time more, but subjects may perform better in whichever condition comes second simply because of practice with the task
carry-over effect
the effect of previous trials (conditions) on a subject's performance on subsequent trials
ex: you want to know if Drug A or B improves depression more, but Drug A may still be active in the body when Drug B is administered
independent-sample t tests
are used to compare two samples whose scores are not related to each other, in contrast to related-samples t tests
ex: comparing male and female students' scores on a verbal language test
two assumptions for an independent-samples t test
homogeneity of variance
the samples come from populations with normal distributions
homogeneity of variance
the situation in which two or more populations have equal variances
in practice, this means the sample variances should be similar to one another (each estimates the same population variance)
heterogeneity of variance
a situation in which samples are drawn from populations having different variances
homogeneity assumption is for
population variances, not sample variances
the samples come from populations with normal distributions
however, independent-samples t tests are robust to a violation of this assumption, especially with sample sizes of 30 or more
pooled variance
a weighted average of separate sample variances
standard error of difference between means
the standard deviation of the sampling distribution of differences between means
sampling distribution of differences between means
the distribution of the differences between means over repeated sampling from the same populations
degrees of freedom for one-sample t test
N-1
degrees of freedom for related-samples t test
N-1, where N is the number of pairs (difference scores)
degrees of freedom for independent-samples t test
n₁ + n₂ - 2 (equivalently N - 2, where N = n₁ + n₂ is the total number of scores)
formula for pooled variance
s²_pooled = [(n₁ - 1)s₁² + (n₂ - 1)s₂²] / (n₁ + n₂ - 2)
formula for independent-samples t test
t = (X̄₁ - X̄₂) / √[s²_pooled (1/n₁ + 1/n₂)], with df = n₁ + n₂ - 2
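A sketch of the pooled-variance independent-samples t on made-up scores (my addition); the hand computation should match SciPy's ttest_ind with equal_var=True.

```python
import numpy as np
from scipy import stats

g1 = np.array([23, 25, 28, 22, 26, 27])
g2 = np.array([20, 19, 24, 21, 18, 22])

n1, n2 = len(g1), len(g2)
# pooled variance: weighted average of the two sample variances
s2_pooled = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(s2_pooled * (1 / n1 + 1 / n2))   # standard error of the difference
t = (g1.mean() - g2.mean()) / se_diff              # df = n1 + n2 - 2 = 10

print(t)
print(stats.ttest_ind(g1, g2, equal_var=True))     # same t, plus the p-value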
counterbalancing
presenting the treatment conditions in different orders to different subjects so that order and carry-over effects are balanced across conditions rather than confounded with the treatments; this addresses the disadvantages of related samples
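A minimal counterbalancing sketch (my addition, with hypothetical subject IDs and a simple two-condition AB/BA design): half the subjects get condition A first, the other half get B first.

```python
subjects = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]
orders = [("A", "B"), ("B", "A")]

# alternate the two orders across subjects so each order is used equally often
schedule = {subj: orders[i % 2] for i, subj in enumerate(subjects)}
print(schedule)   # s1: A then B, s2: B then A, ...
```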