decision errors
situations in which incorrect conclusions are made in hypothesis testing despite using the correct procedures
How are decision errors possible?
because decisions about populations are made based on information in samples
Type I error
you reject the null hypothesis and conclude that the study supports the research hypothesis when, in reality, the research hypothesis is false
Why does Type I Error concern psychologists?
because theories, research programs, treatment programs, and social programs are often based on conclusions of research studies
What is the chance of making Type I error?
alpha (α), which is equal to the significance level
Type II error
you fail to reject the null hypothesis when it is false
What is the chance of making Type II error?
beta (β)
How can the chance of making Type II error be reduced?
by setting a very lenient significance level
What is the downside of protecting against one kind of decision error?
it increases the chance of making the other kind
True or false: an effect can be statistically significant without having much practical significance
true
effect size
a measure of the difference between population means
What does effect size show?
how much something changes after a specific intervention
the extent to which two populations do not overlap
What happens in a smaller effect size?
the populations will overlap more
How is raw effect size calculated?
by taking the difference between the population 1 mean and the population 2 mean
standardized mean difference (Cohen’s d)
the difference between population means divided by the standard deviation of population 2 (the comparison population)
formula for standardized mean difference
d = (µ1 - µ2) / σ
µ1 = mean of population 1 (experimental)
µ2 = mean of population 2 (comparison)
σ = standard deviation of population 2
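The formula above can be sketched as a small Python function; the example means and standard deviation below are hypothetical numbers, not from the source.

```python
def cohens_d(mu1, mu2, sigma):
    """Standardized mean difference: d = (mu1 - mu2) / sigma,
    where sigma is the comparison population's standard deviation."""
    return (mu1 - mu2) / sigma

# Hypothetical example: experimental mean 210, comparison mean 200, SD 48
d = cohens_d(210, 200, 48)
print(round(d, 2))  # 0.21 -- a small effect by Cohen's conventions
```

Because d is unit-free, the same function lets you compare studies that measured different variables on different scales.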
effect size conventions
standard rules about what to consider a small, medium, or large effect size, based on what is typical in psychology research
small effect size
0.20
medium effect size
0.50
large effect size
0.80
What does knowing the effect size of a study allow you to do?
compare results with effect sizes found in other studies, even when the other studies have different population standard deviations
What does knowing whether an effect size is small or large allow you to do?
evaluate the overall importance of a result
meta-analysis
a statistical method for combining effect sizes from different studies
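One common way to combine effect sizes (a fixed-effect, inverse-variance weighted average) can be sketched as follows; the function name and the three studies' numbers are illustrative assumptions, not part of the source.

```python
def combined_effect(effects, variances):
    """Fixed-effect meta-analysis: weight each study's effect size
    by the inverse of its variance, so more precise studies count more."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Three hypothetical studies: d values and their sampling variances
print(combined_effect([0.30, 0.55, 0.20], [0.04, 0.09, 0.02]))
```

With equal variances this reduces to a simple mean of the effect sizes; unequal variances pull the combined estimate toward the more precise studies.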
statistical power
the probability that a research study will produce a statistically significant result if the research hypothesis is true
What does statistical power help determine?
how many participants are needed
What does understanding statistical power help you do?
make sense of the results that are not significant or results that are significant but not of practical importance
What tools do researchers use to figure out statistical power?
power software packages
Internet-based power calculators
power tables
What are the 2 main factors that determine statistical power?
effect size
sample size
How does effect size influence statistical power?
larger effect size = greater power
How does sample size influence statistical power?
more participants = greater power
What other factors influence statistical power?
significance level
one-tailed v. two-tailed tests
type of hypothesis-testing procedure
How does significance level affect statistical power?
less extreme (more lenient, e.g., p < .10) = more power
more extreme (stricter, e.g., p < .01) = less power
How does one-tailed v. two-tailed influence statistical power?
power is less with a two-tailed test than with a one-tailed test
Why do researchers consider statistical power?
to help them decide how many people to include in their studies; they need to ensure that they have enough people in the study to see an effect if there is one
What is the standard acceptable level of statistical power?
80%
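How effect size and sample size trade off against the 80% convention can be sketched with a normal-approximation sample-size formula, n = ((z₁₋α/₂ + z_power) / d)², for a one-sample, two-tailed z-test. This simplified approximation (which ignores the far tail) is an assumption for illustration, not a procedure named in the source.

```python
from math import ceil
from statistics import NormalDist

def n_for_power(d, power=0.80, alpha=0.05):
    """Approximate participants needed for a one-sample, two-tailed z-test
    to reach the given power at effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-tailed alpha
    z_power = z.inv_cdf(power)          # z for the desired power
    return ceil(((z_alpha + z_power) / d) ** 2)

print(n_for_power(0.20))  # small effect: many participants needed
print(n_for_power(0.80))  # large effect: far fewer participants needed
```

The pattern matches the factors above: a smaller effect size or a stricter (more extreme) alpha drives the required sample size up.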
What does conducting a study with low statistical power usually result in?
results that are not statistically significant
What are practical ways to increase the power of the study?
increase effect size by increasing the predicted difference between population means
increase effect size by decreasing population standard deviation
increase sample size
use less extreme significance level
use one-tailed test
use more sensitive hypothesis-testing procedure
When is evaluating practical significance important?
when studying hypotheses that have practical importance
clinical significance
the result is big enough to make a difference that matters in treating people
What conclusion can be made if a result is statistically significant with a small sample size?
it is likely to be practically significant
What conclusion can be made if a result is statistically significant with a large sample size?
it may or may not have practical importance; effect size should be considered
What conclusion can be made if a result is not statistically significant with a small sample?
it is inconclusive
What conclusion can be made if a result is not statistically significant with a large sample size?
the research hypothesis is probably false
True or false: a nonsignificant result from a study with low power is truly inconclusive
true
What does a nonsignificant result from a study with high power suggest?
that either the research hypothesis is false or there is less of an effect than was predicted when calculating power
How frequently is effect size mentioned in research articles?
often
Why are effect sizes almost always reported in meta-analyses?
because they are the crucial statistic being combined across studies
Where is power discussed?
grant proposals and sometimes when evaluating nonsignificant results