The Normal Curve
symmetrical, bell-shaped, and unimodal curve in which half the scores are above the mean and half the scores are below the mean
Central Limit Theorem
as sample size increases, the distribution of sample means approaches a normal curve, even when the raw scores come from a skewed population
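A minimal Python sketch of this idea (not part of the original cards): sample means drawn from a right-skewed exponential population lose their skew as the sample size grows. The population, sample sizes, and seed are arbitrary choices for illustration.

```python
# Sketch (not from the cards): sample means from a skewed population lose
# their skew as sample size grows, as the Central Limit Theorem predicts.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)          # arbitrary seed

for n in (2, 10, 50):                   # arbitrary sample sizes
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:2d}  skew of sample means = {skew(means):.2f}")
# Skew shrinks toward 0 (a symmetrical curve) as n increases.
```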
What percent of scores are between the mean and 1 SD above or below the mean in a normal curve?
34%
What percent of scores fall between 1 and 2 SDs above or below the mean in a normal curve?
14%
What percent of scores fall more than 2 SDs above or below the mean in a normal curve?
2%
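The three percentages above can be checked with scipy's standard normal distribution; a quick sketch:

```python
# Sketch: checking the 34% / 14% / 2% figures with the standard normal CDF.
from scipy.stats import norm

mean_to_1sd = norm.cdf(1) - norm.cdf(0)   # between the mean and 1 SD
sd1_to_sd2  = norm.cdf(2) - norm.cdf(1)   # between 1 and 2 SDs
beyond_2sd  = norm.sf(2)                  # more than 2 SDs out (one tail)

print(f"{mean_to_1sd:.0%}, {sd1_to_sd2:.0%}, {beyond_2sd:.0%}")   # 34%, 14%, 2%
```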
Normal Curve Table
gives the precise percentage of scores between the mean and any other Z score
The mean has a Z score of ___.
0
What can the normal curve table be used to determine?
Proportion of scores above or below a particular Z score
Proportion of scores between the mean and a particular Z score
Proportion of scores between 2 Z scores
Determine the Z score for a particular proportion of scores under the normal curve
The table lists ______ Z scores, but they work for negatives too because the curve is ______.
positive, symmetrical
How to figure out the percentage above or below a Z score
Convert raw score to Z score
Draw curve and indicate where Z score falls
Find the exact percentage with the normal curve table and compare it with the estimate
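A hedged sketch of those steps in Python; the raw score, mean, and SD are made-up values, and scipy's normal functions stand in for the normal curve table.

```python
# Sketch of the steps above; X, M, and SD are made-up values.
from scipy.stats import norm

X, M, SD = 85, 70, 10          # hypothetical raw score, mean, standard deviation
Z = (X - M) / SD               # convert raw score to Z score
pct_below = norm.cdf(Z)        # proportion of scores below this Z score
pct_above = norm.sf(Z)         # proportion above (survival function = 1 - cdf)

print(f"Z = {Z:.2f}: {pct_below:.1%} below, {pct_above:.1%} above")
```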
How to figure out a Z score or Raw score from a Percentage
Draw normal curve
Make estimate of the Z score needed
Use normal curve table and convert Z score to raw score
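A sketch of the reverse direction, using scipy's inverse normal CDF in place of the table; the target percentage, mean, and SD below are hypothetical.

```python
# Sketch of the reverse direction; the target percentage, mean, and SD are made up.
from scipy.stats import norm

M, SD = 70, 10                 # hypothetical population mean and SD
top_pct = 0.10                 # raw score that cuts off the top 10%?

Z = norm.ppf(1 - top_pct)      # Z score below which 90% of scores fall
X = M + Z * SD                 # convert the Z score back to a raw score

print(f"Z = {Z:.2f}, raw score = {X:.1f}")   # Z ≈ 1.28, raw score ≈ 82.8
```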
1.96 cuts off the top _____ of the distribution, while -1.96 cuts off the bottom _____.
2.5% (each)
95% of Z-scores lie between ____ and ____.
-1.96, 1.96
99% lie between _____ and _____.
-2.58 and 2.58
99.9% lie between _____ and _____.
-3.29 and 3.29
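These two-tailed cutoffs can be recovered from scipy's inverse normal CDF; a short check:

```python
# Sketch: recovering the two-tailed cutoffs with the inverse normal CDF.
from scipy.stats import norm

for coverage in (0.95, 0.99, 0.999):
    z = norm.ppf(1 - (1 - coverage) / 2)   # upper cutoff; the lower one is -z
    print(f"{coverage:.1%} of Z scores lie between {-z:.2f} and {z:.2f}")
# Prints ±1.96, ±2.58, ±3.29
```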
What is the normal distribution relevant to?
Parameters
Confidence Intervals around a parameter
Null hypothesis significance testing
When does the assumption of normality matter?
mainly in small samples; with large samples, the central limit theorem makes the sampling distribution of the mean approximately normal anyway
Outliers
data points that are markedly different from the rest of the data and can distort the mean, SD, and SE
Where can outliers be found?
Histograms
Box-and-whisker plots
Error bars
Boxplots
made up of a box and two whiskers
The box shows..
The median
Upper and lower quartiles
Limits within which the middle 50% of scores lie
The whiskers show..
The range of scores
The limits within which the top and bottom 25% of scores lie
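A small numpy sketch of the quantities a boxplot displays, computed from made-up scores (the whisker convention here follows the card above: the full range of scores).

```python
# Sketch: the quantities a boxplot displays, computed from made-up scores.
import numpy as np

scores = np.array([3, 5, 6, 7, 8, 9, 10, 12, 14, 21])    # hypothetical data

q1, median, q3 = np.percentile(scores, [25, 50, 75])      # box: quartiles + median
low, high = scores.min(), scores.max()                    # whiskers: range of scores

print(f"box: {q1}-{q3}, median {median}; whiskers: {low}-{high}")
```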
Error Bar Charts
The bar (usually) shows the mean score
The error bar sticks out from the bar like a whisker
The error bar displays precision of mean in one of three ways
Three Ways Precision of the Mean can be Shown
The confidence interval (usually 95%)
The standard deviation
The standard error of the mean
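A sketch computing all three precision measures for a made-up set of scores; the 1.96 multiplier assumes a normal-approximation 95% confidence interval.

```python
# Sketch: the three precision measures an error bar can show, from made-up scores.
import numpy as np

scores = np.array([4.0, 5.5, 6.1, 5.2, 7.3, 4.8, 6.0, 5.9])   # hypothetical data

mean = scores.mean()
sd   = scores.std(ddof=1)                       # sample standard deviation
sem  = sd / np.sqrt(len(scores))                # standard error of the mean
ci   = (mean - 1.96 * sem, mean + 1.96 * sem)   # ~95% CI (normal approximation)

print(f"mean {mean:.2f}, SD {sd:.2f}, SEM {sem:.2f}, "
      f"95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```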
Mixed Designs
Research designs that combine both experimental and correlational methods
The Standard Error
the standard deviation of the sampling distribution of the mean; it can be estimated from a sample as SD/√N
Inferential statistics
Methods used by social and behavioral scientists to go from the results of research studies to conclusions about theories or applied procedures
Sample
relatively small number of instances that are studied in order to make inferences about a larger group from which they were drawn
Population
the larger group from which a sample is drawn
Population parameters
actual value of the mean, SD, etc. for the population
Sample statistics
descriptive statistics, such as the mean or SD, figured from the scores in the particular group studied
Why study samples?
it is often not practical to study an entire population, so researchers study samples chosen to be representative of the population
Simple Random Sample
the most basic probability sampling design, in which every member of the population has an equal chance of being chosen
Systematic Random Sample
every kth member of the population is chosen for inclusion
K
population size/sample size
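A Python sketch of systematic sampling with this k; the population size and sample size are arbitrary.

```python
# Sketch of systematic sampling: every kth member after a random start.
import random

population = list(range(1000))         # hypothetical population of 1,000 members
sample_size = 50
k = len(population) // sample_size     # k = population size / sample size = 20

start = random.randrange(k)            # random starting point within the first k
sample = population[start::k]          # every kth member from that start

print(len(sample), sample[:5])
```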
Stratified Random Sample
divide population into subgroups based on some variable characteristics and draw a simple random sample from each
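A sketch of a proportionate stratified sample: a simple random sample drawn within each subgroup. The strata, their sizes, and the 10% sampling fraction are made up.

```python
# Sketch of a proportionate stratified sample: a simple random sample per subgroup.
import random

# Hypothetical population split into strata by year in school.
strata = {
    "freshman":  [f"fr{i}" for i in range(400)],
    "sophomore": [f"so{i}" for i in range(300)],
    "junior":    [f"jr{i}" for i in range(200)],
    "senior":    [f"sr{i}" for i in range(100)],
}

fraction = 0.10    # sample 10% of each stratum (proportionate design)
sample = [person
          for members in strata.values()
          for person in random.sample(members, int(len(members) * fraction))]

print(len(sample))    # 100 = 40 + 30 + 20 + 10
```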
Disproportionate Stratified Sample
a sampling method where the sample size of each subgroup is not proportional to its size in the population
Cluster sampling
a sampling technique in which whole groups (clusters) of participants that represent the population are selected and studied
Haphazard selection (Convenience Sampling)
non-probability sampling; any technique in which participants are selected in a way not based on random chance
Purposive sampling
a biased sampling technique in which only certain kinds of people are included in a sample
Snowball sampling
a variation on purposive sampling, a biased sampling technique in which participants are asked to recommend acquaintances for the study
Quota sampling
A nonprobability sampling technique in which researchers divide the population into groups and then arbitrarily choose participants from each group
Probability theory
branch of mathematics that provides the tools researchers need to devise sampling techniques that produce representative samples and to statistically analyze the results of their sampling
p
probability, measures expected relative frequency of a particular outcome and can be represented as a proportion or percentage
p =
possible successful outcomes/all possible outcomes
Hypothesis testing
procedure for deciding whether the outcome of a study supports a particular theory or innovation
What does the researcher consider in hypothesis testing?
the probability that the experimental procedure had no effect and that the observed result could have occurred by chance
If the probability that the experiment had no effect is low, what will the researcher do?
Reject the notion that the experimental procedure had no effect and affirm the hypothesis that the procedure did have an effect
Null Hypothesis (Ho)
assumes the manipulation had no effect on the outcome
Research hypothesis (H1)
assumes that manipulation or intervention did have an effect on the outcome
5 Steps of Hypothesis Testing
1. Restate the question as a research hypothesis and a null hypothesis about the populations
2. Write down all known information
3. Determine the cutoff sample score on the comparison distribution at which the null hypothesis should be rejected
4. Determine your sample's score on the comparison distribution
5. Decide whether to reject the null hypothesis
Critical Values of .05 (One Tailed)
-1.64 or 1.64
Critical Values of .05 (Two Tailed)
-1.96 or 1.96
Critical Values of .01 (One Tailed)
-2.33 or 2.33
Critical Values of .01 (Two Tailed)
-2.58 or 2.58
How is the sample's score compared with the critical value (CV)?
Convert raw score to Z score
Z score formula for one sample
Z = (X - M) / SD
When should the null hypothesis be rejected (and the research hypothesis accepted)?
If the sample's Z score is more extreme than the cutoff score (farther into the tail)
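A worked sketch of the five steps for a single score, using made-up population values and the .05 two-tailed cutoff from the cards above.

```python
# Sketch: the five hypothesis-testing steps for a single score, with made-up numbers.

# Steps 1-2: hypotheses restated; known information written down (hypothetical values).
mu, sigma = 100, 15        # comparison (population) distribution mean and SD
X = 132                    # the sample's raw score

# Step 3: cutoff on the comparison distribution (.05 significance level, two-tailed).
cutoff = 1.96

# Step 4: the sample's score on the comparison distribution.
Z = (X - mu) / sigma

# Step 5: decide whether to reject the null hypothesis.
print(f"Z = {Z:.2f}:", "reject H0" if abs(Z) > cutoff else "fail to reject H0")
```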
One-tailed tests
A directional prediction in which the researcher specifies in advance the direction of the expected effect, such as an increase or a decrease
Two-tailed tests
Non-directional prediction in which the researcher expects an effect but does not know in which direction the effect will occur, so it takes into consideration that the sample could be at the extreme of either tail
.05 Significance Level
5% significance level: if the calculated p-value is less than or equal to .05, you reject the null hypothesis and the result is statistically significant at this level
.01 Significance Level
1% significance level: if the calculated p-value is less than or equal to .01, you reject the null hypothesis and the result is statistically significant at this level
Two-tailed tests are more _____ than one-tailed.
conservative
What happens if you use a one-tailed test and the results come out in the wrong direction?
It cannot be considered significant
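A short sketch of why the two-tailed test is the more conservative choice: the same (hypothetical) Z score yields twice the p-value when both tails are counted.

```python
# Sketch: one-tailed vs. two-tailed p-values for the same (made-up) Z score.
from scipy.stats import norm

Z = 1.80                              # hypothetical sample Z score
p_one_tailed = norm.sf(Z)             # area beyond Z in the predicted tail only
p_two_tailed = 2 * norm.sf(abs(Z))    # area beyond |Z| in both tails

print(f"one-tailed p = {p_one_tailed:.3f}, two-tailed p = {p_two_tailed:.3f}")
# ≈ .036 (significant at .05 one-tailed) vs. ≈ .072 (not significant two-tailed)
```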
Can the null hypothesis ever be rejected completely? Why?
no; it can only be shown to be very unlikely that the researcher would have gotten the observed results if the null hypothesis were true
Decision errors
When the right procedure leads to the wrong conclusion
Type I Error
rejects the null hypothesis when it is true, concluding that manipulation had an effect when it did not
Type II Error
fails to reject the null hypothesis when it is false, concludes that manipulation did not have an effect when it did
What would setting a strict significance level, such as p < .001, do?
Decrease probability of Type I error
Increase probability of Type II error
What would setting a lenient significance level, such as p < .10, do?
Increase probability of Type I error
Decrease probability of Type II error
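A simulation sketch of this trade-off for Type I errors: when the null hypothesis is true, every rejection is a false alarm, and the false-alarm rate tracks whatever significance level is used. The number of simulated studies, sample size, and seed are arbitrary.

```python
# Sketch: when the null hypothesis is true, every rejection is a Type I error,
# and the false-alarm rate tracks the significance level that is used.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)                 # arbitrary seed
n_studies, n = 10_000, 25                      # arbitrary simulation settings
means = rng.normal(0, 1, size=(n_studies, n)).mean(axis=1)
z = means * np.sqrt(n)                         # Z score for each simulated study

for alpha in (0.10, 0.05, 0.001):
    cutoff = norm.ppf(1 - alpha / 2)           # two-tailed critical value
    print(f"alpha = {alpha:<5}  Type I error rate = {np.mean(np.abs(z) > cutoff):.3f}")
```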
Effect size
the amount that two populations do not overlap, or the extent to which the experimental procedure had the effect of separating the two groups
Effect size =
(Population 1 M - Population 2 M) / Population SD
Why is effect size used?
it is a standardized value that allows effects to be compared across studies
Cohen's Effect Size Conventions
.20 small
.50 medium
.80 large
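A sketch computing the effect size from the formula above and labeling it (roughly) with Cohen's conventions; the means and population SD are hypothetical.

```python
# Sketch: the effect size formula with made-up population values.
m1, m2, sd = 58.0, 50.0, 16.0          # hypothetical means and population SD
d = (m1 - m2) / sd                     # effect size = (M1 - M2) / population SD

label = "small" if abs(d) < 0.5 else "medium" if abs(d) < 0.8 else "large"
print(f"d = {d:.2f} ({label})")        # d = 0.50 (medium)
```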
Statistical Power
probability that a study will produce a statistically significant result if the research hypothesis is true
What does power depend on?
effect size and sample size
There is more power if..
- Bigger difference between means
- Smaller population SD
- Larger sample size
What does statistical power look like graphically?
Two distributions with little overlap indicate high power, because the means are very different and the variance is small
A study should have at least ____ power to be worth conducting.
80%
How to increase power
Increasing the difference between means
Decreasing the population SD
Increasing the sample size
Using a less stringent significance level
Using a one-tailed rather than a two-tailed test
Using a more sensitive hypothesis-testing procedure
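A sketch of power for a one-sample, two-tailed Z test at the .05 level, computed directly from effect size and sample size; the d and n values are made up.

```python
# Sketch: power of a one-sample, two-tailed Z test at alpha = .05, computed
# directly from effect size d and sample size n (both values made up).
import numpy as np
from scipy.stats import norm

def power(d, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    shift = d * np.sqrt(n)             # how far the true mean sits from H0, in SEs
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

for n in (20, 32, 50):
    print(f"d = 0.50, n = {n:2d}  power = {power(0.50, n):.2f}")
# A medium effect crosses the 80% benchmark at roughly n = 32 in this setup.
```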
Why is power important when interpreting a significant result?
If the sample size was small, the effect is probably large enough to matter in practice; if the sample size was large, the effect may be too small to be useful
Why is power important when interpreting a nonsignificant result?
If the sample size was small, the study is inconclusive; if the sample size was large, the research hypothesis is probably false