Nonparametric
Assumption-freer: makes fewer assumptions about the shape of the population distribution
Nonparametric tests
Because they can be used with small sample sizes, nonparametric tests are desirable for analyzing categorical data obtained from small samples
Degrees of freedom
Has to do with how peaked (or flat) the sampling distribution is for any given statistical test
df = N-1
With two samples, subtract 1 from each sample: df = (N1 − 1) + (N2 − 1) = N1 + N2 − 2
As SS increases df will increase
Estimation
A process whereby we select a random sample from a population and use a sample statistic to estimate a population parameter.
Point estimates
A sample statistic used to estimate the exact value of a population parameter.
Ex: 13.71 years of edu
Interval Estimates
A sample statistic used to estimate a range of values within which the population parameter may fall.
Aka Confidence interval
Ex: 12-14 years of education
Confidence Interval
A range of values (in raw scores), defined by the confidence level, within which the population parameter is estimated to fall.
Defined in terms of confidence level (90%, 95%, 99%)
Can also be defined in terms of margin of error
Confidence Level
The likelihood, expressed as a percentage or a probability, that a specified interval will contain the population parameter.
Used to evaluate the accuracy of an interval estimate
Margin of Error
The radius of a confidence interval
Determining the C.I for means
Calculate the S.E
Decide on C.L and find the corresponding Z score
Calculate the CI
Interpret the results (“We can say with ___ confidence ____”)
Standard Error of the Mean
σ/√N
Confidence Interval
C.I = Ȳ ± Z(σȳ), where σȳ = σ/√N is the standard error of the mean
Confidence Levels + Corresponding Z scores
90%: ± 1.65
95%: ± 1.96
99%: ± 2.58
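The four steps above can be sketched in a few lines of Python; the sample mean, population SD, and N below are invented for illustration (only the 13.71 mean echoes the earlier education example).

```python
import math

# Illustrative numbers (not from the source): sample mean years of
# education, assumed-known population SD, and sample size.
mean_edu = 13.71
sigma = 3.0
n = 500

se = sigma / math.sqrt(n)   # step 1: standard error of the mean
z = 1.96                    # step 2: Z for a 95% confidence level
margin = z * se             # margin of error (radius of the CI)
ci = (mean_edu - margin, mean_edu + margin)   # step 3: the interval
# step 4: "We can say with 95% confidence that the population mean
# falls between the two bounds."
print(f"95% CI: {ci[0]:.2f} to {ci[1]:.2f} years")
```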
Reducing Risk by Increasing Confidence
Larger confidence level = wider range
Trade off
precision decreases (C.I widens)
SS and C.I
As SS increases
Width of C.I decreases
Precision of C.I increases
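A quick sketch of the sample-size effect, holding the confidence level at 95% (the SD of 3.0 is an invented illustration): quadrupling N halves the interval width.

```python
import math

sigma, z = 3.0, 1.96  # illustrative population SD; Z for 95% confidence

for n in (100, 400, 1600):
    # full width of the CI = 2 * margin of error = 2 * Z * (sigma / sqrt(N))
    width = 2 * z * sigma / math.sqrt(n)
    print(f"N={n:4d}  CI width = {width:.3f}")
```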
Statistical Hypothesis Testing + Assumptions
A procedure used to evaluate hypotheses about population parameters based on sample statistics.
All statistical tests assume RANDOM SAMPLING
Tests about MEANS assume an INTERVAL-RATIO level of measurement.
Either assume that the population is NORMALLY DISTRIBUTED, or that
the SAMPLE SIZE is larger than 50 (N>50).
Alt Hypothesis
Claim to be tested; challenges the status quo
H₁: <, >, ≠
One tailed test
Alt hypothesis is directional
< or > specified value
Right Tailed Test
Pop mean is > specified value
H₁: M >
Left Tailed Test
Pop mean is < specified value
H₁: M <
Two Tailed Test
Pop mean isn’t equal to specified value
H₁: ≠
Null Hypothesis
Indicates no significant difference between the population and sample means
Expressed in population parameters
H₀: =, ≥, ≤
The Z statistic (obtained)
Convert the sample mean into a Z score
Compute test stat
Z = (Ȳ − Mȳ) / (σ/√N)
P value
Probability that the “measured difference” would occur by random chance if the null is true
Measures how unusual/rare the obtained statistic is compared to what the null predicts
Smaller p-value = more evidence to reject the null
Larger p-value = less evidence against the null (fail to reject; does not prove the null is true)
Alpha (α)
Level at which the null is rejected
Usually set at the .05, .01, or .001 level
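Putting Z (obtained), the p-value, and alpha together in one sketch; all numbers are invented for illustration, and the standard-normal CDF comes from the stdlib's `statistics.NormalDist`.

```python
import math
from statistics import NormalDist

# Illustrative one-sample Z test (numbers are made up, not from the source)
y_bar, mu, sigma, n = 13.71, 13.0, 3.0, 500
alpha = 0.05

z = (y_bar - mu) / (sigma / math.sqrt(n))   # Z (obtained)
p = 2 * (1 - NormalDist().cdf(abs(z)))      # two-tailed p-value
decision = "reject H0" if p < alpha else "fail to reject H0"
print(f"Z = {z:.2f}, p = {p:.6f} -> {decision}")
```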
5 Steps in Hypothesis Testing
Making assumptions and meeting test requirements
Formulating the null and research hypotheses and stating the alpha
Selecting the sampling distribution and specifying the test statistic
Computing the test statistic
Making a decision and interpreting the results of the test
Type I error
If the null is true and it’s rejected
Type II error
If the null is false and we fail to reject it
T Statistic (Obtained)
t represents the number of standard error units the sample mean is from the hypothesized value of M (assuming the null is true)
t = (Ȳ − M) / (s/√N)
T distribution
Family of curves determined by degrees of freedom
used when the sample size is small (N < 30) and the population standard deviation is unknown
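A minimal one-sample t test, assuming a small invented data set and a critical t value read from a t table (df = 9, two-tailed, α = .05); `s` here is the sample standard deviation.

```python
import math
from statistics import mean, stdev

# Invented sample of 10 scores; hypothesized population mean M = 13.0
sample = [12, 14, 13, 15, 16, 12, 14, 13, 15, 14]
mu = 13.0

n = len(sample)
y_bar = mean(sample)
s = stdev(sample)                     # sample SD (N-1 in the denominator)
t = (y_bar - mu) / (s / math.sqrt(n)) # t (obtained)
df = n - 1

t_crit = 2.262  # two-tailed critical t for df = 9, alpha = .05 (t table)
print(f"t = {t:.2f}, df = {df}, reject H0: {abs(t) > t_crit}")
```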
Sampling distribution of the difference between means
Variances are equal: SȲ1-Ȳ2 = √[((N1−1)s1² + (N2−1)s2²) / (N1+N2−2)] × √[(N1+N2) / (N1·N2)]
t = (Ȳ1 − Ȳ2) / SȲ1-Ȳ2
Variances are unequal: SȲ1-Ȳ2 = √(s1²/N1 + s2²/N2)
df = (s1²/N1 + s2²/N2)² / [(s1²/N1)²/(N1−1) + (s2²/N2)²/(N2−1)]
t = (Ȳ1 − Ȳ2) / SȲ1-Ȳ2
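Both branches can be sketched directly from the formulas; the group means, SDs, and sizes below are invented for illustration.

```python
import math

# Illustrative group statistics (invented): mean, sample SD, N
y1, s1, n1 = 14.2, 2.8, 40
y2, s2, n2 = 12.9, 3.4, 35

# Equal-variance (pooled) standard error of the difference
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
se_pooled = sp * math.sqrt((n1 + n2) / (n1 * n2))
t_pooled = (y1 - y2) / se_pooled

# Unequal-variance standard error and its degrees of freedom
se_unequal = math.sqrt(s1**2 / n1 + s2**2 / n2)
df_unequal = (s1**2 / n1 + s2**2 / n2) ** 2 / (
    (s1**2 / n1) ** 2 / (n1 - 1) + (s2**2 / n2) ** 2 / (n2 - 1)
)
t_unequal = (y1 - y2) / se_unequal
print(f"pooled: t={t_pooled:.2f}  unequal: t={t_unequal:.2f}, df={df_unequal:.1f}")
```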
Bivariate Analysis
A method designed to detect and describe the relationship between two nominal/ordinal variables
Cross Tab
A technique for analyzing the relationship between two variables (IV and DV) that’s been organized in a table
Constructing a Bivariate Table
Lays out the distribution of one variable across the categories of another variable
Classify cases based on their joint scores on two variables
Think of it as frequency distributions joined together in one table
Computing percentages in Bivariate Table
Calculate percentages within each category of the independent variable
Interpret by comparing percentages across the different categories of the independent variable
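A minimal crosstab sketch using only the standard library; the records (IV = gender, DV = opinion) are invented for illustration, and percentages are computed within each IV category (column percentages).

```python
from collections import Counter

# each record: (independent variable, dependent variable) -- invented data
records = [
    ("male", "agree"), ("male", "agree"), ("male", "disagree"),
    ("female", "agree"), ("female", "disagree"), ("female", "disagree"),
    ("female", "disagree"),
]

counts = Counter(records)                      # joint frequencies
col_totals = Counter(iv for iv, _ in records)  # IV category totals

for iv in sorted(col_totals):
    for dv in sorted({d for _, d in records}):
        # percentage within this category of the independent variable
        pct = 100 * counts[(iv, dv)] / col_totals[iv]
        print(f"{iv:7s} {dv:9s} {pct:5.1f}%")
```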
Direct casual relationship
When the relationship between two variables cannot be accounted for by other theoretically relevant variables
Spurious relationship
Relationship between two variables in which both IV and DV are influenced by a causally prior variable and there’s no link between them
3 steps for finding relationship
Divide the observations into subgroups based on the control variable
Reexamine the relationship between the original two variables separately within each control subgroup
Compare partial relationships with original bivariate relationship for total group
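The three steps above can be sketched with the standard library; the records (control = residence, IV = education, DV = voting) are invented for illustration.

```python
from collections import Counter

# each record: (control variable, IV, DV) -- invented data
data = [
    ("urban", "high_edu", "votes"), ("urban", "high_edu", "votes"),
    ("urban", "low_edu", "votes"), ("urban", "low_edu", "abstains"),
    ("rural", "high_edu", "votes"), ("rural", "high_edu", "abstains"),
    ("rural", "low_edu", "abstains"), ("rural", "low_edu", "abstains"),
]

for ctrl in ("urban", "rural"):
    # step 1: subgroup defined by the control variable
    sub = [(iv, dv) for c, iv, dv in data if c == ctrl]
    # step 2: reexamine the IV-DV relationship within the subgroup
    counts = Counter(sub)
    totals = Counter(iv for iv, _ in sub)
    for iv in ("high_edu", "low_edu"):
        pct = 100 * counts[(iv, "votes")] / totals[iv]
        # step 3: compare these partial percentages with the
        # original bivariate percentages for the total group
        print(f"{ctrl}: {iv} voting = {pct:.0f}%")
```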
Conditional Relationships
When a bivariate relationship differs across categories of the control variable
condition met → relationship holds
condition not met → relationship disappears