measures of dispersion
identifies the individual differences of the scores in a sample; how scores are spread around the mean; the extent to which individual scores deviate from one another
homogeneous measure of dispersion
scores are similar
heterogeneous measure of dispersion
wide variation within the scores, resulting in larger values for the measures of dispersion; range and standard deviation are the most common measures
range
simplest measure of dispersion; the difference between the lowest and highest scores obtained for a variable
Standard deviation
average number of points by which the scores of a distribution vary from the mean; visual measure of dispersion
sample characteristics
description of participants who comprise the study sample; ex: ethnicity, age, marital status; the types of calculations/procedures depend on the level of measurement of the demographic variables in the study
demographic variables
age, sex, income, employment, location, homeownership, level of education; most are nominal
clinical variables
selective physical, emotional, and cognitive variables collected and analyzed to describe specific clinical characteristics of study sample; can be dependent variable; ex: diagnoses, blood pressure, blood glucose levels
T-Test for independent samples
parametric and inferential (allows us to make assumptions about the larger population based on the sample); compares differences between two independent samples; the dependent variable must be continuous and normally distributed
t-test for paired samples (dependent samples)
parametric and inferential; compares two sets of data from one group of people; repeated assessment of the same group of people (more than one test)
one-way analysis of variance (ANOVA)
parametric and inferential; compares data between two or more groups or conditions to investigate the presence of differences between those groups; tests one independent variable and one dependent variable
repeated measures ANOVA
parametric; compares multiple sets of data from one group of people; assesses the same group of people over time or refers to naturally occurring pairs
Mann-Whitney U
non parametric alternative to the independent samples t-test; compares differences between two independent samples when there is not a normal distribution or when there is ordinal data that can’t be treated as interval/ratio
Kruskal-Wallis Test
non parametric alternative to one-way ANOVA; compares differences between two or more groups when there is not a normal distribution or if there is ordinal data that can’t be treated as interval/ratio
Friedman Test
non parametric alternative to repeated measures ANOVA; compares multiple sets of data from one group of people when there is not a normal distribution or if ordinal data can’t be treated as interval/ratio
Wilcoxon Signed-Rank Test
non parametric alternative to paired samples t-test; compares two sets of data from one group of people (looking for differences between groups) when the data is not normally distributed or if ordinal data can’t be treated as interval/ratio
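As an illustration (not part of the original cards), here is a minimal SciPy sketch mapping each parametric test above to its nonparametric alternative; the group data and labels are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1 = rng.normal(50, 10, 30)              # e.g., control group scores
group2 = rng.normal(55, 10, 30)              # e.g., intervention group scores
before = group1
after = before + rng.normal(2, 5, 30)        # repeated measure, same people
later = after + rng.normal(1, 5, 30)         # third repeated measure

# Parametric tests (continuous, normally distributed data)
t, p = stats.ttest_ind(group1, group2)       # independent samples t-test
t, p = stats.ttest_rel(before, after)        # paired samples t-test
f, p = stats.f_oneway(group1, group2)        # one-way ANOVA

# Nonparametric alternatives (non-normal or ordinal data)
u, p = stats.mannwhitneyu(group1, group2)    # vs. independent t-test
h, p = stats.kruskal(group1, group2)         # vs. one-way ANOVA
w, p = stats.wilcoxon(before, after)         # vs. paired t-test
c, p = stats.friedmanchisquare(before, after, later)  # vs. repeated ANOVA
```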
Pearson Chi-Square Test
non parametric test that compares differences between groups on a variable at the nominal level; can reveal whether the difference in proportions (percentages) between categories is statistically improbable; not an alternative to another test, just for nominal data
One-way Chi Square
compares different levels of one variable (nominal)
Two-way Chi Square
tests whether proportions in level of one nominal variable are significantly different than the proportions in a second nominal variable
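A minimal sketch (not from the source) of one-way and two-way chi-square tests with scipy.stats; the counts are invented for illustration.

```python
import numpy as np
from scipy import stats

# One-way: do observed counts across the levels of one nominal
# variable differ from equal expected proportions?
observed = [18, 22, 30]                      # counts per category (made up)
chi2, p = stats.chisquare(observed)

# Two-way: are the proportions in one nominal variable independent
# of a second nominal variable? (a 2x2 contingency table here)
table = np.array([[20, 30],                  # rows: group 1 / group 2
                  [25, 15]])                 # columns: outcome yes / no
chi2, p, df, expected = stats.chi2_contingency(table)
```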
Pearson Product-Moment Correlation Coefficient (Pearson’s r)
parametric, inferential statistic computed from two continuous, normally distributed variables
What is value of r between?
-1.00 (lower values of x are associated with lower values of y) and +1.00 (higher values of x are associated with higher values of y)
Spearman Rank-Order Correlation Coefficient
non parametric alternative to Pearson’s r; examines the association between two continuous variables when one or both variables are not normally distributed or when ordinal variables can’t be converted to interval/ratio
Phi
non parametric alternative to Pearson’s r when the two variables being correlated are dichotomous (two options)
What is the value of Phi?
between -1.0 and +1.0, where 0 represents no association
Cramer’s V
nonparametric alternative to Pearson’s r when the two variables being correlated are both nominal
What is the value of Cramer’s V
between 0 and 1, where 0 represents no association between the variables and 1 represents a perfect association
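A sketch (added, not from the cards) of these correlation coefficients in Python; phi and Cramer’s V are derived here from the chi-square statistic, with made-up data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)

r, p = stats.pearsonr(x, y)        # parametric; continuous, normal data
rho, p = stats.spearmanr(x, y)     # nonparametric alternative

# Phi / Cramer's V for nominal variables, from a chi-square statistic
table = np.array([[20, 30], [25, 15]])            # 2x2 table (made up)
chi2, _, _, _ = stats.chi2_contingency(table, correction=False)
n = table.sum()
k = min(table.shape)                              # smaller table dimension
cramers_v = np.sqrt(chi2 / (n * (k - 1)))         # equals |phi| for a 2x2
```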
Odds Ratio
commonly used to obtain an indication of association when both the predictor (independent) and the dependent variable are dichotomous; the ratio of the odds of an event occurring in one group to the odds of it occurring in another group
What does an Odds Ratio value of 1.0 indicate?
the predictor does not affect the odds of the outcome; no association
What does an Odds Ratio value of >1.0 indicate?
the predictor is associated with higher odds of the outcome (positive association); the bigger the number, the higher the odds of the outcome
What does an Odds Ratio value of <1.0 indicate?
the predictor is associated with lower odds of the outcome; the lower the number, the lower the odds of the outcome
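A quick sketch (assumed counts, not from the source) of computing an odds ratio from a 2x2 table.

```python
#                outcome yes   outcome no
# exposed            a=20          b=30
# not exposed        c=10          d=40
a, b, c, d = 20, 30, 10, 40
odds_exposed = a / b            # odds of the outcome in the exposed group
odds_unexposed = c / d          # odds of the outcome in the unexposed group
odds_ratio = odds_exposed / odds_unexposed   # (a*d) / (b*c) ~ 2.67
# OR > 1: exposure is associated with higher odds of the outcome
```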
Simple and Multiple Linear Regression
provides an estimate of the value of the dependent variable based upon the independent variable or set of independent variables (predictors); multiple independent variables (known) predicting one dependent variable
power analysis
determines an adequate sample size for the study; usually conducted using computer software
Type I Error
false positive; when the results of the study falsely/incorrectly indicate that there’s a significant difference between groups when there actually is no difference; incorrect rejection of a true null hypothesis (it should have been retained)
Type II Error
false negative; when the results of the study falsely/incorrectly indicate that there’s not a significant difference when there actually is; incorrectly retaining a false null hypothesis (it should have been rejected; the difference is there but was not detected)
Power
probability that a statistical test will detect an effect when it actually exists (the degree to which the null hypothesis is false); a deciding factor in determining an adequate sample size for descriptive, correlational, quasi-experimental, and experimental studies
How is power calculated?
1 − β (the complement of the Type II error rate)
What is the conventional value of power?
0.80 (β = 0.20, so 1 − 0.20 = 0.80); the test will have an 80% chance of detecting an effect if an effect actually exists; β should not exceed 0.20 (power should not fall below 0.80)
A Priori Power
when power analysis is performed before the study begins; the preferred approach
Post Hoc Power
when power analysis is performed after the study; should be reported in the results section of a study that fails to reject the null hypothesis; if a relationship was found and power is high (0.80), it strengthens the meaning of the findings
factors of power analysis
-level of significance (alpha level), typically α ≤ 0.05
-probability of obtaining a significant result (power, 1 − β), usually 0.80
-the hypothesized or actual effect size (association or difference)
-sample size; if the other three are known, the fourth can be calculated (see the sketch below)
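As a sketch of how the four factors trade off, assuming the statsmodels library (not mentioned in the source): given effect size, alpha, and power, the required sample size follows.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis for an independent-samples t-test
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # hypothesized Cohen's d (moderate)
    alpha=0.05,        # level of significance
    power=0.80,        # 1 - beta
)
print(round(n_per_group))   # ~64 participants per group
```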
two-tailed tests (normal distribution with two tails of outliers) require ____ sample sizes than one-tailed tests (outliers only on one side)
larger
the smaller the effect size, the ____ the necessary sample size
larger
effect size
indicates how strong the relationship (differences or associations) is between variables and the strength of the differences between groups; degree to which the phenomenon is present in the population; degree to which the null hypothesis is false
a ___ effect size would be selected if the researcher thought the effect would be larger
larger
small effect sizes require ___ samples to detect ____ differences
larger; small
Cohen’s d
used for the Independent Samples T-test; effect size measure for the comparison of 2 groups; represents the difference between the means of groups 1 and 2 in SD units
calculation for Cohen’s d
(mean of group 1-mean of group 2)/SD (of either group)
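A minimal Python sketch of this calculation (added for illustration; a pooled SD is also common in practice, but the card uses the SD of either group).

```python
import numpy as np

def cohens_d(group1, group2):
    """(mean of group 1 - mean of group 2) / SD, as on the card above."""
    return (np.mean(group1) - np.mean(group2)) / np.std(group1, ddof=1)

# Example: means of 55 and 50 with an SD near 10 give d around 0.5 (moderate)
```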
small effect size for Cohen’s d
0.20
moderate effect size for Cohen’s d
0.50
large effect size for Cohen’s d
0.80
Cohen’s f
used for one-way ANOVA; expresses effect size in SD units for two or more groups; identifies the magnitude of differences among groups
small effect size for Cohen’s f
0.10
moderate effect size for Cohen’s f
0.25
large effect size for Cohen’s f
0.40
Pearson’s r is its own _____ _____
effect size
small effect size for Pearson’s r
0.10
moderate effect size for Pearson’s r
0.30
large effect size for Pearson’s r
0.50
Odds Ratio is its own ___ ____
effect size
small effect size for Odds Ratio
1.5
moderate effect size for Odds Ratio
2.5
large effect size for Odds Ratio
4.3
d
difference in percentages in group 1 versus group 2; used for a two-sample comparative design where the dependent variable is dichotomous
small effect size for d
0.05
moderate effect size for d
0.15
large effect size for d
0.25
Coefficient of Determination or R²
used in simple/multiple linear regression; represents the percentage of variance in y explained by the predictor (x)
small effect size for R²
0.02
moderate effect size for R²
0.15
large effect size for R²
0.26
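An illustrative sketch (not from the source) computing R² from a simple linear regression with SciPy; the data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)                 # predictor
y = 2 * x + rng.normal(size=100)         # outcome with noise

result = stats.linregress(x, y)
r_squared = result.rvalue ** 2           # coefficient of determination
```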
Frequency Table
method of organizing data by listing every possible value in the first column and the frequency of each value in the second column
Ungrouped Frequency Distribution
lists all categories of the variable for which there are data and tallies each datum against the listing
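A tiny sketch (added) of building an ungrouped frequency table in Python with collections.Counter.

```python
from collections import Counter

scores = [3, 5, 5, 7, 3, 5, 8]
freq = Counter(scores)                 # value -> frequency
for value in sorted(freq):
    print(value, freq[value])          # column 1: value; column 2: frequency
```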
Theoretical Normal Curve
a theoretical frequency distribution of all possible scores; no real distribution exactly fits the normal curve; symmetrical, unimodal and has continuous values (mean, median, and mode all equal)
Skewness interferes with the ____ of many statistical analyses
validity
Positive Skew
largest portion of data is below the mean; mean is greater than the median which is greater than the mode
Negative Skew
largest portion of data is above the mean; mean is less than the median which is less than the mode
Kurtosis
degree of peakedness/steepness of the frequency distribution which is related to the spread or variance of scores
Leptokurtic
extremely peaked distribution; kurtosis value above zero (>1 is extreme); the curve is tall and skinny because the data are not widely spread
Mesokurtic
intermediate degree of kurtosis; value near zero (between −1 and +1)
Platykurtic
relatively flat distribution; kurtosis value below zero (< −1 is extreme)
What skewness and kurtosis statistic values are fairly severe and could impact outcome of parametric analysis techniques?
values greater than or equal to +1 or less than or equal to −1
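A short sketch (added) of checking skewness and kurtosis with scipy.stats against the ±1 severity threshold above; note that SciPy reports excess kurtosis, so 0 is mesokurtic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.exponential(size=200)     # a positively skewed sample

skewness = stats.skew(scores)          # > 0 indicates positive skew
kurt = stats.kurtosis(scores)          # excess kurtosis; 0 = mesokurtic
severe = abs(skewness) >= 1 or abs(kurt) >= 1
```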
Shapiro-Wilk’s W test
test of normality that assesses whether a variable’s distribution is skewed and/or kurtotic (i.e., whether the distribution is normal or not); if only one of the values is severe, check the p value to see whether parametric analysis can be used
significant deviation from normality
Shapiro-Wilk’s p<0.05
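A minimal sketch (added) of the Shapiro-Wilk test in scipy.stats, using synthetic scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(50, 10, 100)

w, p = stats.shapiro(scores)
normal_enough = p >= 0.05   # p < 0.05 -> significant deviation from normality
```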
Descriptive Statistics
computed to reveal characteristics of the sample data and to describe study variables; crucial to an understanding of the fundamental properties of the variables being studied
Measures of Dispersion
measure the individual differences of the members of the population or sample; indicate how values in a sample are dispersed around the mean (how spread out the values in the sample are around the mean)
Difference (Deviation) Scores
obtained by subtracting the mean from each score/data point
A positive difference/deviation score is ____ the mean
above
A negative difference/deviation score is ___ the mean
below
Mean Deviation
average difference score using absolute values; Σ|data point − mean| / n (n = number of people in the sample)
Variance (s²)
the sample variance; denominator is (n − 1)
Variance (σ²)
the (whole) population variance; denominator is N
Standard Deviation
measure of dispersion that is the square root of the variance
Standard Error
describes extent of the sampling error; determines the magnitude of the variability associated with the mean
Small Standard Error
indicates the sample mean is close to the whole population mean
Large Standard Error
less certainty that the sample mean approximates the population mean; sample mean not that close to population mean
Standard Error formula
standard deviation / √n (n = number of people in the sample)
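A sketch (added) computing the dispersion quantities above with NumPy on a small invented sample.

```python
import numpy as np

scores = np.array([4, 7, 8, 10, 11])
mean = scores.mean()

deviation = scores - mean                   # difference (deviation) scores
mean_deviation = np.abs(deviation).mean()   # average absolute deviation
s2 = scores.var(ddof=1)                     # sample variance (n - 1)
sigma2 = scores.var(ddof=0)                 # population variance (N)
sd = np.sqrt(s2)                            # standard deviation
se = sd / np.sqrt(len(scores))              # standard error of the mean
```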
Degrees of Freedom (df)
number of independent pieces of information that are free to vary in order to estimate another piece of information; the number of values in a calculation that are free to vary; when using a confidence interval, df = n − 1
Confidence Intervals (CI)
determines how closely the sample mean approximates the population mean; the standard error of the mean is used
What must Confidence Intervals include?
-standard error value
-t value (need alpha value and df)
-df (n-1)
-mean
Confidence Intervals calculation
mean ± (t value)(standard error)
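A sketch (added) of this CI calculation, using scipy.stats to look up the t value for alpha = 0.05 and df = n − 1.

```python
import numpy as np
from scipy import stats

scores = np.array([4, 7, 8, 10, 11])
mean = scores.mean()
se = scores.std(ddof=1) / np.sqrt(len(scores))   # standard error
df = len(scores) - 1                             # degrees of freedom, n - 1
t = stats.t.ppf(1 - 0.05 / 2, df)                # two-tailed t, alpha = 0.05

ci_lower, ci_upper = mean - t * se, mean + t * se   # 95% CI
```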