Analysis of Variance (ANOVA)
hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations)
Factor
In ANOVA, the variable (independent or quasi-independent) that designates the groups being compared.
Levels (levels of the factor)
The individual conditions or values that make up a factor
Two-factor design / factorial design
a study that combines two factors
Single-factor designs
studies that have only one independent variable (or only one quasi-independent variable)
Ex. a single-factor, independent-measures design (one independent variable, with a separate sample for each treatment)
Testwise alpha level
the risk of a Type I error, or alpha level, for an individual hypothesis test.
Experimentwise alpha level
when an experiment involves several different hypothesis tests, the total probability of a Type I error accumulated across all of the individual tests in the experiment. Typically, the experimentwise alpha level is substantially greater than the alpha level used for any one of the individual tests.
Between-treatments variance
measures how much difference exists between the treatment conditions
Treatment effect
the systematic differences between treatments that are caused by the treatments themselves.
Ex. if treatments really do affect performance, then scores in one treatment should be systematically different from scores in another condition.
Within-treatments variance
measures how much difference exists inside each treatment condition; provides a measure of how big the differences are when there is no treatment effect (when H0 is true)
F-ratio
ratio of variance between treatments and variance within treatments
helps determine whether any treatment effects exist
F = variance between treatments ÷ variance within treatments = (differences including any treatment effects) ÷ (differences with no treatment effects)
Error term
the denominator of the F-ratio for ANOVA
provides a measure of the variance caused by random and unsystematic differences.
when the treatment effect is zero (H0 is true), it measures the same sources of variance as the numerator of the F-ratio, so the value of the F-ratio is expected to be nearly equal to 1.00.
Mean square (MS)
In ANOVA, it is customary to use this term in place of the term variance.
the mean of the squared deviations: MS = SS ÷ df
Distribution of F-ratios
all possible F values that can be obtained when the null hypothesis is true.
ANOVA summary table
summary of all ANOVA calculations
SS, df, MS, F-value, and p-value
Eta squared
η²
the percentage of variance accounted for by the treatment effect
η² = SSbetween treatments ÷ SStotal
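The pieces above (SS, df, MS, the F-ratio, and η²) fit together in one calculation; a minimal sketch, using three small made-up samples:

```python
# One-way (single-factor) ANOVA computed from scratch.
# The three samples below are hypothetical data for illustration.

groups = [
    [4, 3, 6, 3, 4],   # treatment 1
    [0, 1, 3, 1, 0],   # treatment 2
    [1, 2, 2, 0, 0],   # treatment 3
]

N = sum(len(g) for g in groups)          # total number of scores
k = len(groups)                          # number of treatments
grand_mean = sum(sum(g) for g in groups) / N

# SS between: n * (group mean - grand mean)^2, summed over groups
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# SS within: squared deviations of each score from its own group mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
ss_total = ss_between + ss_within

df_between = k - 1
df_within = N - k

ms_between = ss_between / df_between     # mean square = SS / df
ms_within = ss_within / df_within        # this is the error term

F = ms_between / ms_within               # the F-ratio
eta_sq = ss_between / ss_total           # proportion of variance explained

print(f"F({df_between}, {df_within}) = {F:.2f}, eta^2 = {eta_sq:.2f}")
# -> F(2, 12) = 11.25, eta^2 = 0.65
```

Every value printed here would appear in the ANOVA summary table (SS, df, MS, F).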
Post hoc tests / posttests
additional hypothesis tests done after an ANOVA to determine exactly which mean differences are significant and which aren’t
Pairwise comparisons
comparing individual treatments two at a time
Tukey’s HSD test
allows you to compute a single value that determines the minimum difference between treatment means that is necessary for significance
Name of value produced by Tukey’s HSD test:
Honestly significant difference or HSD
What can you conclude if the mean difference exceeds Tukey’s HSD?
that there is a significant difference between treatments
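The HSD value itself is easy to compute once q is looked up; a sketch, where the q value (for k = 3 treatments and df = 12 at α = .05) and the MS within are assumed numbers for illustration:

```python
import math

# Tukey's HSD: the minimum mean difference needed for significance.
# q comes from a Studentized range table for k treatments and
# df_within; 3.77 (k = 3, df = 12, alpha = .05) is taken as given here.
q = 3.77
ms_within = 1.33   # error term from the overall ANOVA (assumed value)
n = 5              # number of scores in each treatment

hsd = q * math.sqrt(ms_within / n)
print(f"HSD = {hsd:.2f}")

# Any pair of treatment means differing by more than HSD is significant.
mean_1, mean_2 = 4.0, 1.0
print("significant" if abs(mean_1 - mean_2) > hsd else "not significant")
```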
Scheffé test
uses an F-ratio to evaluate the significance of the difference between any two treatment conditions.
Numerator of the F-ratio:
an MS between treatments
How is an MS between treatments calculated?
using only the two treatments you want to compare
What is the denominator of the F-ratio?
same MSwithin that was used for the overall ANOVA
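A sketch of the Scheffé comparison, with hypothetical samples and an assumed MS within; note that df between uses k from the original study, not 2 − 1:

```python
# Scheffé test sketch (hypothetical numbers): compare treatments 1 and 2
# using only those two samples for SS between, but the overall ANOVA's
# MS within and df.
t1 = [4, 3, 6, 3, 4]
t2 = [0, 1, 3, 1, 0]

n = len(t1)
grand_mean_2 = (sum(t1) + sum(t2)) / (2 * n)
ss_between = sum(n * (sum(g) / n - grand_mean_2) ** 2 for g in (t1, t2))

k = 3                     # number of treatments in the ORIGINAL study
df_between = k - 1        # Scheffé keeps the overall df, not 2 - 1
ms_between = ss_between / df_between

ms_within = 1.33          # error term from the overall ANOVA (assumed)
F = ms_between / ms_within
print(f"F = {F:.2f}")
```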
What are similarities between ANOVA and t tests?
both use sample data to test hypotheses about population means
Differences between ANOVA and t tests:
t-tests are limited to situations in which there are only two treatments to compare
ANOVA can be used to compare two or more treatments
ANOVA provides researchers with much greater flexibility in designing experiments and interpreting results
What is the goal of the analysis done by ANOVA
to determine whether the mean differences observed among the samples provide enough evidence to conclude that corresponding mean differences exist among the populations
What happens to the experimentwise alpha level as the number of separate tests increases?
it increases
A large value for the test statistic provides evidence that:
the sample mean differences (numerator) are larger than would be expected if there were no treatment effects (denominator)
Matrix
a set of numbers arranged in rows and columns so as to form a rectangular array
Cell
an individual element or value located at the intersection of a row and a column in a matrix.
each one represents a specific value identified by its position in the matrix
Main effect
mean differences among the levels of one factor
The mean differences between columns or rows describe:
the main effect for a two-factor study
Interaction
between two factors, it occurs whenever the mean differences between individual treatment conditions, or cells, are different from what would be predicted from the overall main effects of the factors
Simple main effects
the effect of one factor at a single level of the other factor
Correlation
statistical technique that is used to measure and describe the relationship between two variables
Positive correlation
two variables tend to change in the same direction
as value of X variable increases/decreases from one individual to another, Y variable also tends to increase/decrease
Negative correlation
two variables tend to go in opposite directions
as X variable increases, Y variable decreases = inverse relationship
Direction of the relationship
sign of the correlation, positive or negative, describes the direction of the relationship
Envelope
line that encloses the data, and often helps you see the overall trend in the data.
When an envelope is shaped roughly like a football:
the correlation is around 0.7
Envelopes fatter than a football indicate what?
that the correlation is closer to 0
Narrow shaped envelopes indicate what?
correlations closer to 1.00
Pearson correlation
measures the degree and the direction of the linear relationship between two variables
Linear relationship
how well the data points fit a straight line
Sum of products (SP)
measures the amount of covariability between two variables
The value for SP can be calculated with either a:
definitional formula or a computational formula
Definitional formula
SP = Σ(X − MX)(Y − MY)
Computational formula
SP = ΣXY − (ΣX)(ΣY) ÷ n
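The two formulas always give the same SP; a quick check on a small made-up data set:

```python
# Sum of products (SP) computed both ways; hypothetical data.
X = [1, 2, 4, 5]
Y = [3, 6, 4, 7]
n = len(X)
mx = sum(X) / n
my = sum(Y) / n

# Definitional formula: sum of (X - Mx)(Y - My)
sp_def = sum((x - mx) * (y - my) for x, y in zip(X, Y))
# Computational formula: sum(XY) - sum(X) * sum(Y) / n
sp_comp = sum(x * y for x, y in zip(X, Y)) - sum(X) * sum(Y) / n

print(sp_def, sp_comp)   # the two results agree
```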
Outliers
extreme data points
Restricted range
observed data for a variable or variables is limited to a smaller portion of its potential range
Coefficient of determination
r², measures the proportion of variability in one variable that can be determined from its relationship with the other variable
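SP combines with the two sums of squares to give the Pearson r, and squaring r gives the coefficient of determination; a sketch on made-up data:

```python
import math

# Pearson correlation r = SP / sqrt(SSx * SSy), then r^2.
# Hypothetical data.
X = [1, 2, 4, 5]
Y = [3, 6, 4, 7]
n = len(X)
mx, my = sum(X) / n, sum(Y) / n

sp  = sum((x - mx) * (y - my) for x, y in zip(X, Y))
ssx = sum((x - mx) ** 2 for x in X)
ssy = sum((y - my) ** 2 for y in Y)

r = sp / math.sqrt(ssx * ssy)
print(f"r = {r:.3f}, r^2 = {r * r:.3f}")
# -> r = 0.600, r^2 = 0.360
```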
Correlation matrix
results from multiple correlations are most easily reported in this table
Spearman correlation
result when Pearson correlation formula is used with data from an ordinal scale (ranks)
Monotonic
a relationship that is consistently one-directional: as X increases, Y consistently increases (or consistently decreases), though not necessarily at a constant rate
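Since the Spearman correlation is just the Pearson formula applied to ranks, it can be sketched directly; the scores here are hypothetical and tie-free:

```python
import math

# Spearman correlation sketch: rank each variable, then apply the
# ordinary Pearson formula to the ranks. No ties in this example.
X = [10, 4, 7, 2]
Y = [50, 30, 45, 20]

def ranks(values):
    # rank 1 = smallest score (assumes no tied scores)
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sp  = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((y - mb) ** 2 for y in b)
    return sp / math.sqrt(ssa * ssb)

rs = pearson(ranks(X), ranks(Y))
print(rs)   # perfectly monotonic data -> rs = 1.0
```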
Point-biserial correlation
used to measure relationship between two variables in situations in which one variable consists of regular, numerical scores, but the second variable has only two values.
Dichotomous variable
variable with only two values
Phi-coefficient
the correlation computed when both variables (X and Y) measured for each individual are dichotomous
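Both the point-biserial correlation and the phi-coefficient are just the Pearson formula applied after coding each dichotomous variable as 0/1; a sketch on made-up data:

```python
import math

# Point-biserial / phi sketch: code dichotomous variables as 0/1
# and apply the ordinary Pearson formula. Hypothetical data.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sp  = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((y - mb) ** 2 for y in b)
    return sp / math.sqrt(ssa * ssb)

scores = [8, 7, 6, 5, 2, 3]       # regular numerical variable
group  = [1, 1, 1, 0, 0, 0]       # dichotomous variable coded 0/1
r_pb = pearson(scores, group)     # point-biserial correlation
print(round(r_pb, 3))

x = [1, 1, 0, 0]                  # both variables dichotomous -> phi
y = [1, 0, 1, 0]
phi = pearson(x, y)
print(round(phi, 3))
```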