If other factors are held constant, increasing the level of confidence from 95% to 99% will cause the width of the confidence interval to:
Increase.
When the null hypothesis is true, then F=MSbetween/MSwithin will be equal to:
1 (one).
In an Analysis of Variance test (ANOVA), what term is used to signify (or is equivalent to) variance?
Mean square
When conducting an independent measures t-test, if the null hypothesis is rejected:
The mean of one sample is so far from the mean of the other sample that the decision is that the samples come from populations that have different mean values.
What are the three parts to conducting an independent measures t-test?
1-The population variances are estimated.
2-The comparison is made against a t-distribution.
3-The variance of the distribution of differences between means is computed.
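The three parts above can be sketched in pure Python (a minimal pooled-variance sketch; the sample data in the usage line are hypothetical):

```python
import math

def independent_t(sample1, sample2):
    """Independent-measures t-test with pooled variance (sketch)."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # 1 - Estimate the population variances (unbiased, dividing by n - 1).
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # 3 - Compute the variance of the distribution of differences
    #     between means, using the pooled variance estimate.
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se_diff = math.sqrt(sp2 / n1 + sp2 / n2)
    # 2 - The resulting t is compared against a t-distribution
    #     with n1 + n2 - 2 degrees of freedom.
    t = (m1 - m2) / se_diff
    return t, n1 + n2 - 2

# Hypothetical data: two groups of n = 8 each, so df = 8 + 8 - 2 = 14.
t, df = independent_t([1, 2, 3, 4, 5, 6, 7, 8], [2, 3, 4, 5, 6, 7, 8, 9])
```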
When conducting an independent measures t-test the null hypothesis is rejected if:
the t-statistic you compute is more extreme than the critical t.
When conducting an ANOVA, you decide to reject the null hypothesis. What must be true?
Between variability is greater than within variability.
What is the probability of making a Type I error when you reject the null?
Alpha (typically set at 0.05).
What is the probability of making a Type II error when you reject the null?
0
When do you normally use analysis of variance rather than the independent measures t-test?
When there are more than two means to compare.
The assumption that the population variances are the same is:
Homogeneity of variance.
If there is no treatment effect, the F ratio is near:
One.
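This can be checked by simulation: when every group is drawn from the same population (i.e., the null is true and there is no treatment effect), the computed F ratios cluster near one. A minimal sketch, where the group sizes and seed are arbitrary choices:

```python
import random

random.seed(0)

def f_ratio(groups):
    """F = MS_between / MS_within for a one-way layout."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ms_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2
                     for g in groups) / (k - 1)
    ms_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups) / (n - k)
    return ms_between / ms_within

# Draw 3 groups of 10 from the SAME normal population, many times.
fs = []
for _ in range(2000):
    groups = [[random.gauss(0, 1) for _ in range(10)] for _ in range(3)]
    fs.append(f_ratio(groups))
mean_f = sum(fs) / len(fs)  # hovers near 1 when the null is true
```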
If you obtain a significant F-statistic then you know that:
At least two means are significantly different from one another.
An independent measures experiment uses two samples with n=8 in each group to compare two experimental treatments. The t-statistic from the experiment will have degrees of freedom equal to:
14
When doing an independent samples t-test, when MUST you pool the variance?
When the samples are of unequal sizes.
Between variability can also be thought of as
Between groups variability.
Within variability can also be thought of as
Within groups variability.
In repeated measures, we are able to quantify and remove ???? from MS.
Variability in individual responses.
What is the per comparison error rate?
What alpha is set to at the individual comparison.
What is the experiment-wise error rate?
The total error rate for all experimental tests.
What is the family-wise error rate?
The overall error rate for a group/family of comparisons.
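The relationship between the per-comparison rate and the family-wise rate can be sketched numerically (assuming independent comparisons):

```python
# With alpha set per comparison, the family-wise error rate for
# c independent comparisons is 1 - (1 - alpha)**c.
alpha = 0.05
for c in (1, 3, 10):
    familywise = 1 - (1 - alpha) ** c
    print(f"{c} comparisons: family-wise error rate = {familywise:.3f}")
```

With alpha = .05 per comparison, three comparisons already push the family-wise rate above .14, and ten push it above .40, which is why corrected procedures exist.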
The total variability can also be thought of as:
Between variability + within variability.
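This partition of the total sum of squares can be verified directly (a sketch; the two groups in the usage line are hypothetical):

```python
def ss_partition(groups):
    """Partition total SS into between-group and within-group SS."""
    allx = [x for g in groups for x in g]
    grand = sum(allx) / len(allx)
    ss_total = sum((x - grand) ** 2 for x in allx)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return ss_total, ss_between, ss_within

# Hypothetical data: SS_total equals SS_between + SS_within exactly.
total, between, within = ss_partition([[1, 2, 3], [4, 5, 6]])
```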
Nonparametric tests are also referred to as ???? free tests?
Distribution.
What is true about chi-squares?
1-Chi-square is used primarily with nominal data.
2-No expected frequencies should be less than 5.
When computing a chi-square test of independence one compares:
Observed frequencies to expected frequencies.
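That comparison of observed to expected frequencies can be sketched for a contingency table (statistic only; the 2x2 table in the usage line is hypothetical):

```python
def chi2_independence(table):
    """Chi-square statistic for a test of independence on a contingency table."""
    rows = [sum(r) for r in table]          # row totals
    cols = [sum(c) for c in zip(*table)]    # column totals
    total = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            # Expected frequency under independence: (row total * col total) / N
            exp = rows[i] * cols[j] / total
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical 2x2 table; every expected frequency here is 15.
stat = chi2_independence([[10, 20], [20, 10]])
```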
If you fail to reject the null hypothesis in a chi-square test for goodness of fit, then the expected and observed:
Frequencies for the cells should be approximately equal.
When does one conduct an ANOVA?
When you wish to compare more than two sample means (x-bar).
What is a multiple comparison procedure (post-test) and why does one need to conduct one when conducting ANOVA?
ANOVA only tells us that at least two groups differ; it does not tell us which ones. A multiple comparison procedure compares the group means pairwise to determine which specific means differ.
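One simple multiple comparison procedure is a Bonferroni correction, which divides alpha across all pairwise comparisons to hold down the family-wise error rate (a sketch; k = 4 groups is a hypothetical choice):

```python
from itertools import combinations

k = 4  # hypothetical number of groups from the ANOVA
comparisons = list(combinations(range(k), 2))  # all pairwise comparisons
# Bonferroni: test each pair at alpha / (number of comparisons)
alpha_adjusted = 0.05 / len(comparisons)
```

With 4 groups there are 6 pairwise tests, so each is run at alpha = .05 / 6 ≈ .0083.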
What are the assumptions of repeated measures?
1-Normality
2-Homogeneity of variances
3-Correlation among pairs are equal
What assumptions are repeated measures not robust to?
Correlations among pairs being equal.
What type of data does one need to have in order to conduct a chi-square test?
Frequency/categorical data.
How does Chi-square test of independence differ from the chi-square goodness of fit test?
1-Chi-square: independence (contingency) considers two variables at once to determine if they are independent of each other.
2-Chi-square goodness of fit considers one variable at a time to determine whether its observed frequencies differ from what we would expect by chance.