T-tests, ANOVA, etc


99 Terms

1
New cards

One-Sided T-Test

tests if the mean is greater or less than the population, but not both (you have a specific directional hypothesis)

2
New cards

Two-Sided T-Test

examines if a sample mean is significantly different from a population mean, regardless of whether it’s greater or less (testing for any difference)
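In SciPy, the one-sided vs. two-sided choice is controlled by the `alternative` argument of the t-test functions. A minimal sketch with invented scores (assumes SciPy ≥ 1.6, where `alternative` was added):

```python
from scipy import stats

sample = [52, 55, 49, 58, 54, 51, 57]  # hypothetical test scores

# Two-sided: is the mean different from 50 in either direction?
t_two, p_two = stats.ttest_1samp(sample, popmean=50)

# One-sided: is the mean specifically greater than 50?
t_one, p_one = stats.ttest_1samp(sample, popmean=50, alternative='greater')
```

Because the sample mean here sits above 50, the one-sided p-value is half the two-sided one; the t statistic itself is identical.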

3
New cards

T-Test

Compare the means of only two groups to determine if there's a statistically significant difference between them

4
New cards

3 Types of T-tests

One sample, two sample, paired tests

5
New cards

Parametric Tests

T-tests and simple ANOVA. These assume the variances between groups are equal, the data is normally distributed, and the sample size is big enough to approximate the population

6
New cards

Nonparametric statistics

Alternative tests we can use when parametric assumptions are violated: chi-square tests, Wilcoxon tests, Mann-Whitney U tests, the Friedman test

7
New cards

One Sample Z-test

used when we want to test the difference between a sample mean and a population mean (or between the means of 2 independent samples) when the population standard deviation is KNOWN or the sample size is large. Ex: does the average GRE score for this program differ from the population's average GRE score?

8
New cards

One-sample t-test

used to determine if a sample mean significantly differs from a known or hypothesized population mean. Used when the population standard deviation is unknown. ex: see if the average height of students in a particular class differs significantly from the average height of all students in the school.
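A minimal sketch of a one-sample t-test with `scipy.stats` (the heights and population mean are invented):

```python
from scipy import stats

# Hypothetical class heights (inches); school-wide mean assumed known to be 67
heights = [64, 66, 68, 70, 71, 65, 69, 72, 66, 68]
t_stat, p_value = stats.ttest_1samp(heights, popmean=67)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```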

9
New cards

Assumptions of one-sample Z test

Normality, independence, and the true standard deviation of the population is known; uses the normal distribution

10
New cards

Assumptions of one-sample T test

Normality, independence; uses the t distribution

11
New cards

Independent samples T-Test

used to determine if there is a significant difference between the means of two independent groups. It's used when comparing the means of two separate, unrelated groups of data. (uses between-subjects design)
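A sketch of an independent-samples t-test in `scipy.stats` (group scores are made up):

```python
from scipy import stats

# Two independent (between-subjects) groups -- hypothetical scores
control   = [23, 25, 28, 30, 27, 24, 26]
treatment = [31, 33, 29, 35, 32, 30, 34]
t_stat, p_value = stats.ttest_ind(control, treatment)
```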

12
New cards

Student T-Test

used to determine if there's a significant difference between the means of two groups. we make the assumption the 2 groups have same population standard deviation

13
New cards

Welch’s Test

used to compare the means of two groups when the variances of those groups are not assumed to be equal. Often preferred over Student's t-test because it's more robust to violations of the equal-variance assumption
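In `scipy.stats.ttest_ind`, the `equal_var` flag switches between the two tests. A sketch with invented data whose spreads clearly differ:

```python
from scipy import stats

# Hypothetical groups with visibly unequal spread
low_var  = [50, 51, 49, 50, 52, 50]
high_var = [40, 60, 35, 65, 45, 70]

# equal_var=False requests Welch's test; True (the default) is Student's
t_w, p_w = stats.ttest_ind(low_var, high_var, equal_var=False)
t_s, p_s = stats.ttest_ind(low_var, high_var, equal_var=True)
```

With equal group sizes the two t statistics coincide, but Welch's smaller degrees of freedom give a more conservative (larger or equal) p-value.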

14
New cards

Paired/Dependent Samples T-Test

used to compare the means of two related groups. It's employed when you have two sets of measurements for the same subjects or matched pairs. This test determines if there's a statistically significant difference between the means of these related groups. 
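A sketch of a paired t-test with `scipy.stats.ttest_rel` (before/after values are invented):

```python
from scipy import stats

# Same six people measured before and after an intervention
before = [120, 130, 125, 140, 135, 128]
after  = [115, 126, 120, 135, 130, 124]
t_stat, p_value = stats.ttest_rel(before, after)
```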

15
New cards

Q-Q plot (Quantile-Quantile plot)

used to assess whether a dataset follows a specific theoretical distribution, such as a normal distribution. It does this by plotting the quantiles of the sample data against the quantiles of the theoretical distribution. If the data points closely follow a straight line (often a 45-degree line), it suggests the data is well-modeled by the theoretical distribution

16
New cards

Shapiro-Wilk Test

used to determine if a sample of data was drawn from a normally distributed population. compares the distribution of your sample data to a perfect normal distribution with the same mean/standard deviation
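A sketch of the Shapiro-Wilk test via `scipy.stats.shapiro`, run on deliberately normal simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0, scale=1, size=50)  # drawn from a normal distribution

w_stat, p_value = stats.shapiro(sample)
# A large p (e.g. > .05) is consistent with normality;
# a small p suggests the sample is not normally distributed
```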

17
New cards

Ways to Check Normality of a Sample

QQ plots and Shapiro-Wilk Test

18
New cards

How do you check for Homogeneity of Variance?

Levene’s Test

19
New cards

Levene’s Test

used to assess the equality of variances for a variable calculated for two or more groups. It's often used as a preliminary test before conducting an ANOVA (analysis of variance) to check if the assumption of equal variances is met. If the variances are significantly different, it suggests that the data may not be suitable for ANOVA. 
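A sketch of Levene's test with `scipy.stats.levene` (the groups are invented, with similar means but very different spreads):

```python
from scipy import stats

tight = [8, 9, 10, 11, 12]
wide  = [0, 5, 10, 15, 20]
stat, p_value = stats.levene(tight, wide)
# A small p suggests the equal-variance assumption is violated
```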

20
New cards

What test do you use when the normality assumption is violated?

Mann-Whitney U test
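A sketch of the Mann-Whitney U test with `scipy.stats.mannwhitneyu` (invented scores chosen so the groups do not overlap at all, giving U = 0):

```python
from scipy import stats

group_a = [1, 2, 3, 4, 5]
group_b = [6, 7, 8, 9, 10]
u_stat, p_value = stats.mannwhitneyu(group_a, group_b)
```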

21
New cards

What test do you use when the homogeneity of variance assumption is violated?

Welch’s test

22
New cards

Chi Square Test

used to compare observed data with expected data. It's primarily used to examine the relationship between two categorical variables, specifically to determine if they are independent of each other. The test helps determine if any observed differences between the variables are due to chance or if there's a genuine association. can be used to detect adverse impact

23
New cards

Chi-square goodness-of-fit test

compares the distribution of observed frequencies of a single categorical variable to an expected distribution. Requires knowledge of probability of occurrence outside of your data (real world). Ex: voter preferences in a sample matches the known distribution of preferences in a population
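A sketch of the goodness-of-fit test with `scipy.stats.chisquare` (the voter counts and population proportions are invented):

```python
from scipy import stats

# Hypothetical sample of 100 voters vs. known population proportions
observed = [45, 35, 20]   # counts observed in the sample
expected = [50, 30, 20]   # population proportions scaled to n = 100
chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
```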

24
New cards

Chi-square test of independence

determine if two categorical variables are dependent or independent of each other. don’t need to know the probability expected in real world. You only need counts for two categorical variables and each variable must have two or more categories. the relationship between gender (male/female) and favorite color (blue/green/pink)
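A sketch of the test of independence with `scipy.stats.chi2_contingency` (the contingency table is invented):

```python
import numpy as np
from scipy import stats

# Hypothetical 2x3 contingency table: gender (rows) x favorite color (columns)
table = np.array([[20, 30, 10],
                  [25, 25, 15]])
chi2, p_value, dof, expected = stats.chi2_contingency(table)
# dof = (rows - 1) * (cols - 1)
```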

25
New cards

Fisher's exact test

used to analyze the relationship between two categorical variables, particularly when dealing with small sample sizes or sparse data. It determines if there's a significant association between the variables by calculating the probability of observing the data (or more extreme data) if there were no association (null hypothesis). 

26
New cards

McNemar's test

a non-parametric statistical test used to analyze paired nominal data, specifically when dealing with dichotomous variables (variables with two categories). It's designed to assess whether there's a significant change in the proportions of these categories between two related measurements on the same subjects or matched pairs. It helps determine if a treatment or intervention has a statistically significant effect on a binary outcome when the same individuals are measured before and after the intervention or when data is matched. 

27
New cards

ANOVA

Analysis of variance. used to compare the means of two or more groups. basic logic: we partition the variability into between and within group sources (variances). Try to figure out how much of the total variability is the effect of our variable and how much of the total variability is due to other factors.

28
New cards

Simple ANOVA/One-Way ANOVA

used when you have one factor with 3 or more levels (one categorical independent variable with 3 or more levels and one continuous dependent variable). Single-factor experiments: only one independent variable, the simplest experimental design. In practice it's difficult to isolate only one variable that causes a behavior
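A sketch of a one-way ANOVA with `scipy.stats.f_oneway`, using the wall-color example from later in this deck (the crying scores are invented):

```python
from scipy import stats

# One factor (wall color) with three levels
blue   = [3, 2, 4, 3, 2]
green  = [5, 6, 5, 7, 6]
yellow = [9, 8, 10, 9, 8]
f_stat, p_value = stats.f_oneway(blue, green, yellow)
```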

29
New cards

Repeated Measures ANOVA

used when you test the same participants more than twice (within subjects). It's designed to analyze data where the observations are dependent, meaning they are not independent of each other. This test is particularly useful when dealing with data where the same individuals are measured multiple times, such as in longitudinal studies or experiments with repeated treatments. 

30
New cards

F statistic

test statistic that assesses whether the means of three or more groups are significantly different; it doesn't say which groups differ. It's an omnibus test

31
New cards

Factorial ANOVA

Experiment which there are two or more independent variables. The independent variables are also called factors

32
New cards

Omega-Squared

(ω²) is a statistical measure used to quantify the effect size in analysis of variance (ANOVA). It represents the proportion of variance in the dependent variable that is explained by the independent variable(s). Unlike eta-squared (η²), omega squared is considered a less biased estimate of population variance, especially with smaller sample sizes. 

33
New cards

Coefficient of Determination

(R²), a statistical measure that represents the proportion of variance in the dependent variable that is explained by the independent variable(s) in a regression model. AKA how much variance they’ll share. the more 2 variables have in common, the more variance they’ll share

34
New cards

Simpson’s Paradox

a statistical phenomenon where a trend appears in different groups of data but disappears or reverses when these groups are combined. It highlights how misleading conclusions can be drawn from aggregated data without considering underlying subgroups.

35
New cards

Bessel’s Correction

a method used in statistics to reduce bias when estimating the population variance from a sample. It involves dividing the sum of squared differences by n-1 instead of n, where n is the sample size. This adjustment provides a less biased estimate of the population variance. 
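The n vs. n−1 divisor corresponds to NumPy's `ddof` argument. A small deterministic sketch:

```python
import numpy as np

data = np.array([2.0, 4.0, 6.0, 8.0])

biased   = data.var(ddof=0)  # divides by n     -> biased estimate
unbiased = data.var(ddof=1)  # divides by n - 1 -> Bessel's correction

print(biased, unbiased)  # 5.0 vs. 6.666...
```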

36
New cards

Cronbach’s Alpha

measure of internal consistency, specifically the reliability of a test or scale. It indicates how well the items on a test measure the same construct or concept. A higher Cronbach's alpha (closer to 1) suggests greater reliability, meaning the items are more consistent in their measurement

37
New cards

Parsimony

The principle of selecting the simplest model that adequately explains the data. It emphasizes using the fewest possible parameters or variables to achieve a good fit, thus avoiding overfitting and promoting model interpretability and generalizability

38
New cards

Dummy Coding

method to represent categorical variables in regression models by creating binary (0 or 1) variables. This allows you to include categorical data, which is not numerical, into models designed for numerical data

39
New cards

Post Hoc Test

a statistical test conducted after an ANOVA (Analysis of Variance) or other similar tests to determine which specific groups or means differ significantly from each other, when the initial test indicates an overall significant difference. It essentially "clears up" which specific pairs or sets of data are responsible for the significant overall result

40
New cards

Bonferroni Test

a type of post hoc test used to control for the increased risk of false positives when conducting multiple hypothesis tests simultaneously, particularly after an ANOVA test. It works by adjusting the significance level (alpha) for each individual comparison, making it more stringent. This helps to ensure that any statistically significant findings are less likely to be due to chance alone
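The adjustment itself is just a division; a sketch with hypothetical pairwise p-values:

```python
# Bonferroni: divide the overall alpha by the number of comparisons
alpha = 0.05
n_comparisons = 3                       # e.g. all pairs among 3 groups
adjusted_alpha = alpha / n_comparisons  # ~0.0167 per comparison

p_values = [0.01, 0.04, 0.20]           # hypothetical pairwise p-values
significant = [p < adjusted_alpha for p in p_values]
print(significant)  # [True, False, False]
```

Note that 0.04 would pass an unadjusted 0.05 threshold but fails the Bonferroni-adjusted one.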

41
New cards

Regression Formula

Y = b₀ + b₁X, where Y is the dependent variable, X is the independent variable, b₀ is the y-intercept, and b₁ is the slope of the line. This formula represents a straight line that best fits a set of data points, allowing for predictions of Y based on given X values

42
New cards

Regression Coefficient

a statistical measure that represents the average change in the dependent variable for a one-unit change in the independent variable

43
New cards

R Squared

AKA coefficient of determination, a statistical measure that represents the proportion of variance in the dependent variable that is explained by the independent variable(s) in a regression model. It essentially tells you how well your model fits the data, with values ranging from 0 to 1 (or 0% to 100%)

44
New cards

Adjusted R Squared

adjusts for the number of predictors in a regression model. It provides a more accurate measure of the model's goodness of fit, especially when dealing with multiple independent variables. Unlike R-squared, which always increases when more predictors are added, adjusted R-squared can decrease if the added predictors don't significantly improve the model's fit. 

45
New cards

Akaike Information Criterion (AIC)

a statistical measure used to evaluate the quality of statistical models. It helps in selecting the best model from a set of candidate models by balancing model fit and complexity. Lower AIC values generally indicate a better-fitting model.

46
New cards

Phi Coefficient

(φ) is a measure of association between two dichotomous variables (variables with only two categories). It's a type of correlation coefficient, specifically a special case of the Pearson correlation adapted for binary data. It quantifies the strength and direction of the relationship between these variables

47
New cards

Cramer’s V

a statistical measure used to assess the strength of association between two nominal categorical variables, particularly in contingency tables. It's derived from the chi-square statistic and ranges from 0 to 1, with 0 indicating no association and 1 indicating perfect association. It's particularly useful for larger contingency tables where the phi coefficient, which is limited to 2x2 tables, is not applicable

48
New cards

Null Hypothesis Formula

H0: μ1 = μ2 (The means of two populations are equal)

49
New cards

Alternative Hypothesis Formula

Hₐ: μ ≠ value (or μ > value, or μ < value), the alternative hypothesis, stating that there is an effect

50
New cards

Eta Squared


(η²) measures of effect size for simple ANOVA. represents the proportion of total variance explained by a factor. Useful when you want to understand the overall effect of a factor on the total variance in your data

51
New cards

Partial Eta-Squared

(ηp²) the proportion of variance explained by a factor relative to the variance explained by that factor and its associated error. Used for factorial ANOVA. isolates the effect of a specific factor by removing the variance explained by other factors and interactions in the model. More appropriate when examining the effect of a specific factor while controlling for the influence of other factors in a multi-factor ANOVA. 

52
New cards

Omnibus Test

(Ex: F test) a statistical test that examines whether there are any significant differences among the means of multiple groups, or if all groups have identical means. It's used as a preliminary step to determine if further investigation into specific differences between groups is warranted. In essence, it tests a global hypothesis about the overall relationship within the data.

53
New cards

Simple Linear Regression

model the relationship between one independent (predictor) variable and one dependent (response) variable, assuming a linear relationship. It aims to find the best-fitting straight line through a set of data points to predict the value of the dependent variable based on the independent variable. 

54
New cards

Multiple Linear Regression

method used to model the relationship between a dependent variable and two or more independent variables. It aims to find the best-fitting linear equation that predicts the value of the dependent variable based on the values of the independent variables. This technique is widely used in various fields to understand how multiple factors influence an outcome. 

55
New cards

(ŷ)

the predicted score for a person of a given type (NOT the observed score)

56
New cards

Residual

distance between the regression line and any one person’s actual y score (difference between observed score and predicted score) (yi - ŷ)

57
New cards

Simple Linear Regression Formula

Y = b₀ + b₁X, where: 

  • Y: is the dependent (or response) variable. 

  • X: is the independent (or predictor) variable. 

  • b₀: is the y-intercept (the value of Y when X is 0). 

  • b₁: is the slope of the line (the change in Y for every one unit change in X). 
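The coefficients b₀ and b₁ can be recovered by least squares. A sketch using NumPy; the data is a made-up perfect line so the coefficients are easy to check:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])  # exactly y = 1 + 2x

b1, b0 = np.polyfit(x, y, deg=1)  # polyfit returns [slope, intercept]
y_hat = b0 + b1 * x               # predicted scores
```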

58
New cards

Unstandardized Beta (reg coeff)

represents the change in the dependent variable for a one-unit change in the independent variable, while holding all other independent variables constant. It's expressed in the original units of the variables in the dataset. Ex: if the unstandardized beta for height is 5.2, each additional inch of height predicts a 5.2-pound increase in weight

59
New cards

Standardized Beta Weight (Reg coeff)

tells us the predicted increase (slope) in standard deviation units of the predictor and outcome. Ex: if the standardized regression coefficient for hours studied is 0.6, a one standard deviation increase in hours studied is associated with a 0.6 standard deviation increase in exam scores

60
New cards

Multiple Regression Equation

y = b₀ + b₁x₁ + b₂x₂ + ... + bₚxₚ

'y' is the predicted dependent variable,

'x₁' through 'xₚ' are the independent variables

'b₀' is the y-intercept

'b₁' through 'bₚ' are the regression coefficients for each independent variable

61
New cards

F test

statistical test that compares the variances of two or more samples to see if they are significantly different. It's commonly used in analysis of variance (ANOVA) and in testing for differences in variances between two populations

62
New cards

Most commonly used measure of effect size for a t-test

Cohen’s D

63
New cards

μ (mu)

mean of population

64
New cards

M

mean of sample

65
New cards

F formula

MSB/MSW (MSB is the mean square between groups and MSW is the mean square within groups)
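The partition can be computed by hand; a sketch with three invented groups whose sums of squares are easy to verify:

```python
import numpy as np

# Three invented groups; compute F = MSB / MSW by hand
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0])]

grand_mean = np.concatenate(groups).mean()
k = len(groups)
n_total = sum(len(g) for g in groups)

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)       # df_between = k - 1
ms_within  = ss_within / (n_total - k)  # df_within  = N - k
f_stat = ms_between / ms_within
print(f_stat)  # 27.0 for this data
```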

66
New cards

true

Pearson’s r can only be used for linear relationships

67
New cards

r

the statistic used for a correlation coefficient

68
New cards

Null Hypothesis in a one-sample z-test

H₀: x̅ = μ (equivalently, H₀: μ = μ₀)

69
New cards

Alternative Hypothesis in a one-sample z-test

H₁: x̅ ≠ μ (equivalently, H₁: μ ≠ μ₀)

70
New cards

Research hypothesis formula

H₁: X̄₁ < X̄₂

71
New cards

d

Symbol for Effect Size

72
New cards

What is the primary difference between Student's t-test and Welch's t-test?

Student's t-test assumes equal variances, while Welch's t-test does not

73
New cards

Independent sample t test is used when you are testing how many groups?

only two groups in total

74
New cards

The null hypothesis in an independent-samples t-test

H₀: μ₁ = μ₂

75
New cards

Effect Size of Chi-Square Test

Phi (φ) and Cramer's V

Small effect: φ ≈ 0.1, V ≈ 0.1

Medium effect: φ ≈ 0.3, V ≈ 0.3

Large effect: φ ≈ 0.5, V ≈ 0.5

76
New cards

Interpretation of correlation

  • Weak: |r| < 0.3. 

  • Moderate: 0.3 < |r| < 0.7. 

  • Strong: |r| > 0.7. 

77
New cards

If the correlation between two variables is 0.5, how much of the variance has NOT been accounted for?

Unaccounted variance = 1 − r² = 1 − 0.5² = 1 − 0.25 = 0.75 (75%)

78
New cards

What will occur if multiple t-tests are conducted instead of conducting an ANOVA to compare means among three groups?

The risk of Type I error increases.

79
New cards

Which of the following tests is used in ANOVA?

F test

80
New cards

A simple analysis of variance includes how many factor(s) or treatment variables in the analysis?

only one

81
New cards

When interpreting F(2, 27) = 8.80, p < 0.05, how many groups were examined?

There were 3 groups examined. The "2" in F(2, 27) represents the degrees of freedom between groups, calculated as the number of groups minus 1 (k − 1). Therefore k − 1 = 2, so k (the number of groups) equals 3.

82
New cards

When interpreting F(2, 27) = 8.80, p < 0.05, what is the within-groups df?

The first number, 2, represents the degrees of freedom between groups (df between = k − 1).

The second number, 27, represents the degrees of freedom within groups (df within = N − k).

83
New cards

Which of the following is a null hypothesis in ANOVA?

H₀: μ₁ = μ₂ = μ₃

84
New cards

What is the effect size used in a simple ANOVA analysis?

Eta squared

85
New cards

What is the primary purpose of post hoc tests in an ANOVA?

To identify which specific groups significantly differ from each other.

86
New cards

The Bonferroni correction for post hoc tests involves using a t-test with a significance level of 0.05.

False

87
New cards

The factorial ANOVA can be used to test :

main effects and interaction effects of the independent variables

88
New cards

To examine the impact of three different types of wall paint colors (blue, green, and yellow) on how often toddlers cry, which test is appropriate to use

one-way ANOVA

89
New cards

The null hypothesis refers to which of the following:

the population

90
New cards

The fact that the tails of a normal distribution never touch the horizontal axis relates to the ____ property.

asymptotic

91
New cards

Which of the following represents a research hypothesis?

H1: X1 < X2

92
New cards

The null hypothesis refers to which of the following?

the population

93
New cards

The fact that the tails of a normal distribution never touch the horizontal axis relates to the ____ property.

asymptotic

94
New cards

When we want to infer from a sample to the population, what must be assumed?

The population is normally distributed

95
New cards

The level of risk that you are willing to take that the results you find are not due to the treatment is expressed as which of the following?

significance level

96
New cards

If the obtained value is greater than the critical value, what should you do?

reject the null hypothesis

97
New cards

What Greek letter is associated with Type I error?

α (alpha)

98
New cards

What Greek letter is associated with Type II error?

β (beta)

99
New cards

Power

What does 1 − β represent?