stat methods - ch. 12,13,14,15


64 Terms

1

Analysis of Variance (ANOVA)

hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations)

2

Factor

In ANOVA, the variable (independent or quasi-independent) that designates the groups being compared.

3

Levels (levels of the factor)

The individual conditions or values that make up a factor

4

Two-factor design / factorial design

a study that combines two factors

5

Single-factor designs

studies that have only one independent variable (or only one quasi-independent variable)

6

single-factor, independent-measures design

a research design with one independent variable (one factor) in which a separate sample of participants is used for each treatment condition

7

Testwise alpha level

the risk of a Type I error, or alpha level, for an individual hypothesis test.

8

Experimentwise alpha level

when an experiment involves several different hypothesis tests, the total probability of a Type I error accumulated across all of the individual tests in the experiment. Typically, the experimentwise alpha level is substantially greater than the alpha level used for any one of the individual tests.

9

Between-treatments variance

measures how much difference exists between the treatment conditions

10

Treatment effect

the cause of the systematic differences between treatment conditions.

Ex. if treatments really do affect performance, then scores in one treatment should be systematically different from scores in another condition.

11

Within-treatments variance

measures the variability inside each treatment condition; provides a measure of how big the differences are when there is no treatment effect (that is, when H0 is true)

12

F-ratio

  • ratio of the variance between treatments to the variance within treatments

  • helps determine whether any treatment effects exist

  • F = variance between treatments ÷ variance within treatments = (differences including any treatment effects) ÷ (differences with no treatment effects)
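The F-ratio arithmetic can be sketched in plain Python. The scores, group sizes, and values below are hypothetical, chosen only to illustrate the SS → df → MS → F sequence:

```python
# Hypothetical scores for k = 3 treatment conditions, n = 4 per group
groups = [[4, 3, 6, 3], [2, 1, 0, 1], [9, 8, 7, 8]]

k = len(groups)                                   # number of treatments
N = sum(len(g) for g in groups)                   # total number of scores
grand_mean = sum(sum(g) for g in groups) / N

# SS between: n * (group mean - grand mean)^2, summed over groups
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# SS within: squared deviations of each score from its own group mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = k - 1
df_within = N - k
ms_between = ss_between / df_between              # variance between treatments
ms_within = ss_within / df_within                 # variance within treatments

F = ms_between / ms_within
print(round(F, 2))  # 44.4
```

A value this far above 1.00 indicates differences much larger than would be expected with no treatment effect.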

13

Error term

  • the denominator of the F-ratio for ANOVA

  • provides a measure of the variance caused by random and unsystematic differences.

  • when the treatment effect is zero (H0 is true), it measures the same sources of variance as the numerator of the F-ratio, so the value of the F-ratio is expected to be nearly equal to 1.00.

14

Mean square (MS)

  • in ANOVA, it is customary to use this term in place of the term variance.

  • mean of squared deviations

15

Distribution of F-ratios

all possible F values that can be obtained when the null hypothesis is true.

16

ANOVA summary table

  • summary of all ANOVA calculations

    • SS, df, MS, F-value, and p-value

17

Eta squared

  • η2

  • percentage of variance accounted for by the treatment effect

  • = SSbetween treatments ÷ SStotal
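The η² computation is a one-liner once the SS values are in hand. The SS values below are hypothetical, just to show the proportion interpretation:

```python
# Illustrative SS values (hypothetical, not from the chapters)
ss_between = 90.0
ss_within = 10.0
ss_total = ss_between + ss_within

# eta squared = proportion of total variability accounted for by treatment
eta_squared = ss_between / ss_total
print(eta_squared)  # 0.9 -> 90% of the variance is accounted for
```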

18

Post hoc tests / posttests

additional hypothesis tests done after an ANOVA to determine exactly which mean differences are significant and which aren’t

19

Pairwise comparisons

comparing individual treatments two at a time

20

Tukey’s HSD test

  • allows you to compute a single value that determines the minimum difference between treatment means that is necessary for significance

21

Name of value produced by Tukey’s HSD test:

Honestly significant difference or HSD

22

What can you conclude if the mean difference exceeds Tukey’s HSD?

that there is a significant difference between treatments

23

Scheffé test

uses an F-ratio to evaluate the significance of the difference between any two treatment conditions.

24

Numerator of the F-ratio in the Scheffé test:

an MS between treatments (MSbetween)

25

How is an MS between treatments calculated?

using only the two treatments you want to compare

26

What is the denominator of the F-ratio?

same MSwithin that was used for the overall ANOVA

27

What are similarities between ANOVA and t tests?

both use sample data to test hypotheses about population means

28

Differences between ANOVA and t tests:

  • t-tests are limited to situations in which there are only two treatments to compare

  • ANOVA can be used to compare two or more treatments

  • ANOVA provides researchers with much greater flexibility in designing experiments and interpreting results

29

What is the goal of the analysis done by ANOVA

to determine whether the mean differences observed among the samples provide enough evidence to conclude that there are mean differences among the corresponding populations

30

What happens to the experimentwise alpha level as the number of separate tests increases?

it increases
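The accumulation can be verified numerically. For c independent tests each run at a testwise alpha, the experimentwise alpha is 1 − (1 − α)^c (the example values below are illustrative):

```python
# Experimentwise alpha for c independent tests at per-test alpha = .05:
# P(at least one Type I error) = 1 - (1 - alpha)^c
alpha = 0.05
for c in (1, 3, 10):
    experimentwise = 1 - (1 - alpha) ** c
    print(c, round(experimentwise, 3))  # grows with c: .05, .143, .401
```

Ten tests at α = .05 already push the experimentwise risk of a Type I error to about 40%, which is why ANOVA (one overall test) is preferred over many separate t tests.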

31

A large value for the test statistic provides evidence that:

the sample mean differences (numerator) are larger than would be expected if there were no treatment effects (denominator)

32

Matrix

a set of numbers arranged in rows and columns so as to form a rectangular array

33

Cell

  • an individual element or value located at the intersection of a row and a column in a matrix.

  • each one represents a specific value identified by its position in the matrix

34

Main effect

mean differences among the levels of one factor

35

The mean differences between columns or rows describe:

the main effect for a two-factor study

36

Interaction

between two factors, it occurs whenever the mean differences between individual treatment conditions, or cells, are different from what would be predicted from the overall main effects of the factors
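One way to check for an interaction is to compare the simple effects of one factor across the levels of the other. The 2×2 cell means below are hypothetical:

```python
# Hypothetical 2x2 cell means (rows = levels of factor A, cols = factor B)
cells = [[10.0, 20.0],
         [30.0, 40.0]]  # B adds +10 at both levels of A

# Simple effect of factor B at each level of factor A
effect_b_at_a1 = cells[0][1] - cells[0][0]
effect_b_at_a2 = cells[1][1] - cells[1][0]

# An interaction exists when the simple effects differ
interaction = effect_b_at_a1 != effect_b_at_a2
print(interaction)  # False: B's effect is the same at both levels of A
```

Changing one cell mean (say, cells[1][1] = 60.0) would make the simple effects unequal, which is exactly the "different from what the main effects predict" condition.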

37

Simple main effects

the effect of one variable on one level of the other variable

38

Correlation

statistical technique that is used to measure and describe the relationship between two variables

39

Positive correlation

  • two variables tend to change in the same direction

    • as value of X variable increases/decreases from one individual to another, Y variable also tends to increase/decrease

40

Negative correlation

  • two variables tend to go in opposite directions

    • as X variable increases, Y variable decreases = inverse relationship

41

Direction of the relationship

sign of the correlation, positive or negative, describes the direction of the relationship

42

Envelope

line drawn around the data points in a scatter plot; it encloses the data and often helps you see the overall trend.

43

When an envelope is shaped roughly like a football:

the correlation is around 0.7

44

Envelopes fatter than a football indicate what?

correlations closer to 0

45

Narrow shaped envelopes indicate what?

correlations closer to 1.00

46

Pearson correlation

measures the degree and the direction of the linear relationship between two variables
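The Pearson correlation can be computed directly from SP and the two SS values, r = SP ÷ √(SSx·SSy). The paired scores below are hypothetical:

```python
from math import sqrt

# Hypothetical paired scores (illustrative only)
X = [1, 2, 3, 4, 5]
Y = [2, 4, 5, 4, 7]

n = len(X)
mx, my = sum(X) / n, sum(Y) / n

# SP measures covariability; SSx and SSy are the usual sums of squares
sp = sum((x - mx) * (y - my) for x, y in zip(X, Y))
ss_x = sum((x - mx) ** 2 for x in X)
ss_y = sum((y - my) ** 2 for y in Y)

r = sp / sqrt(ss_x * ss_y)
print(round(r, 3))  # 0.87: a strong positive linear relationship
```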

47

Linear relationship

how well the data points fit a straight line

48

Sum of products (SP)

measures the amount of covariability between two variables

49

The value for SP can be calculated with either a:

definitional formula or a computational formula

50

Definitional formula

SP = Σ(X − MX)(Y − MY)

51

Computational formula

SP = ΣXY − (ΣX)(ΣY) ÷ n
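The two standard SP formulas, definitional SP = Σ(X − MX)(Y − MY) and computational SP = ΣXY − (ΣX)(ΣY)/n, always give the same value, which this sketch with hypothetical data confirms:

```python
X = [1, 2, 3, 4]   # hypothetical data
Y = [3, 5, 4, 8]

n = len(X)
mx, my = sum(X) / n, sum(Y) / n

# Definitional formula: product of deviations from each mean
sp_def = sum((x - mx) * (y - my) for x, y in zip(X, Y))
# Computational formula: avoids computing the means' deviations
sp_comp = sum(x * y for x, y in zip(X, Y)) - sum(X) * sum(Y) / n

print(sp_def, sp_comp)  # 7.0 7.0
```

The computational version is usually easier by hand because it works with raw totals instead of deviation scores.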

52

Outliers

extreme data points

53

Restricted range

the observed data for a variable (or variables) are limited to a smaller portion of the variable's potential range

54

coefficient of determination

  • r2 measures the proportion of variability in one variable that can be determined from the relationship with the other variable

55

Correlation matrix

results from multiple correlations are most easily reported in this table

56

Spearman correlation

result when Pearson correlation formula is used with data from an ordinal scale (ranks)
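That definition can be sketched directly: convert each variable to ranks, then apply the Pearson formula to the ranks. The data and the no-ties ranking helper below are hypothetical:

```python
from math import sqrt

def ranks(values):
    # Rank from 1 (smallest); assumes no tied scores for this sketch
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

X = [10, 40, 20, 30]   # hypothetical scores
Y = [2, 8, 5, 3]

rx, ry = ranks(X), ranks(Y)    # [1, 4, 2, 3] and [1, 4, 3, 2]

# Pearson formula applied to the ranks = Spearman correlation
n = len(rx)
mx, my = sum(rx) / n, sum(ry) / n
sp = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
rs = sp / sqrt(sum((a - mx) ** 2 for a in rx) *
               sum((b - my) ** 2 for b in ry))
print(rs)  # 0.8
```

Real data with tied scores would need tied ranks averaged, which this minimal helper does not handle.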

57

Monotonic

a relationship that is consistently one-directional: as X increases, Y consistently increases (or consistently decreases)

58

Point-biserial correlation

used to measure relationship between two variables in situations in which one variable consists of regular, numerical scores, but the second variable has only two values.
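In practice the point-biserial correlation is obtained by coding the dichotomous variable 0/1 and applying the ordinary Pearson formula. The scores below are hypothetical:

```python
from math import sqrt

# Regular numerical scores (X) paired with a dichotomous variable (Y),
# coded 0/1 — hypothetical data for illustration
X = [3, 4, 5, 8, 9, 10]
Y = [0, 0, 0, 1, 1, 1]

n = len(X)
mx, my = sum(X) / n, sum(Y) / n

# Pearson formula applied to the 0/1-coded data
sp = sum((x - mx) * (y - my) for x, y in zip(X, Y))
r_pb = sp / sqrt(sum((x - mx) ** 2 for x in X) *
                 sum((y - my) ** 2 for y in Y))
print(round(r_pb, 3))
```

A strong positive value here just means the group coded 1 tends to have the higher scores; which group gets the 1 is arbitrary, so only the magnitude is meaningful.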

59

Dichotomous variable

variable with only two values

60

Phi-coefficient

the correlation obtained when both variables (X and Y) measured for each individual are dichotomous
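With two dichotomous variables the data reduce to a 2×2 frequency table, and the phi-coefficient has a closed form, φ = (ad − bc) ÷ √((a+b)(c+d)(a+c)(b+d)), equivalent to Pearson's r on 0/1-coded scores. The frequencies below are hypothetical:

```python
from math import sqrt

# Hypothetical 2x2 frequency table:
#            Y=0   Y=1
#   X=0       a     b
#   X=1       c     d
a, b, c, d = 10, 4, 3, 12

# Phi-coefficient from the cell frequencies
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 3))
```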
