PSYCHOLOGICAL STATISTICS


1

Analysis of Variance

what does ANOVA stand for?

2

Analysis of Variance (ANOVA)

a test used to determine differences between research results from three or more unrelated samples or groups.

3

Analysis of Variance (ANOVA)

serves the same purpose as the t tests: it tests for differences in group means

4

Analysis of Variance (ANOVA)

more flexible in that it can handle any number of groups, unlike t tests, which are limited to two groups (independent samples) or two time points (dependent samples).

5

Systematic Variance

best understood as the variation arising from the differences between groups produced by the independent variable(s)

6

Unsystematic Variance

variability within individuals and/or groups of individuals

7

Unsystematic Variance

essentially random; some individuals change in one direction, others in an opposite direction, and some do not change at all

8

Random Error

a chance difference between the observed and true values of something.

9

ANOVA

all about looking at the different sources of variability (i.e. the reasons that scores differ from one another) in a dataset.

10

Grouping Variable

the predictor or, in experimental terms, the independent variable; it is made up of k groups, with k being any whole number 2 or greater.

11

Outcome Variable

the variable on which people differ, and we try to explain or account for those differences based on group membership.

12

ANOVA

it requires two or more groups to work and is usually conducted with three or more.

13

Individual Group Means

the means of the groups in ANOVA, usually represented with subscripts

14

Grand Mean

the single mean representing the average of all participants across all groups, represented with MG.

15

Individual Group Means and Overall Grand Mean

how we calculate our sums of squares

16

Sums of Squares

used to calculate the sources of variability
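
Using the deck's notation (Mj for an individual group mean, MG for the grand mean, nj for a group's sample size), the standard one-way ANOVA formulas are: SSB = Σ nj(Mj − MG)² across the k groups; SSW = Σ (X − Mj)² within each group, added across groups; and SST = Σ (X − MG)² across all scores.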

17

Between-Group Variation

refers to the differences between the groups; for example, in sampling, the deviation between different samples drawn from different locations of a consignment.

18

Within-Group Variation

refers to variation caused by differences within individual groups (or levels); in other words, not all the values within each group are the same.

19

Total Sums of Squares

An important feature of the sums of squares in ANOVA is that they all fit together. We could work through the algebra to demonstrate that if we added together the formulas for SSB and SSW, we would end up with the formula for this
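
In symbols: SST = SSB + SSW, and the degrees of freedom partition the same way, dfT = dfB + dfW.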

20

Source

the first column of the ANOVA table; it indicates which of our sources of variability we are using: between groups (B), within groups (W), or total (T).

21

SS

the second column in the ANOVA table; it contains our values for the sum of squared deviations, also known as the sum of squares

22

Sum of Squared Deviations

other term for sum of squares

23

degrees of freedom

df meaning

24

different

There is a ____ df for each group

25

N in df

refers to the overall sample size, not a specific group sample size
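
For a one-way ANOVA with k groups and N total participants: dfB = k − 1, dfW = N − k, and dfT = N − 1.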

26

Mean Squared Deviation

MS stands for

27

Mean Square

another way to say variability and is calculated by dividing the sum of squares by its corresponding degrees of freedom.

28

F

the last column in the ANOVA table; our test statistic for ANOVA
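
In symbols: MSB = SSB / dfB, MSW = SSW / dfW, and F = MSB / MSW.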

29

F statistic

compared to a critical value to see whether we can reject or fail to reject a null hypothesis.

30

Type I Error

a false positive

31

Type I Error

the chance of committing this error is equal to our significance level, α.

32

Type I Error = significance level

This is true if we are only running a single analysis (such as a t test with only two groups) on a single data set

33

Type I Error

increases when we start running multiple analyses on the same dataset

34

Increased Type I Error rate

raises the probability that we are capitalizing on random chance and rejecting a null hypothesis when we should not.

35

ANOVA

keeps our error rate at the α we set

36

Null Hypothesis

still the idea of “no difference” in our data. Because we have multiple group means, we simply list them out as equal to each other
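
For example, with k = 3 groups the null hypothesis is written H0: μ1 = μ2 = μ3.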

37

At least one mean is different.

alternative hypothesis for ANOVA

38

none

mathematical statement of the alternative hypothesis in ANOVA

39

alternative hypothesis in ANOVA

there is no directional hypothesis

40

Between

numerator df in the anova table

41

Within

denominator df in the anova table

42

Effect Size

In ANOVA, it is the ratio of the between-groups sum of squares to the total sum of squares (SSB / SST)

43

eta-squared

The effect size η² is called _____

44

effect size

represents variance explained

45

.01

small effect size

46

.09

medium effect size

47

post hoc test

used after ANOVA to find which means are different

48

Reject Ho

Fobt is larger than Fcrit

49

Fail to reject Ho

Fobt is smaller than Fcrit
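
The decision rule on these two cards can be sketched in a few lines of Python; this is an illustration only, assuming SciPy is available, with hypothetical score lists invented for the example.

```python
# Illustration only: hypothetical scores for three groups.
from scipy import stats

group1 = [4, 5, 6, 5, 7]
group2 = [8, 9, 7, 8, 10]
group3 = [6, 6, 5, 7, 6]

# F_obt and its p-value from a one-way ANOVA
f_obt, p_value = stats.f_oneway(group1, group2, group3)

k = 3                                   # number of groups
n_total = len(group1) + len(group2) + len(group3)
df_between = k - 1                      # numerator df
df_within = n_total - k                 # denominator df

alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, df_between, df_within)   # critical value

# Reject H0 when F_obt is larger than F_crit (equivalently, when p < alpha).
decision = "reject H0" if f_obt > f_crit else "fail to reject H0"
print(round(f_obt, 2), round(f_crit, 2), round(p_value, 4), decision)
```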

50

post hoc test

used only after we find a statistically significant result and need to determine where our differences truly came from.

51

after the event

translation of the Latin term “post hoc”

52

Bonferroni test

perhaps the simplest post hoc analysis; a series of t tests performed on each pair of groups.

53

Bonferroni correction

To avoid the inflation of Type I error rates, it divides our significance level α by the number of comparisons we are making so that when they are all run, they sum back up to our original Type I error rate.

Once we have our new significance level, we simply run independent samples t tests to look for differences between our pairs of groups.
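
For example, comparing three groups pairwise requires three t tests, so an overall α of .05 becomes .05 / 3 ≈ .0167 for each test.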

54

Tukey’s Honestly Significant Difference

a popular post hoc analysis that, like Bonferroni’s, makes adjustments based on the number of comparisons; however, it makes adjustments to the test statistic when running the comparisons of two groups.

55

Tukey’s Honestly Significant Difference

gives us an estimate of the difference between the groups and a confidence interval for the estimate.

56

Tukey’s Honestly Significant Difference

a confidence interval containing 0.00 means the groups are not different
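
As an illustration, one common way to run Tukey's HSD is statsmodels' pairwise_tukeyhsd; the scores and group labels below are hypothetical values invented for this sketch.

```python
# Illustration only: hypothetical scores and group labels.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([4, 5, 6, 5, 7,     # group A
                   8, 9, 7, 8, 10,    # group B
                   6, 6, 5, 7, 6],    # group C
                  dtype=float)
groups = np.array(["A"] * 5 + ["B"] * 5 + ["C"] * 5)

result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)

# Each row shows the estimated mean difference for one pair of groups and its
# confidence interval; an interval containing 0.00 means "not different".
print(result.summary())
```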

57

Scheffe Test

adjusts the test statistic for how many comparisons are made, but it does so in a slightly different way

58

Scheffe Test

The result is a test that is “conservative,” which means that it is less likely to commit a Type I error, but this comes at the cost of less power to detect effects.

59

no difference

post hoc test result (confidence interval) contains zero

60

with difference

post hoc test result (confidence interval) does not contain zero

61

Factorial ANOVA

uses multiple grouping variables, not just one, to look for group mean differences.

62

Factorial ANOVA

there is no limit to the number of grouping variables, but it becomes very difficult to find and interpret significant results with many factors, so usually they are limited to two or three grouping variables with only a small number of groups in each.

63

Repeated Measures ANOVA

an extension of a related samples t test, but in this case we are measuring each person three or more times to look for a change.

64

Repeated Measures ANOVA

can be combined with factorial ANOVA into mixed designs to test very specific and valuable questions

65

Correlation

a statistical measure that expresses the extent to which two variables are linearly related

66

Correlation

the two correlated variables change together at a constant rate

67

Correlation

a common tool for describing simple relationships without making a statement about cause and effect

68

correlation coefficient

unit-free measure used to describe correlations

69

correlation coefficient

ranges from -1 to +1, denoted by r. Statistical significance is indicated with a p-value

70

Form, Direction, Magnitude

three characteristics of correlation

71

Form

the shape of the relationship in a scatter plot; a scatter plot is the only way to assess it

72

Linear Relationship

a statistical term used to describe a straight-line relationship between two variables.

73

Linear Relationship

the form that will always be assumed when calculating correlations.

74

Curvilinear Relationship

a type of relationship between two variables where as one variable increases, so does the other variable, but only up to a certain point, after which, as one variable continues to increase, the other decreases.

75

Curvilinear Relationship

A form in which a line through the middle of the points in a scatter plot will be curved rather than straight.

76

Curvilinear Relationship

This is important to keep in mind, because the math behind our calculations of correlation coefficients will only ever produce a straight line—we cannot create a curved line with the techniques used in correlations.

77

No Relationship

indicates that there is no relationship between the two variables.

78

No Relationship

This form shows no consistency in relationship

79

Direction

tells whether the variables change in the same way at the same time or in opposite ways at the same time.

80

Positive Relationship

variables X and Y change in the same direction: as X goes up, Y goes up, and as X goes down, Y also goes down and the slope of the line moves from bottom left to top right.

81

Negative Relationship

variables X and Y change together in opposite directions: as X goes up, Y goes down, and vice versa, and the slope of the line moves from top left to bottom right.

82

No Relationship

represented by the number 0 as its correlation coefficient, and its line has no slope, which means that it is flat

83

Magnitude

the number being calculated as the correlation coefficient. It shows how strong or how consistent the relationship between the variables is.

84

greater magnitude

higher numbers mean ____

85

stronger relationship

higher numbers mean greater magnitudes, which means a ______

86

magnitude

when judging strength, the only thing that matters is the magnitude, or absolute value, of the correlation coefficient

87

very weak correlation

0-0.19

88

weak correlation

0.2-0.39

89

moderate correlation

0.4-0.59

90

strong correlation

0.6-0.79

91

very strong correlation

0.8-1.0

92

Pearson’s r

the most popular correlation coefficient for assessing linear relationships, which serves as both a descriptive statistic (like M) and a test statistic (like t).

93

Pearson’s r

It is descriptive because it describes what is happening in the scatter plot; r will have both a sign (+/−) for the direction and a number (0 to 1 in absolute value) for the magnitude

94

Pearson’s r

The coefficient r also works as a test statistic because the magnitude of r corresponds directly to a t value at the specific degrees of freedom, which can then be compared to a critical value.
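
The standard conversion is t = r√(N − 2) / √(1 − r²), evaluated at df = N − 2.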

95

test statistic

the coefficient r also works as a ______

96

Covariance

a measure of the relationship between two random variables and the extent to which they change together

97

formula for r

the covariance divided by the product of the standard deviations of X and Y
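
A minimal sketch of this formula in Python, assuming NumPy and SciPy are available; the paired scores x and y are hypothetical values chosen only for illustration.

```python
# Illustration only: hypothetical paired scores.
import numpy as np
from scipy import stats

x = np.array([2, 4, 5, 7, 9], dtype=float)
y = np.array([1, 3, 6, 8, 8], dtype=float)

cov_xy = np.cov(x, y, ddof=1)[0, 1]                    # sample covariance
r_manual = cov_xy / (x.std(ddof=1) * y.std(ddof=1))    # cov / (sX * sY)

r_scipy, p_value = stats.pearsonr(x, y)                # same r, plus a p-value
print(round(r_manual, 3), round(r_scipy, 3), round(p_value, 4))
```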

98

rho

our population parameter for the correlation, which we estimate with r, just as the population mean μ is estimated by M for means.

99

N-2

df for correlations

100

one-tailed test

used when expecting a relationship in one specific direction only (for example, only a positive relationship)
