Chapter 8 - Inferential Statistics, Hypothesis Testing and One Sample t-test


31 Terms

1
New cards

Middle Values in Normal Distribution

  • 95%

  • high probability values

  • indicate that the treatment has no effect

    • same as pre-treatment expected levels

2
New cards

Extreme Values in Normal Distribution

  • 5%

  • scores that are very unlikely to be obtained from the original population

  • provide evidence of a treatment effect

3
New cards

Hypothesis Test

  • a statistical method that uses sample data to evaluate a hypothesis about a population

    • used in research to evaluate results

4
New cards

Null Hypothesis conclusions

  • we either:

    • “reject the null hypothesis”

    • “fail to reject the null hypothesis”

5
New cards

H0

  • null hypothesis symbol

6
New cards

Null Hypothesis

  • the mean will not change because the treatment has no effect

7
New cards

H1

  • alternative hypothesis

8
New cards

Alternative Hypothesis

  • the mean will change because the treatment has an effect

9
New cards

Alpha Level

  • a probability value that is used to define the concept of “very unlikely”

    • it determines the threshold for “different enough”

10
New cards

α = .05

  • 5% chance that the sample mean would be this extreme if the null were true

    • 95% of scores are likely values if null is true (because there is no effect)

    • 5% of scores are unlikely values if null is true (because there is an effect so scores will be more extreme after treatment)

      • so we are never 100% sure of an effect; it’s always possible that the sample was just an outlier in the extreme 5%
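The cutoff behind α = .05 can be checked numerically. A minimal sketch using only the standard library (the two-tailed split is from the cards; the code itself is illustrative):

```python
from statistics import NormalDist

# Two-tailed test at alpha = .05: 2.5% in each tail,
# so the upper cutoff is the z-score with 97.5% of the distribution below it
cutoff = NormalDist().inv_cdf(0.975)
print(round(cutoff, 2))  # 1.96; sample means beyond +/-1.96 fall in the extreme 5%
```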

11
New cards

Errors in Hypothesis Testing

  • there is a chance that we reject the null hypothesis when we shouldn’t have, or fail to reject it when we should have

    • type I errors and type II errors

12
New cards

Type I Errors

  • reject the null hypothesis when it was actually true

    • reporting an effect that actually isn’t there

    • “false alarm”

      • maybe the sample just happened to be already in the extremes without the treatment — makes it look like there was an effect but there wasn’t

  • want to keep risk of this error low

    • the Type I error rate equals the alpha level, so with α = .05 the error rate is .05

13
New cards

Type II Errors

    • fail to reject a null hypothesis that is actually false

    • your data suggests no effect but there actually is one

    • “a miss”

    • your sample wasn’t in the critical region even though it should have been

    • denoted by the beta symbol: β

    • typically happens because your effect was too small (it moved the mean a little bit but not enough to get it into the critical region)

      • not enough power — small sample size, confounding variables, etc.

  • we can’t determine the exact probability of this type of error

    • people are less concerned about this type

14
New cards

Statistically Significant

  • the result is very unlikely to have occurred if the null hypothesis was true (if there was no effect); surpassed the threshold of “different enough”

15
New cards

Bidirectional (two-tailed) hypothesis test

  • makes a prediction without indicating positive or negative

16
New cards

Directional (one-tailed) hypothesis testing

  • predict the direction of your effect — then you can ONLY test that one direction

17
New cards

Directional (one-tailed) hypothesis testing — problems

  • your prediction could be wrong

    • you might predict a negative effect but it turns out to be positive — you can’t test the positive side though so you would not see an effect

  • easier to reach the critical region so there is more room for Type I errors

    • you need to strongly justify your use

18
New cards

When do we use t-scores instead of z-scores

  • when we don’t know the population standard deviation

19
New cards

Estimated Standard Error

  • an estimate of the real standard error when the population standard deviation is unknown
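A minimal sketch of the estimate (the scores and the function name are made up for illustration): it is the sample standard deviation, computed with the n − 1 denominator, divided by √n.

```python
import math

def estimated_standard_error(scores):
    """sM = s / sqrt(n), where s uses the n - 1 denominator."""
    n = len(scores)
    mean = sum(scores) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    return s / math.sqrt(n)

print(round(estimated_standard_error([2, 4, 6, 8]), 3))  # 1.291
```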

20
New cards

Degrees of Freedom (df)

  • the number of scores in a sample that are independent and free to vary

  • n-1

    • how many pieces of information do you need to find the mean

    • only need to know 2/3 (if sample is 3) because the last value has to be a specific number to equal the mean

      • the final score is not free to vary (dependent on the other scores)
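The “final score is not free to vary” idea can be shown directly; the numbers below are hypothetical:

```python
n = 3
mean = 10
free_scores = [8, 12]  # the first n - 1 scores can be anything

# the final score is forced: all three scores must total n * mean
last_score = n * mean - sum(free_scores)
print(last_score)  # 10, so only n - 1 = 2 scores were free to vary
```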

21
New cards

t-statistic

  • comparing our sample mean to the value stated by the null hypothesis to see if they are different enough to conclude there is an effect

22
New cards

3 types of t-tests

  • one sample t-test

  • independent samples t-test

  • paired samples t-test

23
New cards

One sample t-test

  • comparing 1 sample mean to 1 known population mean (but not a known population standard deviation)

    • unique test but used all the time in psychology
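A minimal sketch of the computation, assuming hypothetical scores and a hypothetical population mean of 35 (in practice a library routine such as scipy.stats.ttest_1samp does this):

```python
import math

def one_sample_t(scores, pop_mean):
    """t = (M - mu) / (s / sqrt(n)): sample mean vs. known population mean."""
    n = len(scores)
    m = sum(scores) / n
    s = math.sqrt(sum((x - m) ** 2 for x in scores) / (n - 1))
    return (m - pop_mean) / (s / math.sqrt(n))

scores = [34, 39, 41, 38, 37, 35, 40, 36]  # hypothetical post-treatment scores
print(round(one_sample_t(scores, 35), 2))  # 2.89, compared against a critical t with df = 7
```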

24
New cards

Independent samples t-test

  • comparing 2 means from separate groups

    • comparing 2 conditions (different people in each condition)

25
New cards

Paired samples t-test

  • comparing 2 means from the same people

    • comparing 2 conditions (same people in both conditions)

26
New cards

One Sample t-test — Occurs when…

  • you know a population mean and want to compare a mean to it

  • you have a specific number you want to compare your sample mean to

27
New cards

Effect Size

  • tells you magnitude of your effect and is not dependent on the sample size

28
New cards

d = 0.2

  • small effect

29
New cards

d = 0.5

  • medium effect

30
New cards

d = 0.8

  • large effect
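The benchmarks above are Cohen’s d. As a sketch with hypothetical data, d for a one-sample design divides the mean difference by the sample standard deviation, so it is in standard-deviation units rather than depending on n:

```python
import math

def cohens_d(scores, pop_mean):
    """d = (M - mu) / s: the mean difference in standard-deviation units."""
    n = len(scores)
    m = sum(scores) / n
    s = math.sqrt(sum((x - m) ** 2 for x in scores) / (n - 1))
    return (m - pop_mean) / s

print(round(cohens_d([34, 39, 41, 38, 37, 35, 40, 36], 35), 2))  # 1.02, a large effect
```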

31
New cards

What boosts the likelihood of a significant effect

  • lower error value (lower estimated standard error)

    • because we divide by estimated standard error, a larger error value will produce a smaller t-statistic

    • larger standard error → smaller t-statistic (not good!)

  • larger sample size

    • because we divide by √n to get the estimated standard error, a larger n value will produce a smaller standard error

    • as sample size increases, standard error decreases, and the likelihood of rejecting the null (and finding a significant effect) increases
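The last point can be seen by holding the sample standard deviation fixed (the value 10 is arbitrary) and varying n:

```python
import math

s = 10  # hypothetical sample standard deviation, held constant
for n in (4, 25, 100):
    # sM = s / sqrt(n): the estimated standard error shrinks as n grows,
    # which makes the t-statistic larger for the same mean difference
    print(n, s / math.sqrt(n))
```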