Ch 11: Threats to internal validity/null effects

24 Terms

1. Maturation threat

A change in behavior that emerges more or less spontaneously/naturally over time.

  • Preventing these threats: Include a no-treatment comparison group.

2. History threat

A specific event (unrelated to the study) happens between the pretest and posttest and affects everyone in the group.

  • A comparison group can help control for this threat.

3. Regression threat

A threat to internal validity related to regression to the mean.

4. Regression to the mean

  • An extreme finding is likely to be closer to its own typical/mean level the next time it is measured, because the same combination of chance factors that made the finding extreme is not present the second time.

    • If performance is extreme at pretest, performance is likely to be less extreme at posttest.

    • Prevent with comparison group.
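Regression to the mean can be seen in a quick simulation (a minimal sketch with hypothetical numbers: scores are stable ability plus random luck, and the group selected for extreme pretest scores drifts back toward the overall mean at posttest).

```python
import random

random.seed(0)
N = 10_000

# Each person's observed score = stable true ability + random luck.
true_ability = [random.gauss(100, 10) for _ in range(N)]
pretest  = [t + random.gauss(0, 10) for t in true_ability]
posttest = [t + random.gauss(0, 10) for t in true_ability]

# Select the people with extreme (top 5%) pretest scores.
cutoff = sorted(pretest)[int(0.95 * N)]
extreme = [i for i in range(N) if pretest[i] >= cutoff]

pre_mean  = sum(pretest[i]  for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)

# The lucky chance factors that made pretest scores extreme are
# absent at posttest, so the group's mean falls back toward ~100.
print(pre_mean, post_mean)
```

No one's true ability changed; the apparent "decline" is purely the disappearance of pretest luck.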

5. Attrition threat

In a pretest/posttest, repeated-measures, or quasi-experimental study, a threat to internal validity that occurs when participants systematically drop out of the study before it ends.

  • Problematic when attrition is systematic / not random.

  • To prevent: compare those who dropped out with those who stayed, and remove the dropouts' scores from the analysis.
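Why systematic (not random) attrition is the problem can be sketched with hypothetical numbers: if the lowest scorers drop out before posttest, the group mean rises even though nobody improved.

```python
# Hypothetical pretest scores for one group of eight participants.
pretest = [55, 60, 65, 70, 75, 80, 85, 90]

def mean(xs):
    return sum(xs) / len(xs)

# Systematic attrition: everyone scoring below 70 drops out.
stayers = [x for x in pretest if x >= 70]

print(mean(pretest))   # 72.5 — mean of everyone at pretest
print(mean(stayers))   # 80.0 — "improvement" caused purely by dropout
```

Comparing dropouts (55, 60, 65) with stayers reveals the bias, which is why removing dropouts' pretest scores from the analysis restores a fair comparison.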

6. Testing threat

In a repeated-measures experiment or quasi-experiment, a kind of order effect in which scores change over time just because participants have taken the test more than once; includes practice effects and boredom effects.

  • Prevent by using a posttest-only design, alternate forms of the test at pretest and posttest, or a comparison group.

7. Instrumentation threat

Occurs when a measuring instrument changes over time.

  • Solutions: use a posttest-only design, calibrate the forms to be comparable, establish reliability and validity at both pretest and posttest, or counterbalance different forms across pretest and posttest.

8. Selection effect

In an independent-groups design, a threat that occurs when the two groups have systematically different kinds of participants in them.

9. Observer bias

Researcher expectations influence the interpretation of results.

10. Avoiding observer bias

  • Double-blind design: neither the participants nor the researchers who evaluate them know who is in the treatment group and who is in the comparison group.

  • Masked design: participants know which group they are in, but the observers do not.

11. Demand characteristics

Participants guess the study's hypotheses and change their behavior accordingly.

  • Prevent with a double-blind design.

  • Prevent with a masked design.

12. Placebo effect

People believe they will improve, and they do, only because of that belief.

13. Double-blind placebo control study

Neither the people administering the treatment nor the patients know whether the treatment is real or a placebo.

14. Null effect

A finding that the IV did not make a difference in the DV.

15. Reasons for null effects

  • Not enough between-group difference, caused by:

    • Weak manipulations

    • Insensitive measures

    • Ceiling effect

    • Floor effect

  • Manipulation check (a tool for diagnosing weak manipulations, rather than a cause).

16. Weak manipulations

Not enough of a difference between the two groups.

17. Insensitive measures

Researchers haven't used an operationalization of the DV with enough sensitivity.

18. Ceiling effect

All the scores are squeezed together at the high end.

19. Floor effect

All the scores cluster at the low end.
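How a ceiling effect produces a null effect can be shown with a minimal sketch (hypothetical numbers): a too-easy test caps scores at 10, squeezing both groups against the ceiling and hiding a real group difference.

```python
# Hypothetical true skill levels; the treatment really helps.
true_control   = [11, 12, 13, 14, 15]
true_treatment = [14, 15, 16, 17, 18]

MAX_SCORE = 10  # the test cannot measure above this (the ceiling)

obs_control   = [min(x, MAX_SCORE) for x in true_control]
obs_treatment = [min(x, MAX_SCORE) for x in true_treatment]

def mean(xs):
    return sum(xs) / len(xs)

print(mean(true_treatment) - mean(true_control))  # real difference: 3.0
print(mean(obs_treatment) - mean(obs_control))    # observed: 0.0
```

A floor effect is the mirror image: replace `min(x, MAX_SCORE)` with `max(x, MIN_SCORE)` and a too-hard test squeezes both groups against the low end instead.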

20. Manipulation check

A separate DV that experimenters include in a study, specifically to make sure the manipulation worked.

21. Noise

Unsystematic variability among members of a group; caused by situation noise, individual differences, or measurement error.

22. Situation noise

External distractions that add unsystematic variability within groups.

23. Measurement error

A human or instrument factor that can randomly inflate or deflate a person's true score on the dependent variable.

  • Ex. holding a ruler at an angle.

24. Power

An aspect of statistical validity; the likelihood that a study will show a statistically significant result when the independent variable really has an effect.
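Power can be estimated by simulation (a minimal sketch using only the standard library: a true effect of half a standard deviation, with an approximate two-sample z test; sample sizes and alpha are illustrative choices).

```python
import random
from statistics import mean, stdev

random.seed(1)

def estimate_power(n, effect, sims=2000):
    """Fraction of simulated studies that detect a true effect."""
    hits = 0
    for _ in range(sims):
        control   = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(effect, 1) for _ in range(n)]
        # Approximate two-sample z test (alpha = .05, two-sided).
        se = (stdev(control) ** 2 / n + stdev(treatment) ** 2 / n) ** 0.5
        z = (mean(treatment) - mean(control)) / se
        hits += abs(z) > 1.96
    return hits / sims

# Same true effect (d = 0.5); larger samples give more power.
power_small = estimate_power(n=10, effect=0.5)
power_large = estimate_power(n=50, effect=0.5)
print(power_small, power_large)
```

With n = 10 per group most studies of this effect return a null result despite the IV really having an effect, which is why low power is a leading explanation for null effects.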