Lecture 25: null effects in research designs



45 Terms

1

what are Abelson’s laws about null effects? (5)

  1. chance is lumpy: we aren’t good at judging what happens by chance because we look for patterns

  2. overconfidence abhors uncertainty: if you’re too confident, you may overlook uncertainty (biases)

  3. there is no free hunch: hunches are biases held by scientists, which can cause misinterpretation of the hypothesis

  4. you can’t see the dust if you don’t move the couch: we tend to look where it’s easiest instead of examining all the possible outcomes

  5. criticism is the mother of methodology: research methods need criticism
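
An aside on law 1 (“chance is lumpy”): a quick simulation sketch (hypothetical numbers, pure Python) shows that random coin flips routinely contain streaks that look like patterns.

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

random.seed(42)  # arbitrary seed, for reproducibility
flips = [random.choice("HT") for _ in range(100)]

# Streaks of 5+ identical flips are typical in 100 fair flips,
# yet readers of the sequence tend to see them as meaningful patterns.
print(longest_run(flips))
```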

2

define “interpreting standalone statistics”

claims are often presented with no supporting data

3

according to Abelson, what do people overestimate? (2)

  • systematic effects: influence that contributes to each observation in a consistent way → pattern, predictable

  • chance effect: influence that contributes by chance to each observation → no pattern, unpredictable

*systematic tends to be overestimated

4

people tend to overestimate [chance/systematic] effects

systematic

  • systematic effects: influence that contributes to each observation in a consistent way → pattern, predictable

  • chance effect: influence that contributes by chance to each observation → no pattern, unpredictable

5

define “systematic effects”

influence that contributes to each observation in a consistent way (pattern, predictable)

6

define “chance effects”

influence that contributes by chance to each observation (no pattern, unpredictable)
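
These two definitions can be sketched as a toy model (made-up numbers, not from the lecture): a systematic effect adds the same amount to every observation, while a chance effect adds unpredictable noise that averages out.

```python
import random
from statistics import mean

random.seed(1)  # for reproducibility

BASELINE = 50
SYSTEMATIC_EFFECT = 5  # contributes to every observation the same way

# chance effect: random noise, different for each observation
scores = [BASELINE + SYSTEMATIC_EFFECT + random.gauss(0, 2) for _ in range(1000)]

# The systematic part shows up in the average (a predictable pattern);
# the chance part averages out toward zero across observations.
print(round(mean(scores)))  # close to 55
```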

7

what’s used as a comparison standard?

control groups: they can reduce misleading statistical interpretations

8

define “null effect”

outcome that does not support rejecting the null hypothesis (no statistically significant effect in the study)

9

define “null hypothesis”

statement (H0) that the effect being studied does not exist

10

what’s the difference between a null effect and a null hypothesis? 

  • null effect: outcome doesn’t support rejecting the null hypothesis (no statistically significant difference)

  • null hypothesis: the effect studied doesn’t exist (what’s the probability that outcome x happens if the null hypothesis is true)

11

define “alternative hypothesis”

statement (H1) that the effect being studied does exist

12

what’s the difference between a null hypothesis and an alternative hypothesis?

  • null: the effect studied doesn’t exist

  • alternative: the effect studied exists

13

what does it mean when we say that the null hypothesis and the alternative hypothesis are mutually exclusive?

that both cannot be correct/true at the same time (it’s one or the other)

14

which one is correct/incorrect and explain why:

  1. if the null hypothesis is true, then outcome X is highly unlikely. outcome X occurred. therefore, the null hypothesis is highly unlikely to be true.

  2. if the null hypothesis is true, then outcome X cannot occur. outcome X occurred. therefore, the null hypothesis is false (rejected).

  1. correct: appropriately probabilistic (“highly unlikely”, “highly unlikely to be true”)

  2. almost correct: too absolute (“cannot occur”, “false/rejected”)

  • we cannot prove the null hypothesis (2), we can only provide evidence against it

  • data and hypotheses aren’t “all-or-none”, they are probabilistic

15

what are the types of null effects? (3)

  • outcome isn’t different from chance because there is no true evidence for the alternative hypothesis

  • outcome is real, but not statistically significant because there isn’t enough data or the measures aren’t sensitive enough

  • outcome reached significance level to reject H0, but the size of the impact was too small to be meaningful

*null effect: outcome doesn’t support rejecting the H0

16

how do we know that there is a publication bias regarding null effects?

we see more null effects in registered reports (reports whose methods are reviewed and accepted before the data are collected) than in standard, non-registered reports → null effects are underrepresented in published scientific papers

17

why do we care about null effects? (3)

  • know if a cheaper or shorter treatment works just as well (no difference between the conditions = they both work the same)

  • design a study to demonstrate that another article was wrong and that there is no effect

  • be prepared to observe a non-significant finding in any study (H0 is a possible outcome)

18

what are the criteria to accept the null hypothesis? (3)

  • falsifiable: must be possible to reject the null hypothesis

  • results must be consistent with the null hypothesis

  • the experiment must not have tried only to confirm H0; it must have genuinely tried to find an effect

19

when you report the results of a study, what should you consider as potential reasons for null results? (3)

  • were the two groups equivalent at baseline?

  • what’s the minimum detectable effect size? is it small enough to detect meaningful impacts? 

  • what is the difference between the treatment and control group? was the contrast strong enough?

20

how could the IV cause null effects? (4)

  • not enough between-subjects differences 

  • within-subjects variability (individual differences) hid group differences 

  • no actual difference 

  • null effect is hard to find 

21

how could the DV cause null effects?

ceiling and floor effects → if the DV is not sensitive enough to the IV, we might not see differences that are there
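
The ceiling effect can be sketched with made-up scores: when the measure tops out, a real group difference gets compressed and may no longer be detectable.

```python
from statistics import mean

# Hypothetical "true" scores on an unbounded scale
control   = [88, 92, 95, 99, 103]
treatment = [98, 102, 105, 109, 113]  # true effect: +10 points

CEILING = 100  # the test cannot score above 100

observed_control   = [min(s, CEILING) for s in control]
observed_treatment = [min(s, CEILING) for s in treatment]

true_diff     = mean(treatment) - mean(control)
observed_diff = mean(observed_treatment) - mean(observed_control)
print(round(true_diff, 1), round(observed_diff, 1))  # → 10.0 4.8
```

Half the true difference disappears because most treatment scores pile up at the ceiling.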

22

how can you reduce the floor or ceiling effect?

by using more precise measurements and doing manipulation checks: did the manipulation work as expected?

23

define “manipulation check”

additional DV included to make sure that the IV worked

24

what are the causes of null effects in a within-subjects design? (3)

  • measurement error: was the DV well measured, equipment problems

  • individual differences

  • situation noise: external distraction that could cause variability within groups

25

how could you reduce measurement errors? (2)

  • use reliable and precise measurements

  • measure multiple times 
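
Why measuring multiple times helps (a simulated sketch, assuming Gaussian measurement error): averaging n noisy measurements shrinks the error by roughly √n.

```python
import random
from statistics import mean, stdev

random.seed(0)  # for reproducibility
TRUE_VALUE = 10.0

def study_outcome(n_measurements):
    """One study's result: the average of n noisy measurements."""
    return mean(random.gauss(TRUE_VALUE, 3) for _ in range(n_measurements))

single   = [study_outcome(1)  for _ in range(500)]   # measure once
averaged = [study_outcome(25) for _ in range(500)]   # measure 25 times

# Averaging 25 measurements cuts the spread by about sqrt(25) = 5x,
# so a real effect is less likely to drown in measurement error.
print(round(stdev(single), 1), round(stdev(averaged), 1))
```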

26

how could you reduce individual differences? (2)

  • change the design to a matched-group design (2 participants with same individual differences)

  • add more participants

27

define “situational noise”

external distractions that could cause variability within groups that obscures within-subjects or between-subjects differences

28

how can you reduce situational noise?

controlling the environment

29

how can sampled participants cause null effects? (5)

  • are they representative of the population 

  • was variability under/overestimated 

  • were they all naive and unbiased 

  • did you recruit enough participants, were there carryover effects

  • ethical issues 

30

how can stimulus materials/equipment cause null effects? (5)

  • are they all familiar or new for all participants

  • are they too hard or easy

  • are they representative of the task

  • are they standardized across studies and responses

  • was the equipment the same for all participants

*you can control these after you’ve sampled

31

how can experimenters cause null effects? (4)

  • are they adequately trained for the task

  • are they objective/passive in the task

  • are they treating the participants the same

  • are fatigue or practice effects occurring in the experimenter?

32

how can procedures cause null effects? (3)

  • were the procedures reproduced the same way across participants

  • were the procedures standardized relative to other studies

  • did participants have time to practice new procedures?

33

how can constraints on study designs cause null effects? (3)

  • limited sample sizes

  • issues with data collection process

  • issues with the analysis methods

34

you obtained null results, what should you do next? (4)

  • re-run the study with improved design details

  • re-measure the DV to reduce variability

  • constrain analyses to address portions of the study that don’t have flaws

  • consider publishing the null effects as they are

35

what’s an advantage and a disadvantage of re-running the study with improved design details?

  • advantage: more likely to be a strong test of H0 (supports more strongly what you already found)

  • disadvantage: time consuming

36

what’s an advantage and a disadvantage of constraining your analyses to address parts of the study that don’t have flaws?

  • advantage: data is already available

  • disadvantage: hard to interpret findings from partial report

37

what does having multiple outcomes from re-running a study allow you to do?

compute mean and variance metrics across the outcomes: you know what effect size to expect

38

when should you conduct the original study (2) and when should you conduct the improved study (2)?

original

  • believe that there are no design flaws

  • seeking confirmation of an outcome

improved

  • can improve on design flaw

  • can extend to another sample, materials or tasks (whatever is changed may account for observed differences)

39

what’s the difference between re-running a study and re-measuring the DV?

  • re-running: redo the experiment

  • re-measuring: obtaining another point of view

40

what’s important to know/understand when you are constraining your analyses to focus on parts without flaws?

the null hypothesis for all conditions (because it might differ and you won’t be able to compare them all)

41

true or false: you should not publish your study if there is strong evidence of null effects

false

42

what’s the difference between a classic analysis and Bayesian analysis?

  • classic: p < 0.05 = reject H0; p > 0.05 = retain H0

  • Bayesian: evidence that supports H0 VS doesn’t support H0

43

what’s “BF” and “BF10”?

  • BF: Bayes factor

  • BF10: strength of the H1 relative to H0

44

how do you compute the Bayes factor (BF)?

BF = Prob (Data | H1) ÷ Prob (Data | H0)

(probability of the data if H1 is true ÷ probability of the data if H0 is true)
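
A worked example of the formula (made-up data; a full Bayes factor averages the likelihood over a prior on each hypothesis’s parameters, but with two point hypotheses, as here, it reduces to a likelihood ratio): H0 says a coin is fair (p = 0.5), H1 says it is biased (p = 0.75), and we observe 8 heads in 10 flips.

```python
from math import comb

n, k = 10, 8  # made-up data: 8 heads in 10 flips

def likelihood(p):
    """Prob(Data | heads-probability is p), from the binomial formula."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

bf10 = likelihood(0.75) / likelihood(0.5)  # Prob(Data | H1) ÷ Prob(Data | H0)
print(round(bf10, 2))  # → 6.41: leans toward H1, well short of 30
```

Note that the comb(n, k) term cancels in the ratio, so only the parts that depend on p matter.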

45

if your BF10 is close to 1/30, then you have strong evidence for the [H0/H1]. if your BF10 is close to 30, then you have strong evidence for the [H0/H1]

  • BF10 = 1/30: evidence for H0

  • BF10 = 30: evidence for H1

