RES METH EXAM 2 (excluding 12 threats to internal validity)

45 Terms

1

bivariate correlation

An association that involves exactly two variables.

2

Association claim

Describes the relationship found between two measured variables.

3

Correlational studies

A study is correlational if it has two measured variables. Interrogate it with the four big validities:

-Construct validity → how well was each variable measured? Does it have good reliability? Is it measuring what it intends to measure? Is there evidence for face validity? For concurrent validity (how well it agrees with an established “gold standard”)? Convergent validity (how well it correlates with other measures of similar constructs) vs. discriminant validity (whether the measure is distinct from measures of different constructs)?

-External validity → to whom can the association be generalized?

-Internal validity → can we make a causal inference from the association?

-Statistical validity → how well does the data support the conclusion? Point estimate: the value that results from your analysis.

4

How strong is the relationship?

effect size r - describes the strength of the relationship

  • the stronger the effect, the closer r is to 1 or -1

  • can indicate the importance of a result

  • with all else equal, larger effect sizes are more important

  • small effect sizes can still be important and can compound over many observations

  • can be aggregated over many situations and many people.
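
Not part of the original card: a minimal Python sketch of how an effect size r could be computed for two measured variables; the variable names and values are invented for illustration.

```python
import numpy as np

# Hypothetical measured variables: hours of sleep and exam score for 8 people
sleep = np.array([5, 6, 6, 7, 7, 8, 8, 9], dtype=float)
score = np.array([62, 65, 70, 71, 74, 78, 80, 85], dtype=float)

# Pearson r: the effect size for a bivariate (linear) association
r = np.corrcoef(sleep, score)[0, 1]
print(f"r = {r:.2f}")   # closer to 1 or -1 = stronger association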

5

How precise is the estimate?

Confidence intervals

  • designed to include the true population value a high percentage of the time (usually 95%)

  • If it does not include 0, the relationship is statistically significant

  • If it does include 0, the relationship is not significant

  • SMALL SAMPLES = WIDER CIs (less precise)

  • LARGER SAMPLES = narrower CIs (more precise)
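
A small simulation, not from the cards, illustrating the last two bullets: drawing from the same (arbitrary) population, a small sample gives a wide 95% CI and a large sample gives a narrow one.

```python
import numpy as np

rng = np.random.default_rng(0)

def ci_95(sample):
    """Approximate 95% CI for a mean: estimate +/- 1.96 standard errors."""
    mean = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(len(sample))
    return mean - 1.96 * se, mean + 1.96 * se

# Arbitrary "true" population: mean 100, SD 15
for n in (20, 2000):                      # small vs. large sample
    sample = rng.normal(100, 15, size=n)
    low, high = ci_95(sample)
    print(f"n = {n:4d}: 95% CI width = {high - low:.1f}")
```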

6

Has it been replicated?

If a result can be replicated we can be more confident about the association.

7

Could outliers be affecting the association?

Outlier

  • an extreme score; a single case that stands out from the rest of the data

  • can make correlations appear stronger/weaker

  • outliers can exert disproportionate influence

  • in bivariate correlation → outliers are mainly problematic when they have extreme values on both variables

  • outliers are more problematic in small sample sizes because they have more influence

8

Is there restriction of range?

Restriction of Range

  • when there is not a full range of scores on one of the variables in the association

  • can make a correlation appear weaker than it really is

  • ex: SAT scores - selective colleges mostly see scores of 1200-1600, a restricted range

  • only using a partial range underestimates true correlation

  • similar to floor/ceiling effects

  • we must ask about restriction of range when correlations appear weaker than expected

  • how do researchers correct for this?

    • obtain the full range of scores, then compute the correlation

    • correction for restriction of range → a statistical formula that adjusts for the underestimate (see the sketch below)
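
A sketch, not from the cards, of one common statistical correction for restriction of range (often called Thorndike's case 2 formula); it adjusts the observed r using the ratio of the unrestricted to the restricted standard deviation of the range-restricted variable. All numbers here are made up.

```python
import math

def correct_restricted_r(r_obs, sd_unrestricted, sd_restricted):
    """Estimate the full-range correlation from a range-restricted sample."""
    k = sd_unrestricted / sd_restricted
    return (r_obs * k) / math.sqrt(1 - r_obs**2 + (r_obs**2) * k**2)

# Hypothetical: an SAT-GPA correlation looks weak in a restricted sample of admits
print(correct_restricted_r(r_obs=0.25, sd_unrestricted=200, sd_restricted=90))
# ~0.50 -- the corrected estimate is noticeably larger than the observed 0.25
```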

9

Is the association curvilinear

Curvilinear association

  • The relationship between two variables is not a straight line (for example, positive at first, then negative)

  • the correlation coefficient will be close to 0 even when the relationship is strong

  • detect it using scatterplots; r values will not describe the data well
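
A quick illustration with assumed data, not from the cards: a perfectly systematic inverted-U relationship produces an r near 0, which is why a scatterplot is needed to detect it.

```python
import numpy as np

x = np.linspace(0, 10, 101)
y = -(x - 5) ** 2 + 25          # inverted-U: rises, then falls

r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.3f}")            # ~0 even though the relationship is strong
# A scatterplot (e.g., matplotlib's plt.scatter(x, y)) would reveal the curve.
```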

10

directionality problem

the difficulty in determining which variable influences the other when a correlation is observed, as it's impossible to conclude whether variable X causes Y or vice versa

11

Internal validity and association claims

It is not necessary to interrogate internal validity for an association claim, but we need to protect ourselves from the temptation to make a causal inference.

  • a potential third variable can explain a bivariate association → SPURIOUS CORRELATION

  • to be a plausible alternative, the third variable must correlate with both variables in the association

12

External Validity and association claims

To whom can the association be generalized?

  • for this, size of sample does not matter as much as the way the sample was selected from the people of interest

  • moderator: a variable that influences the relationship between 2 other variables

    • example: pro sports team wins vs. attendance - dependent on the moderator of whether the city has low or high residential mobility.

13

Variables

How to operationalize them, how to describe their type and scale, how to interrogate construct validity

14

Association

How to describe and plot them, how to interrogate statistical validity

15

Reasons experiments support causal claims

1) COVARIANCE - do the results show that the causal variable is related to the outcome variable? Are distinct levels of IV associated with different levels of DV?

2) TEMPORAL PRECEDENCE - Does the study design ensure that the causal variable comes before the outcome variable in time?

3) INTERNAL VALIDITY - Does the study rule out alternative explanations for the results?

16

3 types of comparison groups

  • control group → a level of the IV intended to represent “no treatment”

  • treatment group → participants exposed to the level of the IV that involves the treatment of interest (e.g., the medication or experimental condition)

  • placebo group → group exposed to an inert treatment such as a sugar pill

17

Selection effect

when the kinds of participants in one condition are systematically different from those in the other

IF THERE IS SYSTEMATIC VARIATION WE CANNOT MAKE A CAUSAL CLAIM REGARDING THE IV AND DV

18

random assignment

helps to avoid selection effects; e.g., flip a coin (or use a random number generator) to assign each participant to a group (see the sketch below)
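
A minimal sketch with invented participant labels showing random assignment to two conditions; shuffling the whole list is the coin-flip idea, and also keeps the groups equal in size.

```python
import random

participants = [f"P{i}" for i in range(1, 21)]   # 20 hypothetical participants

random.shuffle(participants)                      # random order = coin-flip logic
half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print(treatment_group)
print(control_group)
```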

19

matched groups

Participants are sorted from lowest to highest on some relevant variable and grouped into sets of two (for a two-condition experiment), so the two highest-scoring participants form one matched set, the next two form another, and so on. The members of each matched set are then randomly assigned to different experimental conditions. (See the sketch below.)
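
A sketch of the matched-groups procedure described above, assuming a two-condition experiment and an invented matching variable (a pretest score): sort, pair off, then randomly split each matched pair between the conditions.

```python
import random

# Hypothetical participants with a matching variable (e.g., a pretest score)
scores = {"P1": 12, "P2": 30, "P3": 18, "P4": 25, "P5": 22, "P6": 15}

ranked = sorted(scores, key=scores.get)        # lowest to highest
group_a, group_b = [], []
for i in range(0, len(ranked), 2):             # take matched pairs
    pair = ranked[i:i + 2]
    random.shuffle(pair)                       # random assignment within each pair
    group_a.append(pair[0])
    group_b.append(pair[1])

print(group_a, group_b)
```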

20

Design Confound

Essentially, an alternative explanation: another variable that varies systematically with the IV.

  • a variable is only a confound when its levels vary systematically across levels of IV

21

systematic variability

changes in a dependent variable that are consistently related to the independent variable or to other factors, rather than random fluctuations; variability BETWEEN GROUPS

  • BAD FOR INTERNAL VALIDITY

22

unsystematic variability

random, unpredictable fluctuations or differences in the data, WITHIN GROUPS, that are not explained by the independent variable being studied

  • NOT A THREAT TO INTERNAL VALIDITY

  • problematic for statistical power, might see a null

23

EXPERIMENT

Researchers manipulate at least one variable and measure another.

24

Independent groups

different participants at different levels of IV

2 basic subtypes

  • posttest only →DV measured once after manipulation of IV

  • pretest/posttest →DV measured before and after manipulation of IV

    • both use random assignment

      • WHICH IS BETTER - IT DEPENDS

        • in some situations it's problematic to use pretest/posttest → if measuring the DV twice would cause fatigue or familiarity effects

        • posttest only can still be very powerful - random assignment and manipulation of IV

25

Within groups

same participants undergo all levels of IV

2 basic subtypes

  • concurrent → all levels of the IV are experienced at once, SIMULTANEOUSLY (the DV is a preference)

  • repeated measures → levels of the IV are experienced sequentially, ONE AFTER THE OTHER (condition 1 → measure DV, condition 2 → measure DV)

    • ADVANTAGES

    • no selection effects - groups are equivalent

    • unsystematic variability is less of a problem since participants are being compared to themselves.

    • statistical power - increased ability to detect differences between conditions

    • need fewer participants!

    • DISADVANTAGES

    • order effects (a potential confound)

    • might not be practical/possible

    • demand characteristics - participants can act in different ways based on knowledge of the IV

26

Order effects

exposure to 1 level of IV can influence responses to subsequent levels of IV

  • a confound because differences in DV may be explained by the sequence in which the levels were experienced

27

Counterbalancing

Used to avoid order effects

  • full counterbalancing → all possible orders of the conditions are used

  • partial counterbalancing → only a subset of the possible orders is used (see the sketch below)
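
Not from the cards: a sketch of full vs. partial counterbalancing for an assumed three-condition within-groups design. With 3 conditions, full counterbalancing needs all 3! = 6 orders; a partial scheme uses only a subset, such as a simple rotation (a Latin-square-style design where each condition appears once in each position).

```python
from itertools import permutations

conditions = ["A", "B", "C"]

# Full counterbalancing: every possible order of the conditions is used
full = list(permutations(conditions))
print(len(full), full)            # 6 orders

# Partial counterbalancing: only a subset of orders (here, a simple rotation)
partial = [conditions[i:] + conditions[:i] for i in range(len(conditions))]
print(partial)                    # [['A','B','C'], ['B','C','A'], ['C','A','B']]
```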

28

Practice effects/fatigue effects

participants may get better as a task continues OR get tired or bored towards the end.

29

Carryover effects

some form of contamination carries over from one condition to the next

30

Causal claim construct validity

How well was the DV measured?

How well was the IV manipulated?

  • manipulation check - an extra dependent variable that researchers insert into an experiment to convince themselves that the experimental manipulation worked.

  • pilot studies - a simple study with a separate group of participants completed before the main study to confirm the effectiveness of a manipulation.

31

causal claim external validity

To whom or what can the causal claim generalize

  • to other people/ situations

If external validity is poor, it is not as much of a concern as internal validity; generalization can be tested in future experiments.

32

causal claim statistical validity

How much? How precise? How large is the effect? Is it significant? Has study been replicated?

33

causal claim internal validity

Are there alternative explanations for the results?

  • were there design confounds?

  • if an independent-groups design was used, did the researchers control for selection effects using random assignment or matched groups?

  • if a within-groups design was used, did the researchers control for order effects by counterbalancing?

34

Null effects

The IV does not make a significant difference in the DV (the 95% CI includes 0)

35

What does it mean if the IV does not make a difference

it can mean…

  • not enough between groups variability

  • too much within groups variability

  • there really is no true difference

36

5 reasons for not enough between groups variability

1) Weak manipulations → The difference between IV levels is too small to be meaningful

2) Insensitive measures → operationalization of the DV does not have enough sensitivity to detect a difference between levels of the IV - should use detailed quantitative increments

3) Ceiling and Floor effects

  • ceiling effect: scores squeezed at top end of DV scale

  • floor effect: scores are squeezed together at bottom end of DV scale

Can be the result of problematic IV or DV

4) manipulation checks → an additional dependent measure that can reveal whether a weak manipulation or a ceiling/floor effect is responsible for the null result

5) design confounds acting in reverse → confounds usually threaten internal validity, but a confound can also work against the IV and hide a true difference, producing a null effect

37

Too much within groups variability

A null effect could arise due to this

  • unsystematic variability is not a problem for internal validity, but it can make it harder to find a true difference between conditions

    • CAUSES

    • measurement error

    • individual differences

    • situation noise

38

Measurement error

a human or instrument factor that can randomly inflate or deflate a person’s score on the DV

  • all DV’s involve a certain amount of measurement error

  • researchers try to keep these errors as small as possible

  • a group’s mean on the DV will reflect the true mean ± random measurement error

  • when distortions of measurement are random, they cancel out and do not affect the group mean

  • but a lot of measurement error results in more spread-out scores, making it harder to detect a difference between groups

Solutions

  • use reliable, precise measurements

  • measure more instances
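
A small simulation with assumed numbers illustrating the "measure more instances" point: random measurement error spreads individual scores out, but averaging several measurements per person pulls each score back toward its true value.

```python
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.normal(50, 5, size=100)          # hypothetical true DV values

noisy_once = true_scores + rng.normal(0, 10, size=100)                    # 1 measurement
noisy_avg = true_scores + rng.normal(0, 10, size=(20, 100)).mean(axis=0)  # mean of 20

print("SD with one noisy measurement:", round(noisy_once.std(), 1))
print("SD when 20 measurements are averaged:", round(noisy_avg.std(), 1))
# The averaged scores are much less spread out, so group differences are easier to see.
```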

39

Individual differences

differences across participants that add variability in DV scores

solutions

  • change the design → use a within-groups design, which accounts for individual differences

  • add more participants → less impact from any single participant

40

Situation noise

any kind of external distraction that could cause variability within groups and obscure between-group differences

  • can be minimized by carefully controlling the testing environment

41

95% Confidence intervals and precision

can have a narrower CI (more precise) by…

  • decrease error variability by using precise measurements, reducing situation noise, or studying only one type of animal/person

  • increase sample size

    For a 95% CI, the constant is at least 1.96 (see the worked example below)
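
A worked example with made-up numbers showing how the 1.96 constant is used: the 95% CI is roughly the estimate ± 1.96 standard errors, and the standard error shrinks as the sample grows.

```python
import math

estimate, sd, n = 4.0, 1.2, 100        # hypothetical estimate, SD, and sample size
se = sd / math.sqrt(n)                 # standard error shrinks as n grows
margin = 1.96 * se                     # the "constant" for a 95% CI
print(f"95% CI = [{estimate - margin:.2f}, {estimate + margin:.2f}]")
```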

42

If variability is low and sample size is high and we have a null effect…

The IV could have almost no effect on DV

The IV could have a true effect on the DV, but because of random errors of measurement or sampling, this one study didn’t detect it

43

Power

The likelihood that a study will yield a statistically significant result when the IV really has an effect

  • higher statistical power leads to more precise estimates

    • can be improved with…

    • within groups design

    • strong IV manipulation

    • a large sample size

    • less within groups variability
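
A simulation sketch, not from the cards, with an invented effect size and sample sizes: power is estimated as the proportion of simulated two-group studies that reach p < .05, and a larger sample raises it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def estimate_power(n_per_group, true_effect=0.5, sims=2000):
    """Fraction of simulated studies with p < .05 (independent-groups t-tests)."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        if stats.ttest_ind(treatment, control).pvalue < 0.05:
            hits += 1
    return hits / sims

print("n = 20 per group:", estimate_power(20))    # roughly .3-.4
print("n = 64 per group:", estimate_power(64))    # roughly .8
```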

44

Advantages of large samples

  • increased statistical power, resulting in more precise estimates, narrower CIs, and an easier time detecting a true effect

  • small samples are less precise, so they increase the likelihood of detecting an effect that is not actually there, making the result unlikely to replicate

45

Null effects are published less often

  • can be just as interesting

  • journals are becoming more likely to publish null research

  • may be covered less often in the popular media