Unit 10 - The Replication Crisis & Meta-analysis + Literature Reviews


Last updated 7:59 PM on 4/22/26

56 Terms

1
New cards

The Replication Crisis

_____________________ - Began with a paper in medicine: John Ioannidis published an article arguing that "the majority of research claims are false findings" because…

Statistics -> blamed for the replication crisis

  1. Studies with small sample sizes are less likely to produce true findings

  2. Studies with SMALL effects will fail to replicate

  3. The more tests you run, the more likely a whole lot of results will be false

  4. The more flexibility in design and in defining constructs, the more likely results will be false

  5. The greater the financial and other interests (if something is very well funded), the more likely results will be false due to those interests

  6. Therefore, most research findings are false! And we should look at fields other than medicine

2
New cards

Replication crisis in Psych

____________________ - The crisis reached psychology through a number of events:

  • Daryl Bem

  • John Bargh

  • Diederik Stapel

A number of authors got together to see how reproducible psychological science is, replicating hundreds of studies from the best journals:

  • About 1/3 to 1/2 replicated

  • About 1/2 to 2/3 did NOT

3
New cards

Daryl Bem

____________________ - Argued for ESP (extrasensory perception) and got a paper published in JPSP supporting it. He presented subjects with primes AFTER they had made their judgments and showed the primes still affected those judgments, as if subjects knew what was coming. This created much controversy: either we accept that ESP exists, or there's something wrong with the way we do research

4
New cards

John Bargh

____________________ - Priming: participants primed with old/elderly words walked more slowly afterwards. But much of this could be explained by experimenter bias. It was a popular area of research that failed to replicate; Kahneman referred to priming as a dumpster fire

5
New cards

Diederik Stapel

____________________ - Data fraud/falsified research

6
New cards

Statistical significance testing

Why is psych research failing to replicate?

____________________ - the argument that if your p-value is less than 0.05, the result must be important, and if it is greater than 0.05, it is NOT important.

ISSUE: If you run enough participants, you will always find an "important"/significant result; that's not the way science is supposed to work
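The issue can be made concrete with a quick calculation. A minimal sketch (standard library only; the `p_value_two_sample` helper and the d = 0.02 effect size are invented for illustration): it idealizes the observed standardized difference as exactly a trivially small d and shows how the p-value collapses below .05 once the sample is large enough.

```python
import math

def p_value_two_sample(d, n_per_group):
    """Two-sided p-value for a two-sample z-test, idealised so the
    observed standardized mean difference equals d exactly."""
    z = d * math.sqrt(n_per_group / 2)
    # standard-normal survival function via erf
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# A trivially small effect (d = 0.02, i.e. about 0.3 IQ points):
print(p_value_two_sample(0.02, 100))      # far above .05: "not important"
print(p_value_two_sample(0.02, 100_000))  # far below .05: "important"
```

Nothing about the effect changed between the two lines; only the sample size did.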

7
New cards

Small sample sizes

Why is psych research failing to replicate?

___________________ - Small samples produce quirky results (the small-sample fallacy) that cannot be replicated. At UR, for example, we can only get 60 participants
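How "quirky" small samples are can be simulated. A sketch under simple assumptions (standard library only; the N(100, 15) IQ population matches the numbers used later in these notes, everything else is illustrative): the means of 200 samples of size 10 swing over a far wider range than the means of 200 samples of size 1000.

```python
import random
import statistics

random.seed(1)

def sample_means(n, reps=200):
    """Means of `reps` random samples of size n from a N(100, 15) population."""
    return [statistics.fmean(random.gauss(100, 15) for _ in range(n))
            for _ in range(reps)]

spread = {}
for n in (10, 1000):
    means = sample_means(n)
    spread[n] = max(means) - min(means)   # range of the 200 sample means
    print(n, round(spread[n], 1))         # small n -> much wider range
```

A lucky (or unlucky) small sample can land far from the true mean of 100, which is exactly the kind of result that then fails to replicate.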

8
New cards

HARKing

Why is psych research failing to replicate?

__________________ - Hypothesizing After the Results are Known; the hypothesis is supposed to come BEFORE the study

9
New cards

Shrinking effect sizes

Why is psych research failing to replicate?

___________________ - The magnitude of an effect shrinks with each study, eventually disappearing. This can happen when an effect isn't really there, but by chance the original study found a large one.

  • A researcher gets an interesting result, gets it published, then keeps trying to replicate it.

10
New cards

Questionable research practices

Why is psych research failing to replicate?

_____________________ - Not doing research the way it's supposed to be done

  • Example: the Wonder Woman "posing" studies -> Researchers didn't get the results they wanted, so they added more participants, and more, and more, until there were enough to show the effect they wanted
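This "add participants until it works" practice (optional stopping) can be simulated. A sketch under simple assumptions (one-sample z-test with known σ = 1 and a TRUE null effect of zero; all names are illustrative): peeking at the data after every 10 participants inflates the false-positive rate well beyond the nominal 5%.

```python
import math
import random

random.seed(0)

def p_two_sided(z):
    # two-sided p-value from a standard-normal z
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def significant_after_peeking(max_n=200, batch=10):
    """Under a true null (mean 0), add `batch` subjects at a time and
    re-test after each batch; True if ANY peek gives p < .05."""
    data = []
    while len(data) < max_n:
        data.extend(random.gauss(0, 1) for _ in range(batch))
        z = (sum(data) / len(data)) * math.sqrt(len(data))
        if p_two_sided(z) < 0.05:
            return True
    return False

runs = 2000
rate = sum(significant_after_peeking() for _ in range(runs)) / runs
print(rate)  # well above the nominal 0.05 false-positive rate
```

Even though there is no effect at all, a researcher who keeps testing and stops at the first "significant" peek will find one far more than 5% of the time.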

11
New cards

Publication bias

Why is psych research failing to replicate?

____________________ - Non-statistically-significant research is not published, so in journals you ONLY see successes, never failures. This gives a false impression because we never get to see the failures

12
New cards

Publish or perish

Why is psych research failing to replicate?

________________ - Pressure to produce statistically significant results so you can be published, get promotions, etc.

This can put people off doing important research that just isn't statistically significant, or push them to publish questionable results

13
New cards

Eye-catching results

Why is psych research failing to replicate?

____________________ - We look for results that will attract attention, but these can be one-offs that don't replicate. We are always looking for counterintuitive results that no one would expect.

14
New cards

Research design 101

Why is psych research failing to replicate?

________________ - Just being sloppy in research design

15
New cards

Conflicts of interest

Why is psych research failing to replicate?

___________________ - Personal interests that can impact judgement

  • If you get a big grant to develop a new therapy, will you publish a lot of research showing it doesn't work?

16
New cards

Fraud

Why is psych research failing to replicate?

________________ - Deception for personal gain; fraudulent findings fail to replicate (Stapel; "painting the mice")

17
New cards

Not valuing replication studies

Why is psych research failing to replicate?

_______________ - Journals are reluctant to publish replications of old studies; people want exciting new research.

Unlike the hard sciences, psychology has no tradition of reproducing old studies

18
New cards

Responses to the Crisis

_______________ - There were many different responses; some of the old-school psychologists were not happy:

  • Susan Fiske

  • Roy Baumeister

  • Wolfgang Stroebe

19
New cards

Susan Fiske

_______________ - Was furious; she accused her peers of "methodological terrorism." What she was talking about was that when she and others published research, these "terrorists" appeared in blogs and criticized how the research was done.

  • She felt that's not the way science should work or the way colleagues should interact. She told critics to write up their criticisms and publish them in a journal so there could be a debate.

  • Critics argued that those journals are controlled by old-school researchers like her: their critiques often will not get published, so they're forced to go to blogs.

20
New cards

Roy Baumeister

_______________ - A better response: a paper arguing that replications fail because the replicating researchers are not as experienced as the original researchers.

  • You don't immediately know how to do a procedure well, just like a plumber learning the trade.

  • He also makes an interesting argument about small sample sizes, suggesting it's better to do a number of small studies while honing your technique than one big supersized study, which has lots of power but could be flawed

21
New cards

Wolfgang Stroebe

_______________ - “What can we learn from many labs replications?” Studies were done to try to replicate psychology research.

  • Referring to attempts by multiple laboratories that failed to replicate original findings

22
New cards

Critics of “replication crisis” article

Wolfgang Stroebe says:

  1. ISSUE: You would need a representative sample of ALL social psychology studies, not a handpicked set of studies to replicate

  2. ISSUE: Given that social psychology operates in a SOCIAL context, a failure to replicate a particular study tells us little (he referenced Aronson & Mills)

  1. SOLUTION: Replicating or failing to replicate a research result is itself a scientific finding and, as such, not final… It does not mean one study is right and the other wrong; we just have 2 results

  2. SOLUTION: Our focus should be on theories, and no one crucial experiment can determine the fate of a theory… We need multiple studies to see whether a theory is supported or not

23
New cards

Aronson & Mills

_______________ - Women asked to undergo an embarrassing initiation to join a group reported the group was more interesting than those who joined without one.

A study on how people interpret sex done in the 1960s would be interpreted much differently in 2020 because attitudes toward sex have changed

24
New cards

General public think…

Lilienfeld argued the public is skeptical of psych for the following reasons… it's not thought of as a proper science:

  • People think psych is just “common sense”

  • “Psych doesn’t use the scientific method”

  • “Psychology cannot generalize because everyone is unique”

  • “Can’t always predict people’s behaviour”

  • “Psych is not useful to society”

  • “Psych results don’t replicate”

He ended the paper by saying there is some reason the public is skeptical: if you wander into Chapters and find the psychology section, you’ll run into a self-help section that’s not scientific

25
New cards

Psych is just “common sense”

_______________ - A lot of results are obvious in hindsight. However, there are results that run contrary to common sense, and even the obvious ones are valuable.

26
New cards

“Psychology does not use the scientific method”

_______________ - we do.

27
New cards

“Psychology cannot generalize because everyone is unique”

_______________ - The claim: if we do a study on one group, it cannot generalize to another. This is both true and false. Everyone who has cancer is unique, but that doesn't mean you can't make generalizations; you must simply also recognize that individuals are unique.

28
New cards

“Psychology cannot make predictions”

_______________ - The claim: we can't predict how people will behave in a particular situation. But weather forecasters can't predict the weather beyond about 3 days, and many sciences struggle to make predictions of all sorts (e.g., earthquakes). Psychology can make predictions; of course they're limited.

29
New cards

“Psychology not useful to society”

_______________ - It is! For instance, look at treatments for people with autism, psychological disorders, etc.

30
New cards

“Psychology’s results don’t replicate”

_______________ - There are cases that fail, but if you look at the bulk of the literature, many findings replicate again and again.

31
New cards

Dr. Phil

_______________ - If you ask the world who the most famous psychologist is, they'll say his name. He has endorsed polygraphs and EEG feedback for ADHD, yet the APA invited him to speak at their convention. Not a reputable psychologist anymore.

32
New cards

Ferguson

Wrote a response to Lilienfeld. He contrasted the roles of a lawyer and a scientist, whose jobs are advocating for different people versus advocating for the truth.

  • He says we’ve recently begun to confuse science and advocacy: “The difficulty is that advocacy and science are diametrically opposed in method and aim…

  • Science is dedicated to a search for ‘truth’ even if that truth is undesired, inconvenient, unpalatable, or challenging to one’s personal or the public’s beliefs and goals…

  • Advocacy is concerned with constructing a particular message in pursuit of a predetermined goal that benefits oneself or others”

33
New cards

Where do we go from here?

___________________ - Many lessons have been learned. Crises come and go; the better we handle the current one, the better we'll handle future ones.

34
New cards

The Pyramid of Research Evidence

___________________ - At the lowest level of this pyramid are editorials, then case studies, then case-control studies, etc. All are valuable, and randomized controlled trials were long thought to sit at the top of evidence pyramids. These days there's something higher: systematic reviews.

35
New cards

Literature reviews

___________________ - Combine the results from many studies for greater power and value, and permit researchers to address big, broad questions. Baumeister said they are special!

The best form of research evidence.

Three kinds:

  1. Narrative reviews

  2. Systematic reviews

  3. Meta-analysis

36
New cards

Narrative reviews

Types of literature reviews 1/3

__________________ - Non-systematically examine the published literature to address broad theoretical questions (e.g., a term paper)

ISSUE: Authors often cherry-pick the information that suits them best

37
New cards

Systematic reviews

Types of literature reviews 2/3

_________________ - Systematically review the literature so that all of it is represented in some way; NOT a cherry-picked sample

38
New cards

Meta-analysis

Types of literature reviews 3/3

________________ - Systematically and statistically combine information from the literature, across ALL the different types of studies conducted…

The term was coined by Gene Glass, who did a large one asking “does psychotherapy work?” and concluded that it does.

Meta-analysis became popular in psychology, education, and medicine

39
New cards

Meta-analysis EXAMPLE

The Cochrane Collaboration: does meta-analyses on topics related to medicine

  • Its symbol comes from studies of premature babies suffering from lung problems, where a treatment was thought to be helpful.

  • Two studies suggested the treatment was effective: their effect differed from 0

  • 5 studies didn’t find any difference

  • A meta-analysis combined the results from all 7; the pooled estimate didn’t touch the 0 (no-effect) line, showing the drug was effective. Many babies were saved
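The pooling step can be sketched with a fixed-effect (inverse-variance) meta-analysis. The seven (effect, standard error) pairs below are invented for illustration, not the Cochrane data: each study's own 95% CI crosses zero, yet the pooled CI does not.

```python
import math

# Hypothetical study results: (effect estimate, standard error)
studies = [(0.30, 0.20), (0.10, 0.25), (0.25, 0.30), (0.15, 0.22),
           (0.20, 0.18), (0.05, 0.28), (0.35, 0.26)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # CI excludes 0
```

No single study is "significant" on its own, but combining them shrinks the standard error enough that the pooled interval clears zero, which is exactly the logic of the Cochrane example.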

40
New cards

Steps to conducting a meta-analysis

  1. Pick a topic

  2. Collect all relevant quantitative studies (a sufficient number, but not so many that it's overwhelming)

  3. Explicitly state inclusion/exclusion criteria for studies (which studies are in, which are out?). Example: for a meta-analysis in Australia, researchers in Brazil started one on the same topic but missed many important studies

Meta-analysis requires calculating effect sizes
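Step 3 amounts to writing the inclusion rule down explicitly so anyone can re-run the selection. A toy sketch (the study records and criteria are all made up):

```python
# Hypothetical pool of candidate studies found in the literature search
studies = [
    {"id": "A", "year": 1985, "randomized": True,  "reports_d": True},
    {"id": "B", "year": 2001, "randomized": True,  "reports_d": True},
    {"id": "C", "year": 2010, "randomized": False, "reports_d": True},
    {"id": "D", "year": 2015, "randomized": True,  "reports_d": False},
    {"id": "E", "year": 2018, "randomized": True,  "reports_d": True},
]

def include(study):
    """Explicit inclusion criteria: published 1990+, randomized design,
    and reports enough data to compute an effect size."""
    return study["year"] >= 1990 and study["randomized"] and study["reports_d"]

kept = [s["id"] for s in studies if include(s)]
print(kept)  # ['B', 'E']
```

Because the rule is explicit, the Brazilian team in the example above could have applied the exact same criteria and caught the studies they missed.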

41
New cards

IQ Scores

  • The average IQ in the population is μ=100

  • About σ=15 points above or below μ=100 captures most IQ scores (68%)

    • IQ scores are normally distributed; roughly 68% have scores between 85 and 115

  • Suppose a herbal supplement’s effect on IQ gives the treatment group an advantage of 3 IQ points over the control group (x̄=103)

  • d = (x̄ - μ)/σ
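Plugging the notes' numbers into that formula (using the usual sign convention, treatment mean minus population mean, so a treatment advantage comes out positive):

```python
mu, sigma = 100, 15   # population IQ mean and standard deviation
xbar = 103            # treatment-group mean after the supplement

d = (xbar - mu) / sigma
print(d)  # 0.2 -- a "small" effect by Cohen's benchmarks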

42
New cards

Cohen’s d

______________ - A popular measure of effect size

  • A d of .2 is a “small,” .5 a “medium,” and .8 a “large” effect size

  • Each study outcome is converted to an effect size

  • These effect sizes are then summed and averaged.
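The "summed and averaged" step, sketched with invented per-study d values; weighting each d by its sample size (one common choice) can shift the answer noticeably when study sizes differ:

```python
import statistics

# Hypothetical per-study Cohen's d values and sample sizes
ds = [0.45, 0.10, 0.30, 0.65, 0.20]
ns = [30, 120, 60, 25, 200]

unweighted = statistics.fmean(ds)
weighted = sum(d * n for d, n in zip(ds, ns)) / sum(ns)
print(round(unweighted, 2), round(weighted, 2))  # big studies pull the mean down
```

Here the large studies happened to find the small effects, so the weighted mean is lower than the simple average; which weighting to use is itself a methodological decision.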

43
New cards

Questions about effect size

  1. Is the average effect size different from zero (i.e., is there an effect)?

  2. Do the effect sizes vary across studies?

  3. Can we explain variability in effect sizes with moderator variables?
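Question 3, moderator analysis, is essentially "group the effect sizes and compare the group means." A minimal sketch with made-up child-report vs. parent-report effect sizes:

```python
from collections import defaultdict
from statistics import fmean

# Hypothetical effect sizes, each tagged with a moderator (who reported)
effects = [(-0.13, "child"), (-0.10, "child"),
           (-0.23, "parent"), (-0.25, "parent"), (-0.21, "parent")]

by_moderator = defaultdict(list)
for d, reporter in effects:
    by_moderator[reporter].append(d)

means = {who: fmean(ds) for who, ds in by_moderator.items()}
print(means)  # parent-report effects are larger (more negative)
```

If the group means differ, the moderator explains some of the variability among the effect sizes.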

44
New cards

Effect sizes EXAMPLE

Meta-analysis requires calculating effect sizes

EXAMPLE: If you had a sibling with a chronic illness, it may cause issues with a sibling because you are not paying attention to one kid over the other

Mean effect size d = -.20 (small and negative— 51 published studies, 103 effect sizes)

Effect sizes varied:

  1. Reports from child (-.13) / parents (-.23)

  2. what you were measuring– psych functioning (-.22), sibling relationship (+.12)

  3. Chronic illness type: cancer (-.28) and cardiac (+.20)

  4. Severity: Effects more likely when severity was LESS (-.26) compared to (-.17)

45
New cards

Cumulative and meta-meta-analyses

_____________ - Take the results from multiple meta-analyses and put them together

46
New cards

Qualitative meta-analysis

_______________ - using qualitative research and combining it into a meta-analysis

47
New cards

Four conclusions from any literature review (Baumeister)

  1. The hypothesis is correct

  2. The hypothesis is not proven, but it is the best guess

  3. There is not enough evidence, or the evidence is flawed!

  4. The HYPOTHESIS IS WRONG

48
New cards

Blinding

An assumption that non-epileptic seizures (no changes to brainwaves) may be related to child sexual abuse. A meta-analysis by him and Cathy Faye in Australia seemed to support this; the effect seemed quite large and substantial

BUT when the studies were divided into those with blinded and non-blinded researchers, the non-blinded researchers were more likely to rate someone as having a CSA history when they were searching for it

…the recommendation was that researchers MUST be blinded to avoid this

The hypothesis was WRONG, or the evidence was flawed

49
New cards

3 Criticisms of Meta-Analysis (Eysenck)

_____________________ - Meta-analyses were controversial when they first appeared. Hans Eysenck raised three criticisms:

  1. Apples & Oranges

  2. Garbage in, garbage out

  3. File Drawer

50
New cards

Apples and oranges

Three Criticisms of Meta-Analysis 1/3

_____________________ - Combining results from studies that measure different things

With all the different types of outcomes and theories (all kinds of therapies, all kinds of outcomes), how could you arrive at a single answer?

SOLUTION: Define the collection of studies properly so you avoid combining studies that measure different things

51
New cards

Garbage in, garbage out

Three Criticisms of Meta-Analysis 2/3

__________________ - Combining low-quality studies and giving them the same weight as good studies; but defining “good quality” is an issue in and of itself

SOLUTION: Rate studies for quality and give more weight to those judged good, and indicate how you made those determinations

52
New cards

File drawer

Three Criticisms of Meta-Analysis 3/3

___________________ - The failure to obtain all, or a representative sample, of the relevant studies

  • The published literature is overwhelmingly statistically significant; you don’t find the studies that DIDN’T work

SOLUTION: Use funnel plots and look for a symmetric “funnel” spread of studies; if studies are missing from part of the funnel, that could suggest publication bias.
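The file-drawer problem can be demonstrated by simulation (standard library only; the true d = 0.2 and n = 30 per group are arbitrary choices): if only the studies that reach p < .05 get "published," the published literature dramatically overstates the true effect.

```python
import math
import random

random.seed(7)

TRUE_D, N = 0.2, 30   # small true effect; 30 subjects per group

def one_study():
    """Observed d and p from a two-group study (sigma known, equal to 1)."""
    t = [random.gauss(TRUE_D, 1) for _ in range(N)]
    c = [random.gauss(0, 1) for _ in range(N)]
    d_obs = sum(t) / N - sum(c) / N
    z = d_obs * math.sqrt(N / 2)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return d_obs, p

all_d, published_d = [], []
for _ in range(3000):
    d_obs, p = one_study()
    all_d.append(d_obs)
    if p < 0.05:                    # the file drawer: only "significant"
        published_d.append(d_obs)   # results reach the journals

mean_all = sum(all_d) / len(all_d)
mean_published = sum(published_d) / len(published_d)
print(round(mean_all, 2), round(mean_published, 2))  # published mean is inflated
```

A meta-analysis of only the "published" studies here would badly overestimate the effect, which is exactly why retrieving unpublished studies (and checking the funnel plot) matters.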

53
New cards

Psychotherapy and Meta-Analysis (Glass)

___________________ - I think Eysenck was motivated by his disagreement with the conclusion Gene Glass reached. Eysenck was himself a behavioural therapist but didn’t believe all therapies were effective; to him, any method suggesting all therapies work was problematic.

  • He invoked the line “Everyone has won and all must have prizes” (Alice in Wonderland)

54
New cards

Ferguson Video game violence meta-analysis

_________________ - Found a small effect between video games and violence: a slight negative effect, but ALSO a prosocial effect of roughly the same size

“Fatally flawed [meta-analysis]… should not have been published in this journal or any other journal”

55
New cards

Effects of CSA Meta-analysis

___________________ - Caused a big controversy… It looked at college students (a high-functioning group) with a history of child sexual abuse and found that the negative effects were neither common nor typically intense; not so different from others’ conclusions.

It was a long paper, and the hunch is that the reviewers, having read through so much, didn’t pay as much attention to what was said in the discussion. The disagreements all related to the last few pages. Nobody really noticed them originally, but then Dr. Laura said the paper was endorsing abuse, Congress got involved, etc.

The paper had said…

  • [For children] “A willing encounter with positive reactions would be labeled simply adult-child sex”

  • “Adolescents are different from children in that they…know whether they want a particular sexual encounter, and to resist an encounter that they do not want…adult-adolescent sex”

56
New cards

Meta-Analysis Today

___________________ - Meta-analysis still remains a fairly controversial technique. Even so, some of his most-cited publications are his meta-analyses; you get more impact from one than from publishing a regular study. Literature reviews can be very influential.