What is a replication study?
A study that repeats a previous study to see if the same results happen again.
Direct replication
Same methods, same procedures, same conditions. Goal = check if the original effect shows up again.
Conceptual replication
Tests the same idea but with different methods or measures.
What value do replication studies provide?
They check whether findings are real or flukes.
They help identify false positives.
They show when an effect happens and when it doesn't.
They improve precision of effect sizes.
Why might a replication fail?
The original result was a false positive.
The replication was a false negative.
Small, unknown differences in procedure changed the effect.
The effect only works under certain conditions (moderators).
Type 1 error
Rejecting the null hypothesis when it is actually true (a false positive).
Duhem-Quine Thesis
No hypothesis is tested in isolation; every test also relies on auxiliary assumptions, so a failed prediction alone cannot tell you which part was wrong.
What are the three reasons a study may not find a significant result?
The theory is wrong.
The method is wrong (bad design, wrong measure).
The auxiliary assumptions are wrong (hidden factors messed things up).
About what percent of studies failed to replicate in the Nosek paper?
64%
Were the studies randomly sampled in the Nosek paper?
No, they were quasi-random.
What is p‑hacking?
Trying lots of analyses until something becomes statistically significant.
Common ways:
Trying many variables
Trying many statistical tests
Stopping data collection early
Dropping outliers
Changing hypotheses after seeing results
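The arithmetic behind "trying many variables/tests" can be sketched directly (the numbers of analyses here are illustrative assumptions, not from the cards): if each extra analysis has a 5% chance of a false positive on its own, the chance that at least one comes out significant climbs quickly.

```python
# Illustrative sketch: probability that AT LEAST ONE of k independent
# analyses is "significant" by chance when each has a 5% false-positive
# rate. P(at least one) = 1 - P(none) = 1 - 0.95^k.
for k in (1, 5, 10, 20):
    p_any = 1 - 0.95 ** k
    print(f"{k:2d} analyses -> P(at least one false positive) = {p_any:.2f}")
# e.g. with 20 analyses the chance is already about 0.64
```

This is why a p-hacked "discovery" is so easy to manufacture: the researcher only reports the one analysis that worked.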
Which is more common: p‑hacking or complete data fabrication?
p-hacking
the file drawer problem
Studies with non‑significant results get stuck in researchers' "file drawers" and never published.
How does publication bias impact the file drawer problem?
Journals prefer exciting, significant results → researchers hide null results → literature becomes biased.
How does our use of alpha (.05) create a file drawer problem even without p‑hacking?
If 5% of studies will be significant by chance, journals will publish those 5% and ignore the 95% nulls → creates a biased picture.
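The "5% significant by chance" claim on this card can be checked with a small simulation (a minimal sketch with made-up data; the z-test helper and sample sizes are my own illustrative choices): run many studies where the null is genuinely true and count how often they come out significant at alpha = .05.

```python
# Simulate many "null" studies (both groups drawn from the SAME
# population) and count how often the comparison is significant anyway.
import math
import random

def two_sample_z_p(a, b):
    """Approximate two-sided p-value for a difference in means (z-test)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # two-sided p from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
ALPHA = 0.05
n_studies = 2000
false_positives = 0
for _ in range(n_studies):
    group_a = [random.gauss(0, 1) for _ in range(50)]  # null is true:
    group_b = [random.gauss(0, 1) for _ in range(50)]  # same population
    if two_sample_z_p(group_a, group_b) < ALPHA:
        false_positives += 1

rate = false_positives / n_studies
print(f"False-positive rate under a true null: {rate:.3f}")  # ~0.05
```

Those roughly 5% of studies are exactly the ones journals are most likely to publish, while the other ~95% of nulls go into the file drawer.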
How does p‑hacking enhance the file drawer problem?
It increases the number of false positives → more flashy results get published → even more nulls get hidden.
Why do so many published studies report p‑values between .01 and .05?
p‑hacking pushes results just under .05
Journals prefer "barely significant" results
True large effects are rare, so extremely tiny p‑values (<.001) are uncommon
How does peer review work?
Editors decide whether a paper is worth sending to reviewers.
Reviewers are experts who judge the quality, methods, and importance.
Editors make the final decision.
editors
gatekeepers; make decisions
Reviewers
unpaid experts who critique the paper.
How does Null Hypothesis Significance Testing work?
You assume "no difference" (null hypothesis). If your p‑value is below .05, you reject the null.
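The NHST logic on this card can be sketched with a permutation test (the data and group sizes below are made up for illustration): assume the null, ask how often shuffled relabelings of the data produce a difference as extreme as the observed one, and reject the null if that proportion (the p-value) falls below .05.

```python
# Illustrative NHST sketch via a permutation test (hypothetical data).
import random

treatment = [8.1, 7.9, 9.2, 8.8, 9.5, 8.4, 9.0, 8.7]
control   = [7.2, 6.9, 7.8, 7.5, 7.0, 7.7, 7.4, 7.1]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(control)

random.seed(0)
pooled = treatment + control
n_shuffles = 10_000
n_extreme = 0
for _ in range(n_shuffles):
    random.shuffle(pooled)               # relabel groups under the null
    diff = mean(pooled[:8]) - mean(pooled[8:])
    if abs(diff) >= abs(observed):       # two-sided comparison
        n_extreme += 1

p_value = n_extreme / n_shuffles
print(f"p = {p_value:.4f}; reject the null at .05? {p_value < 0.05}")
```

Note what the p-value is: the probability of data this extreme *if the null were true*; it is not the probability that the hypothesis is true.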
How does gender research show flaws in Null Hypothesis Significance Testing?
Favreau shows NHST can find a "significant difference" even when only a small minority of people differ.
Does finding a group difference mean everyone in one group differs from everyone in the other?
No. Groups can overlap almost completely even when the means differ.
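The overlap point can be made concrete with a standard result for two normal distributions with equal SDs (the effect sizes plugged in below are illustrative assumptions, not from the cards): the overlapping proportion is 2·Φ(−|d|/2), where d is the mean difference in SD units (Cohen's d).

```python
# How much two equal-SD normal distributions overlap for a given
# standardized mean difference d (Cohen's d).
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def overlap(d):
    """Overlapping coefficient of two equal-SD normals separated by d SDs."""
    return 2 * normal_cdf(-abs(d) / 2)

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: distributions overlap by {overlap(d):.0%}")
# a "small" effect (d = 0.2) leaves the groups about 92% overlapping
```

So a statistically significant group difference is fully compatible with most members of the two groups being indistinguishable.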
Are people who use alternatives to p‑values given the same credibility? What does this mean for falsification?
No — researchers using alternatives (effect sizes, Bayesian methods) often get less credibility. This shows falsification is not the only valid way to do science, even though the field treats it that way.
What is preregistration?
Writing down your study plan before collecting data and posting it publicly.
HARKing
hypothesizing after results are known
Registered Report
A journal reviews and accepts your plan before data collection, guaranteeing publication regardless of results.
What are critiques of preregistration, and how are they rebutted?
Critiques:
"It limits creativity."
"Researchers can still cheat."
"It takes too much time."
Rebuttals:
You can still do exploratory analyses — you just label them.
Transparency makes cheating harder.
It improves credibility.
Which fixes publication bias: preregistration or registered reports?
Registered Reports. Because journals commit to publishing regardless of results.
What is the difference between confirmatory and exploratory analyses?
Confirmatory: Planned ahead of time; tied to preregistration.
Exploratory: Done after looking at the data; must be labeled as such.
HARKing vs p-hacking
HARKing: Hypothesizing After Results are Known — pretending your hypothesis came first.
P‑hacking: Trying many analyses until something is significant.
What are potential flaws in registered reports?
They can't prevent all errors.
Even high‑powered studies can still produce false positives/negatives.
Replications may still differ due to unknown moderators.
reduction
Use and harm as few animals as possible while maximizing the information learned and the good done.
refinement
Minimize discomfort and distress for the animal as much as possible.
replacement
Use animals only if the research cannot be done without them; substitute non-animal alternatives where possible.
preregistration allows
reviewers and editors to see that you are not p-hacking or HARKing
If CNM remains significant
it predicts something above and beyond authoritarianism, dominance, and nationalism.
Tuskegee Syphilis Study Ethical Flaws
Black men were not informed of their condition and were denied effective treatment
No informed consent
Withholding treatment
Exploiting a vulnerable population
Racism and deception
Thalidomide Tragedy Ethical Flaws
The drug caused "approximately ten thousand children with severe deformities" due to poor testing.
Inadequate safety testing
Failure to communicate risks
Harm to vulnerable group (pregnant women & infants)
Stanford Prison Experiment
resulted in emotional trauma for participants
No safeguards against psychological harm
Coercive environment
Lack of monitoring
No early stopping rules
Current Ethical Standards (Human Research)
Explain purpose, risks, procedures
Voluntary participation
Debriefing required when deception is used
Confidentiality & Privacy
Researchers must protect personal information... from unauthorized access or disclosure.
Beneficence
Researchers must maximize possible benefits while minimizing potential harm. Risk-benefit analysis. Avoid unnecessary harm
Justice
Ensures fair selection and treatment preventing the exploitation of vulnerable populations.
Fair distribution of risks and benefits
IRBs
Evaluate risks & benefits
Review consent procedures
Ensure confidentiality
Monitor ongoing studies
Can halt studies if violations occur
Ethical practices ensure
reliable and valid findings and prevent biases, fabrications, or errors.
It:
Increases credibility
Improves replicability
Builds public trust
Reduces bias
Ensures transparent methods
Arguments for animal research (from APA Animal Guidelines)
Advances knowledge of behavior
Benefits humans and animals
Some questions cannot be answered without animals
When done ethically, animals receive humane care
Research must follow strict laws & IACUC oversight
Arguments against animal research
Potential for pain or distress
Animals cannot consent
Some procedures may be invasive
Risk of poor housing or handling
Moral concerns about using sentient beings
How to Acquire Lab Animals Ethically
Animals "are to be acquired lawfully."
Transport must include proper "food, water, ventilation, and space."
Wild animals must be trapped humanely.
Endangered species require permits.
Additional Training Required for Animal Research
Personnel must be "trained and competent" and receive "explicit instruction in experimental methods and in the care, maintenance, and handling of the species."
Purpose of animal training
Ensure humane treatment
Recognize stress, illness, or abnormal behavior
Follow federal laws (Animal Welfare Act, PHS Policy)
Use proper surgical, anesthetic, and euthanasia techniques
Maintain accurate records