JT

8. History

Bem's Experiment and Confirmatory Research

  • In 2011, Bem published psychology experiments claiming to show that people can predict future events.
  • A team ran a confirmatory replication and, using Bayesian analysis, found no evidence supporting Bem's claims.
  • Preregistration helps protect experiments against such biases.

Bad Science and the Need for Transparency

  • Psychological research often suffers from poor reliability due to practices such as HARKing (Hypothesizing After the Results are Known) and cherry-picking.
  • It's essential to differentiate between exploratory and confirmatory research.
  • Transparency and preregistration of research plans are necessary to improve reliability.

Good Science and Preregistration

  • Researcher honesty is crucial, but biases can still influence judgment.
  • Preregistering study designs and analysis plans constrains these biases before the data are seen.
  • Splitting data into exploratory and confirmatory sets, combined with preregistration, supports transparency and academic integrity (see the sketch after this list).
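A minimal sketch of the split-then-preregister workflow, in Python. The file name, 50/50 split, and random seed are illustrative assumptions, not part of the notes.

```python
import pandas as pd

# Illustrative file name and split; adapt to the actual study.
df = pd.read_csv("survey_data.csv")

# Hold out a confirmation set before any analysis is run.
exploration = df.sample(frac=0.5, random_state=42)
confirmation = df.drop(exploration.index)

# 1. Explore freely on the exploration set to generate hypotheses.
# 2. Preregister the resulting hypotheses and analysis plan.
# 3. Run only the preregistered analyses on the untouched confirmation set.
```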

Detecting Biases in Published Literature

  • Detecting biases in single studies is challenging without registered protocols.
  • Tests of small-study effects assess whether effect sizes are related to study size (see the regression sketch after this list).
  • Selection models assess whether the pattern of results suggests a filtering process such as publication bias.
  • Excess significance tests evaluate whether the number of statistically significant results is higher than the studies' power would predict.
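One common test of small-study effects is an Egger-style regression of standardized effects on precision. The sketch below uses only NumPy, and the study data are invented for illustration; a real analysis would use per-study effect estimates and standard errors from the literature.

```python
import numpy as np

def egger_test(effects, std_errors):
    """Egger-style regression test for small-study effects (sketch).

    Regress the standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero indicates funnel-plot asymmetry, which is
    consistent with (but does not prove) publication or reporting bias.
    """
    se = np.asarray(std_errors, dtype=float)
    y = np.asarray(effects, dtype=float) / se   # standardized effects
    x = 1.0 / se                                # precision
    X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + slope
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    intercept = coef[0]
    t_intercept = intercept / np.sqrt(cov[0, 0])
    return intercept, t_intercept

# Made-up numbers: smaller studies (larger SE) report larger effects,
# the classic small-study pattern.
effects = np.array([0.80, 0.60, 0.50, 0.30, 0.25, 0.20])
ses     = np.array([0.40, 0.30, 0.25, 0.15, 0.12, 0.10])
intercept, t = egger_test(effects, ses)
print(f"Egger intercept = {intercept:.2f}, t = {t:.2f}")
```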

Field-Wide Assessments and Bias Correction

  • In neuroimaging, field-wide assessments check whether the number of reported foci is related to study sample size.
  • Tests of small-study effects and selection models can be extended to correct for potential biases.
  • The trim-and-fill method imputes presumed missing studies to correct for bias (see the sketch after this list).
  • More robust correction methods require raw data, protocols, analysis code, and unpublished information.
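A rough sketch of trim-and-fill, assuming a fixed-effect model, the L0 estimator, and suppression of small or negative effects only; the input numbers are invented. This is not a substitute for a vetted implementation.

```python
import numpy as np

def trim_and_fill(effects, variances, max_iter=25):
    """Simplified trim-and-fill sketch (fixed-effect model, L0 estimator).

    Assumes studies are missing on the LEFT of the funnel plot. Real
    implementations also handle the other side, random-effects weighting,
    and alternative estimators of the number of missing studies.
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    n = len(y)
    k0 = 0
    for _ in range(max_iter):
        # Trim the k0 most extreme right-side studies, re-estimate the mean.
        keep = np.argsort(y)[: n - k0]
        w = 1.0 / v[keep]
        mu = np.sum(w * y[keep]) / np.sum(w)
        # Rank the absolute centered effects; Tn sums ranks of studies above mu.
        centered = y - mu
        ranks = np.argsort(np.argsort(np.abs(centered))) + 1
        Tn = ranks[centered > 0].sum()
        # L0 estimator of the number of suppressed studies.
        k0_new = max(0, int(round((4 * Tn - n * (n + 1)) / (2 * n - 1))))
        if k0_new == k0:
            break
        k0 = k0_new
    # Fill: mirror the k0 rightmost studies around the adjusted mean.
    if k0 > 0:
        idx = np.argsort(y)[-k0:]
        y = np.concatenate([y, 2 * mu - y[idx]])
        v = np.concatenate([v, v[idx]])
    w = 1.0 / v
    mu_adjusted = np.sum(w * y) / np.sum(w)
    return k0, mu_adjusted

# Made-up effects/variances skewed toward positive results, for illustration.
effects   = np.array([0.9, 0.7, 0.6, 0.5, 0.45, 0.4, 0.35])
variances = np.array([0.20, 0.15, 0.10, 0.06, 0.05, 0.04, 0.03])
print(trim_and_fill(effects, variances))
```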

Empirical Evidence of Biases in Cognitive Sciences

  • Neuroimaging studies show an excess of statistically significant results.
  • Smaller animal studies report more favorable results.
  • Psychological science has a bias toward reporting positive results.
  • P-hacking exploits flexible analysis choices to push results just below the 0.05 threshold, leaving a telltale pile-up of reported p-values just under .05 (see the caliper sketch after this list).
  • Industry-sponsored trials report significant results and larger effects more often than independently funded trials.
  • Candidate gene studies often lack reproducibility.
  • Genome-Wide Association Studies (GWAS), which require international collaboration and rigorous replication practices, have improved reliability.
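The pile-up of p-values just below .05 can be probed with a simple caliper-style comparison, sketched below with invented p-values; the bin width and the use of a binomial test are illustrative choices, and real analyses typically use tools such as p-curve on actual published values.

```python
import numpy as np
from scipy import stats

def caliper_test(p_values, threshold=0.05, width=0.01):
    """Caliper-style check: compare how many reported p-values fall just
    below the significance threshold versus just above it.

    Under unbiased reporting the two narrow bins should be roughly equally
    populated; a large surplus just below .05 is a warning sign.
    """
    p = np.asarray(p_values, dtype=float)
    just_below = int(np.sum((p >= threshold - width) & (p < threshold)))
    just_above = int(np.sum((p >= threshold) & (p < threshold + width)))
    # Under no bias, a value landing in these bins is equally likely to
    # fall on either side of the threshold.
    result = stats.binomtest(just_below, just_below + just_above, p=0.5)
    return just_below, just_above, result.pvalue

# Invented p-values for illustration only: a pile-up in the .04-.05 bin.
reported_p = [0.041, 0.044, 0.047, 0.049, 0.049, 0.038, 0.052, 0.012, 0.003, 0.061]
below, above, pval = caliper_test(reported_p)
print(f"just below .05: {below}, just above: {above}, binomial p = {pval:.3f}")
```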

Approaches to Prevent Biases

  • Scientists may struggle to recognize or control their own biases.
  • Study registration ensures public recording of all trials.
  • Pre-specification distinguishes between pre-specified and exploratory analyses.
  • Data and code sharing promotes reproducibility and research scrutiny.
  • Replication verifies findings and reduces publication bias.
  • Standardized reporting checklists and incentives for open data further improve transparency.

Incentive Structures

  • Shifting academic incentives from quantity to quality and reproducibility can mitigate biases.

Crisis of Confidence in Science

  • The scientific community faces a crisis due to fraud, failed replications, and errors.
  • Human factors such as cognitive biases contribute to these errors, not only deliberate misconduct.

Common Issues in Research

  • Post hoc hypothesizing (HARKing) presents exploratory findings as if they had been predicted in advance, leading to biased results.
  • Outcome switching in randomized controlled trials (RCTs) undermines reliability.
  • Design failures and data-analysis issues such as p-hacking are prevalent (see the simulation after this list).
  • Publication bias favors positive results.
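A quick simulation of one p-hacking tactic, reporting only the most significant of several measured outcomes, shows how the false-positive rate is inflated well above the nominal 5%. The group sizes, number of outcomes, and simulation count are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_outcomes, n_per_group=30, n_sims=5000):
    """Simulate studies with NO true effect in which the researcher tests
    several independent outcomes and reports whichever is most significant.
    Returns the fraction of studies declared 'significant'."""
    hits = 0
    for _ in range(n_sims):
        p_values = []
        for _ in range(n_outcomes):
            a = rng.normal(size=n_per_group)   # control group, no effect
            b = rng.normal(size=n_per_group)   # treatment group, no effect
            p_values.append(stats.ttest_ind(a, b).pvalue)
        if min(p_values) < 0.05:               # report only the 'best' outcome
            hits += 1
    return hits / n_sims

print("1 outcome :", false_positive_rate(1))   # ~0.05, the nominal rate
print("5 outcomes:", false_positive_rate(5))   # ~0.23, inflated by selective reporting
```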

Underlying Causes and Proposed Solutions

  • Researchers are influenced by confirmation and hindsight biases.
  • Incentive structures exacerbate these issues.
  • Proposed solutions include transparency, preregistration, registered reports, replication, improved training, and alternative statistical techniques.