Hypothesis testing: A statistical method that evaluates whether observed data are inconsistent with a null hypothesis, which assumes no effect or relationship.
P-value: The probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true.
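For concreteness, here is a minimal sketch of a significance test in Python, using SciPy's independent-samples t-test; the group measurements are invented for illustration.

```python
# A minimal sketch: computing a p-value for a two-group comparison.
from scipy import stats

control = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]  # illustrative measurements
treated = [5.6, 5.4, 5.9, 5.5, 5.7, 5.3]

# Null hypothesis: both groups share the same population mean.
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value says data this extreme would be rare if the null were
# true; it is NOT the probability that the null hypothesis is true.
```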
Type I error: Incorrectly rejecting a true null hypothesis, i.e., concluding an effect exists when it does not.
Type II error: Failing to reject a false null hypothesis, i.e., concluding no effect exists when one does.
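Both error rates can be estimated by simulation. A minimal sketch, assuming normally distributed data, 30 observations per group, an illustrative true effect of 0.5 standard deviations, and alpha = 0.05:

```python
# A minimal sketch: estimating Type I and Type II error rates by simulation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_sims = 0.05, 30, 10_000

type1 = type2 = 0
for _ in range(n_sims):
    # Null true: both samples drawn from the same distribution.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type1 += 1  # rejected a true null (Type I error)
    # Null false: the second sample's mean is shifted by 0.5.
    c, d = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if stats.ttest_ind(c, d).pvalue >= alpha:
        type2 += 1  # failed to reject a false null (Type II error)

print(f"Type I rate  ~ {type1 / n_sims:.3f} (nominal alpha = {alpha})")
print(f"Type II rate ~ {type2 / n_sims:.3f} (power ~ {1 - type2 / n_sims:.3f})")
```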
Familywise error rate: The probability of making at least one Type I error across a set of multiple statistical tests; it grows with the number of tests.
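For m independent tests each run at level alpha, the chance of at least one Type I error is 1 - (1 - alpha)^m. A short sketch of that arithmetic, with a Bonferroni correction (alpha / m per test) shown for comparison:

```python
# How the familywise error rate grows with the number of independent tests.
alpha = 0.05
for m in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** m            # P(at least one false positive)
    corrected = 1 - (1 - alpha / m) ** m   # with a Bonferroni correction
    print(f"{m:2d} tests: FWER = {fwer:.3f}, corrected = {corrected:.3f}")
```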
Questionable research practices (QRPs): Research behaviors that compromise the integrity of findings, such as selective reporting, cherry-picking data, or p-hacking.
P-hacking: Manipulating data or analyses (e.g., running repeated tests or stopping data collection early) to achieve a statistically significant p-value.
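One such tactic, optional stopping, is easy to simulate: peek at the p-value after every batch of observations and stop the moment it dips below 0.05. The batch size and limits below are illustrative, and the null hypothesis is true in every simulated study.

```python
# A minimal sketch: optional stopping inflates the false positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, batch, max_n, n_sims = 0.05, 10, 100, 2_000

false_positives = 0
for _ in range(n_sims):
    a, b = rng.normal(0, 1, batch), rng.normal(0, 1, batch)
    while len(a) <= max_n:
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1  # a "significant" result from a true null
            break
        # Not significant yet: collect another batch and test again.
        a = np.concatenate([a, rng.normal(0, 1, batch)])
        b = np.concatenate([b, rng.normal(0, 1, batch)])

print(f"False positive rate with peeking: {false_positives / n_sims:.3f} "
      f"(nominal alpha = {alpha})")
```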
HARKing (hypothesizing after the results are known): Formulating hypotheses after analyzing the data and presenting them as if they had been pre-planned.
Cherry-picking: Drawing conclusions by selectively focusing on patterns that fit a hypothesis while ignoring data that do not.
File drawer problem: The tendency for studies with null or nonsignificant results to remain unpublished, skewing the published literature.
Publication bias: The preference for publishing studies with statistically significant or positive results, leading to an unbalanced representation of evidence in the literature.
Preregistration: The practice of publicly documenting a study's methods and hypotheses before data collection begins, to prevent biases such as HARKing and selective reporting.
Registered report: A publication format in which study plans are peer-reviewed and approved before data collection, ensuring transparency and methodological rigor.