An RCT shows a small treatment effect that is statistically significant (p < 0.05). What question should you ask before concluding causality?
Check for confounding, chance, and bias, especially loss to follow-up or poor adherence, which can exaggerate the apparent effect even when p < 0.05.
If adherence differs by arm, what type of analysis preserves internal validity?
Intention-to-treat analysis; it keeps participants in their original groups.
Why might per-protocol analysis overestimate efficacy?
It excludes non-adherent subjects, who often have worse outcomes, making the treatment look stronger.
How does cluster randomization affect power?
It reduces power because observations within clusters are correlated; you have fewer independent units.
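A minimal sketch of how clustering erodes power, using the standard design-effect formula DE = 1 + (m − 1) × ICC for equal cluster sizes (the cluster size, ICC, and total n below are illustrative values, not from the card):

```python
# Design effect for cluster randomization (assumes equal cluster sizes).
# DE = 1 + (m - 1) * ICC, where m is cluster size and ICC is the
# intracluster correlation coefficient.
def design_effect(cluster_size: int, icc: float) -> float:
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n: int, cluster_size: int, icc: float) -> float:
    # Number of independent-equivalent observations after clustering.
    return total_n / design_effect(cluster_size, icc)

# 1000 participants in clusters of 20 with ICC = 0.05:
print(design_effect(20, 0.05))      # ~1.95
print(effective_n(1000, 20, 0.05))  # ~513 independent-equivalent subjects
```

Even a modest ICC of 0.05 nearly halves the effective sample size here, which is why cluster trials need larger n for the same power.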
Why include a placebo group when a standard treatment exists?
Only if ethically justified: to measure the pure treatment effect while controlling for expectation (placebo) effects.
What’s the main trade-off between internal and external validity?
Tighter control increases accuracy inside the study but may reduce generalizability to real-world settings.
If several participants drop out early, how does that affect the KM curve?
Censor marks appear; the curve doesn’t drop but later intervals have smaller denominators.
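A stdlib-only sketch of the Kaplan-Meier estimator showing this behavior (the event times below are made up for illustration): censored subjects never step the curve down, but they shrink the at-risk denominator for later events.

```python
# Kaplan-Meier estimate with censoring (illustrative, pure stdlib).
# Each subject is (time, event): event=1 means the event occurred,
# event=0 means the subject was censored (dropped out) at that time.
def kaplan_meier(data):
    data = sorted(data)
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for time, e in data if time == t and e == 1)
        removed = sum(1 for time, e in data if time == t)
        if deaths:  # the curve only steps down at event times
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed  # censoring shrinks the denominator
        i += removed
    return curve

# Two early dropouts (censored at t=2, t=3) don't drop the curve,
# but they thin the at-risk set: survival falls to 2/3 at t=5
# and 1/3 at t=6 instead of 4/5 and 3/5.
print(kaplan_meier([(2, 0), (3, 0), (5, 1), (6, 1), (7, 0)]))
```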
If α is decreased from 0.05 to 0.01 with the same n, what happens to power?
Power decreases; a stricter significance threshold requires a larger effect or sample to reject the null.
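A quick numeric check using the normal approximation for a two-sided one-sample z-test (the effect size d = 0.3 and n = 100 are arbitrary illustrative choices):

```python
# Power of a two-sided z-test under the normal approximation,
# ignoring the negligible lower rejection tail.
from statistics import NormalDist

def power(d: float, n: int, alpha: float) -> float:
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    shift = d * n ** 0.5                          # noncentrality
    return 1 - NormalDist().cdf(z_crit - shift)

# Same effect (d = 0.3) and sample (n = 100); stricter alpha, less power:
print(power(0.3, 100, 0.05))  # ~0.85
print(power(0.3, 100, 0.01))  # ~0.66
```

Tightening alpha from 0.05 to 0.01 here drops power from about 85% to about 66% with everything else held fixed.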
A trial finds no effect but had low power — what can you conclude?
You can’t rule out a true effect; the study may simply have been underpowered (Type II error).
A new screening test shows Se = 98%, Sp = 90%. In a population with 1% disease prevalence, what’s the most common type of error?
False positives; low prevalence makes PPV small even with high sensitivity and specificity.
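The card's numbers can be checked directly with Bayes' theorem (Se = 0.98, Sp = 0.90, prevalence = 1%, all from the question):

```python
# PPV from sensitivity, specificity, and prevalence via Bayes' theorem.
def ppv(se: float, sp: float, prev: float) -> float:
    true_pos = se * prev              # P(test+ and diseased)
    false_pos = (1 - sp) * (1 - prev)  # P(test+ and healthy)
    return true_pos / (true_pos + false_pos)

print(ppv(0.98, 0.90, 0.01))  # ~0.09: only about 9% of positives are true
```

With 99% of the population disease-free, the 10% false-positive rate swamps the true positives, so most positive screens are wrong.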
How could randomization help eliminate length-time bias?
Random assignment to screening vs no screening balances aggressive and slow diseases between groups.
Why can high internal validity in an RCT still yield misleading external conclusions?
Trial participants may differ from real-world patients (e.g., healthier, younger).
What shared concept underlies “per-protocol analysis” and “overdiagnosis bias”?
Both selectively include people who appear to do better, artificially inflating observed benefit.