Week 3: Questions ;)

Critical understanding deck


37 Terms

1. When is an experimental study more appropriate than an observational one?

When the researcher can ethically manipulate the exposure to test causality; experimental designs (RCTs) reduce confounding by randomization.

2. How does randomization strengthen causal inference?

It balances both known and unknown confounders across groups, making any observed outcome differences more likely due to the intervention.
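
The balancing claim above can be seen in a quick simulation. This is an illustrative sketch (the 1,000 participants, the age confounder, and the seed are all made up): age is never used in the assignment, yet the two arms end up with nearly identical age distributions.

```python
import random
import statistics

random.seed(42)

# Simulate 1000 participants with a hypothetical confounder (age).
ages = [random.gauss(50, 10) for _ in range(1000)]

# Simple randomization: each participant independently assigned to an arm.
arms = [random.choice(["treatment", "control"]) for _ in ages]

treat_ages = [a for a, arm in zip(ages, arms) if arm == "treatment"]
ctrl_ages = [a for a, arm in zip(ages, arms) if arm == "control"]

# Randomization balances the confounder on average, even though age
# was never measured or used when assigning arms.
print(round(statistics.mean(treat_ages), 1))
print(round(statistics.mean(ctrl_ages), 1))
```

The same logic applies to confounders nobody thought to measure, which is what observational adjustment cannot offer.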

3. Why might randomization fail to eliminate bias in practice?

Poor allocation concealment, small sample size, or differential loss to follow-up can unbalance confounders and reintroduce bias.

4. How does allocation concealment differ from blinding?

Allocation concealment prevents selection bias before assignment; blinding prevents measurement bias after assignment.

5. Why is blinding important for subjective outcomes?

Knowledge of treatment can influence how participants report symptoms or how investigators measure outcomes.

6. What is the trade-off between internal and external validity in an RCT?

Highly controlled studies maximize internal validity but may limit generalizability (external validity) to real-world settings.

7. Why is “intention-to-treat” analysis preferred?

It maintains comparability created by randomization and preserves real-world effectiveness, even with non-adherence or crossover.

8. When might a per-protocol analysis be justified?

When you want to estimate biological efficacy among those who fully followed the assigned treatment, knowing it may sacrifice randomization balance.

9. Why use a cluster RCT rather than an individual RCT?

When the intervention operates at a group level (e.g., school, clinic) or contamination between individuals would bias results.

10. What potential problem arises from cluster randomization?

Fewer independent units reduce statistical power and may create intracluster correlation that requires design adjustment.

11. Why conduct a crossover trial?

Each participant acts as their own control, reducing confounding—but only works if the disease and intervention effects are reversible.

12. What design feature of the Salk polio trial increased its validity?

Double-blind, placebo-controlled randomization of over a million children minimized bias and ensured reliable causal inference.

13. What threatens internal validity most in an RCT?

Differential loss to follow-up or non-comparable measurement across arms.

14. If an RCT finds no effect, how can you tell whether the study was underpowered?

Check expected effect size, α, β, and sample size; a small n or rare outcome may produce a false negative despite a real effect.

15. Why might an RCT with perfect internal validity still lack policy impact?

Limited external validity—participants or conditions differ from the populations or settings where policy would apply.

16. Why use survival analysis instead of a simple proportion of deaths?

It accounts for differing follow-up times and censored individuals, providing more accurate time-to-event estimates.

17. What does “censoring” mean in survival analysis?

The participant’s follow-up ended before experiencing the event (lost, withdrew, or study ended). They contribute information until censoring.

18. Why can’t censored individuals be counted as alive indefinitely?

Doing so would overestimate survival; they are included only up to their last known event-free time.

19. How is survival probability at time t estimated in a KM curve?

Multiply the conditional survival probabilities across all event times up to t: S(t) = ∏(1 − d_i/n_i), where d_i is the number of events and n_i the number at risk at the i-th event time. Each event time lowers S(t).
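
The product in that answer is straightforward to compute by hand. Below is a minimal sketch with made-up follow-up data (the times, event flags, and the kaplan_meier helper are all illustrative); it also shows how median survival falls out of the curve.

```python
# Made-up data: follow-up time per participant, and whether the event
# was observed (1) or the participant was censored (0).
times  = [2, 3, 3, 5, 6, 8, 9, 11]
events = [1, 1, 0, 1, 1, 0, 1, 0]

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    curve, s = [], 1.0
    for t in sorted({tt for tt, e in data if e == 1}):
        n = sum(1 for tt, _ in data if tt >= t)             # n_i: at risk at t
        d = sum(1 for tt, e in data if tt == t and e == 1)  # d_i: events at t
        s *= 1 - d / n                                      # S(t) = prod(1 - d_i/n_i)
        curve.append((t, s))
    return curve

curve = kaplan_meier(times, events)
print(curve)

# Median survival: first event time at which S(t) falls to 0.5 or below.
median = next(t for t, s in curve if s <= 0.5)
print(median)
```

Note that censored participants stay in the risk set (the denominator n) up to their censoring time, which is exactly the "they contribute information until censoring" point from card 17.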

20. What does a steep drop on a KM curve indicate?

A high event rate during that interval—rapid decline in survival probability.

21. Why are censoring marks shown as tick marks on a KM curve?

To visually separate loss to follow-up from true events; they indicate that information stops at that point, not that the person died.

22. How does the life-table method differ from KM estimation?

It groups time into intervals and assumes uniform event probability within each; KM uses exact event times.

23. What is the median survival time?

The smallest time t at which S(t) ≤ 0.5; by that point, half the population has experienced the event.

24. Interpret a hazard ratio (HR) of 0.6.

The treatment group has 40% lower instantaneous risk of the event compared to control at any given time.

25. What does HR > 1 imply?

Higher hazard (greater event risk) in the treatment/exposed group relative to control.

26. How does a hazard ratio differ from relative risk?

HR compares instantaneous risks over time; RR compares cumulative probabilities over a fixed period.
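
One way to see the difference is the special case of constant hazards, where the HR equals the ratio of event rates and never changes, while the RR depends on how long follow-up runs. The hazard values below are made up for illustration.

```python
import math

# Hypothetical constant hazards (events per person-year).
h_treat, h_ctrl = 0.06, 0.10
hr = h_treat / h_ctrl  # constant over time: 0.6

def cumulative_risk(h, t):
    """Cumulative event probability by time t under a constant hazard h."""
    return 1 - math.exp(-h * t)

# RR drifts toward 1 as follow-up lengthens, even though HR stays 0.6:
for t in (1, 5, 20):
    rr = cumulative_risk(h_treat, t) / cumulative_risk(h_ctrl, t)
    print(t, round(rr, 3))
```

Intuitively, with long enough follow-up nearly everyone in both groups eventually has the event, so cumulative risks converge while the instantaneous comparison does not.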

27. If survival curves cross midway, what does that suggest?

Treatment effects vary over time and hazards are not proportional, so a single summary HR is misleading; an early benefit may reverse into later harm even though the curves differ at both ends.

28. Why is the proportional hazards assumption important?

Many survival models (like Cox regression) rely on constant relative hazards over time for valid inference.

29. How is the log-rank test used?

To statistically compare two or more KM survival curves; it assesses whether observed event patterns differ beyond chance.
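
The test works by building, at each distinct event time, the observed and expected event counts for one group under the null of identical survival. A minimal two-group sketch (the follow-up data and the logrank_statistic helper are invented for illustration):

```python
# (time, event) pairs per group; 1 = event observed, 0 = censored.
group_a = [(6, 1), (7, 0), (10, 1), (15, 1), (19, 0)]
group_b = [(4, 1), (5, 1), (8, 1), (9, 1), (12, 0)]

def logrank_statistic(a, b):
    """Log-rank chi-square statistic (1 df) comparing two groups."""
    data = [(t, e, 0) for t, e in a] + [(t, e, 1) for t, e in b]
    event_times = sorted({t for t, e, _ in data if e == 1})
    observed_a = expected_a = variance = 0.0
    for t in event_times:
        n = sum(1 for tt, _, _ in data if tt >= t)            # at risk, total
        n_a = sum(1 for tt, _, g in data if tt >= t and g == 0)
        d = sum(1 for tt, e, _ in data if tt == t and e == 1)  # events at t
        d_a = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 0)
        observed_a += d_a
        expected_a += d * n_a / n          # expected group-A events under H0
        if n > 1:
            variance += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    return (observed_a - expected_a) ** 2 / variance

print(round(logrank_statistic(group_a, group_b), 2))
```

The resulting statistic is compared against a chi-square distribution with one degree of freedom; a large value means the event patterns differ beyond chance.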

30. What does α = 0.05 represent?

A 5% chance of falsely detecting an effect when none exists (Type I error).

31. What does β = 0.20 represent?

A 20% chance of missing a true effect (Type II error).

32. If you lower α without changing n, what happens to power?

Power decreases—stricter significance thresholds require stronger evidence to reject the null.

33. If you increase power (e.g., 0.8 → 0.9), what happens to sample size?

It must increase; higher certainty needs more participants to detect the same effect.

34. How do event frequency and sample size interact?

Fewer expected events (low incidence) require a larger n to achieve the same power.

35. How does effect size influence power?

Larger true effects create larger group differences, increasing power; small effects are harder to detect.
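
The α/β/effect-size/n relationships in cards 30–35 can be made concrete with the classic normal-approximation sample-size formula for comparing two proportions. The outcome proportions below are invented for illustration, and real trials would use dedicated software, but the qualitative behavior is the point: halving the effect size inflates n dramatically, and raising power raises n.

```python
import math
from statistics import NormalDist  # inverse normal CDF in the stdlib

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided comparison of two
    proportions (normal-approximation formula; a sketch, not exact)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

print(sample_size_two_proportions(0.30, 0.15))              # large effect
print(sample_size_two_proportions(0.30, 0.25))              # small effect: much bigger n
print(sample_size_two_proportions(0.30, 0.15, power=0.90))  # more power: bigger n
```

Note also the event-frequency point from card 34: rare outcomes mean small p1 and p2, which shrinks the (p1 − p2) denominator and pushes n up.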

36. Why is the observed 95% VE in the Pfizer trial associated with extremely high power?

The effect size (difference between vaccine and placebo infection rates) was so large that chance alone could not explain it—near-certain detection.

37. If an RCT has high power but finds no effect, what can you conclude?

An effect of the hypothesized size is unlikely; with sufficient power, a null result is meaningful evidence against that effect, though effects smaller than the study was powered to detect remain possible.