CLIN RES-L01-Research Design-(B)-Sample, Power & Bias


52 Terms

1. What is required for a robust study design?

A well-calculated sample size to ensure enough power to detect meaningful differences.

2. What is power analysis?

A method to ensure a study has adequate statistical power to detect an effect.

3. Why is power analysis important in clinical research?

It ensures enough participants are included to detect significant effects and produce reliable results.

4. What is sample size determination?

The process of calculating the number of participants required to detect a specific effect with a given confidence and power.

5. What factors influence sample size calculation?

Effect size, α level (significance level), power (1 - β), and variability in the population.

6. What is effect size?

The magnitude of the difference expected to be detected in a study.

7. How does effect size impact sample size?

Larger effect sizes typically require smaller sample sizes to detect.

8. What is an example of effect size in research?

Expecting a 10-point test score increase when comparing a new teaching method to the current one.
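A raw difference like this is usually standardized as Cohen's d: the difference in means divided by the pooled standard deviation. A minimal Python sketch (the score lists are made-up illustration data):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    s_a, s_b = stdev(group_a), stdev(group_b)
    pooled_sd = (((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical test scores: new teaching method vs. current one
new_method = [85, 90, 88, 92, 87]
current = [75, 80, 78, 82, 77]
print(round(cohens_d(new_method, current), 2))  # the 10-point gap in SD units
```

By Cohen's conventional benchmarks, d ≈ 0.2 is small, 0.5 medium, and 0.8 large.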

9. What is the α level (significance level)?

The probability of a Type I error, or falsely rejecting the null hypothesis.

10. What are common α values in research?

0.05 or 0.01.

11. What does an α level of 0.05 mean in a drug trial?

A 5% chance of concluding the drug is effective when it actually isn’t.

12. What is power (1 - β)?

The probability of correctly rejecting the null hypothesis and detecting an effect if it exists.

13. What is a typical power level in research?

80%, meaning an 80% chance of detecting a true effect.

14. What is an example of power in clinical trials?

An 80% power means there's an 80% chance of detecting a significant health improvement from a medication if it works.

15. How does variability affect sample size?

Higher variability in data requires larger sample sizes to detect meaningful effects.

16. What is an example of low variability in a study?

Blood pressure readings of 120-125 mmHg across individuals require fewer participants.

17. What is an example of high variability in a study?

Blood pressure readings of 110-140 mmHg across individuals require more participants.
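The two blood-pressure examples can be made concrete with the standard normal-approximation formula for comparing two means, n per group = 2((z_{1-α/2} + z_{1-β})·σ/Δ)², where required n grows with the square of the standard deviation. A sketch using only the Python standard library (the 5 mmHg target difference and the σ values are assumed for illustration):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size to detect a mean difference `delta` between two
    groups with common standard deviation `sigma`, using the normal
    approximation: n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)**2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Same 5 mmHg target difference; only the population variability changes
low_var = n_per_group(delta=5, sigma=2)   # tight 120-125 mmHg readings
high_var = n_per_group(delta=5, sigma=8)  # wide 110-140 mmHg readings
print(low_var, high_var)
```

Quadrupling σ multiplies the required sample size by sixteen, which is why variability dominates sample size planning.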

18. What is power analysis?

A statistical method used to determine the sample size needed to detect an effect with a specified level of confidence.

19. When is power analysis performed?

It can be performed a priori (before the study) or post hoc (after the study) to evaluate study design or power adequacy.

20. Why is power analysis important for small effect sizes?

It shows that a larger sample size is required to detect modest treatment effects.

21. What is a priori power analysis?

A method conducted during the planning stage to estimate the sample size needed for adequate power.

22. What are typical parameters for a priori power analysis?

Desired power level (commonly 80%) and α level (commonly 0.05).

23. What is an example of a priori power analysis?

A study planning to test a diet's effect on weight loss with a medium effect size (Cohen's d = 0.5), α = 0.05, and 80% power would need about 64 participants per group.
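This figure can be reproduced with the standardized-effect version of the formula above, n per group = 2((z_{1-α/2} + z_{1-β})/d)². A sketch using only the standard library; the normal approximation gives 63, slightly below the exact t-test-based answer of 64 that tools such as G*Power report:

```python
import math
from statistics import NormalDist

def n_per_group_d(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison with standardized effect size d
    (normal approximation; the exact t-based calculation is slightly larger)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n = n_per_group_d(0.5)  # medium effect, alpha = 0.05, 80% power
print(n)
```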

24. What is post hoc power analysis?

A method conducted after a study to evaluate whether it had sufficient power to detect the observed effects.

25. Why perform post hoc power analysis?

To understand if a study's non-significant results were due to insufficient power or the absence of an effect.

26. What is an example of post hoc power analysis?

A study finding no effect of exercise on blood pressure had only 50% power, suggesting it may have been underpowered rather than the effect being absent.
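Achieved power can be approximated from the observed effect size and the group sizes via power ≈ Φ(d·√(n/2) − z_{1-α/2}). A sketch (the effect size and group size below are hypothetical, chosen to reproduce roughly 50% power as in the example):

```python
from statistics import NormalDist

def achieved_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison of means, given the
    standardized effect size d and per-group n (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return z.cdf(d * (n_per_group / 2) ** 0.5 - z_alpha)

# Hypothetical exercise study: small observed effect (d = 0.35), 63 per group
p = achieved_power(0.35, 63)
print(round(p, 2))  # around 0.50: likely underpowered for an effect this small
```

Note that post hoc power computed from the observed effect is widely criticized as uninformative; confidence intervals around the effect estimate are generally preferred.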

27. How does power analysis assist in clinical trials?

It helps determine the sample size needed to detect statistically significant differences confidently.

28. What factors are critical in power analysis for clinical trials?

Expected effect size and variability in the population being studied.

29. What is selection bias?

Occurs when individuals included in a study are not representative of the larger population, leading to skewed results.

30. When does selection bias often arise?

During the recruitment phase in both randomized controlled trials (RCTs) and observational studies.

31. What is an example of selection bias in clinical research?

A trial for cardiovascular treatment with predominantly healthy participants won't apply to those with severe conditions.

32. How can selection bias be detected and prevented?

Through randomization, appropriate inclusion and exclusion criteria, and blinding during participant selection.

33. How can selection bias be rectified?

By using statistical techniques like propensity score matching to adjust for imbalances.
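The matching step of propensity score matching pairs each treated participant with the control whose estimated score is closest. A toy greedy 1:1 matcher; the scores here are assumed to come from an earlier modeling step (e.g. a logistic regression of treatment on baseline covariates, not shown):

```python
def greedy_match(treated_scores, control_scores):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.
    Each input maps participant id -> estimated propensity score; each
    control is used at most once."""
    available = dict(control_scores)
    pairs = {}
    for t_id, t_score in sorted(treated_scores.items()):
        if not available:
            break  # ran out of controls to match
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        pairs[t_id] = c_id
        del available[c_id]
    return pairs

# Hypothetical participants and scores
treated = {"T1": 0.80, "T2": 0.40}
controls = {"C1": 0.35, "C2": 0.78, "C3": 0.10}
print(greedy_match(treated, controls))  # {'T1': 'C2', 'T2': 'C1'}
```

Real implementations also enforce a caliper (a maximum allowed score distance) and check covariate balance after matching.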

34. What is performance bias?

Systematic differences in care provided to study groups, unrelated to the intervention being tested.

35. What is an example of performance bias?

A diabetes trial where the intervention group receives extra check-ups compared to the control group.

36. How can performance bias be detected and prevented?

Through blinding of participants and researchers and standardizing care across groups.

37. How can performance bias be rectified?

By using post hoc statistical adjustments, though this may not fully eliminate its impact.

38. What is detection bias?

Systematic differences in how outcomes are assessed or measured between study groups.

39. What is an example of detection bias?

Researchers expecting treatment efficacy may record more improvements in the treatment group than actually occurred.

40. How can detection bias be detected and prevented?

Through blinding of participants and outcome assessors, and using standardized and objective outcome measures.

41. How can detection bias be rectified?

By using intention-to-treat analysis or adjusting for confounding factors during statistical analysis.

42. What is attrition bias?

Occurs when participants are lost during a study, leading to an incomplete dataset and potentially non-representative groups.

43. What can attrition bias result in?

Distorted study results if dropout rates differ significantly between groups.

44. What is an example of attrition bias?

In a mental health exercise study, higher dropout rates in the intervention group could skew results.

45. How can attrition bias be detected and prevented?

By using intention-to-treat (ITT) analysis and monitoring reasons for dropout.

46. How can attrition bias be rectified?

Through imputation methods like last observation carried forward or multiple imputation.
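Last observation carried forward (LOCF) can be sketched in a few lines; `None` marks a missed visit (the weekly scores are illustrative data):

```python
def locf(values):
    """Last observation carried forward: replace each missing value (None)
    with the most recent observed value; leading Nones stay missing."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# A dropout's weekly symptom scores with missing follow-up visits
weekly = [12, 10, None, None, 8, None]
print(locf(weekly))  # [12, 10, 10, 10, 8, 8]
```

LOCF assumes participants stay at their last measured level after dropout, which can understate change; multiple imputation is usually preferred when that assumption is doubtful.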

47. What is observer bias?

When a researcher's beliefs, expectations, or experiences influence how they record or interpret data.

48. How can observer bias be detected and prevented?

By blinding investigators and participants and providing standardized training for data collection.

49. How can observer bias be rectified?

Using post-study analysis and sensitivity testing to detect and adjust for bias.

50. What is recall bias?

Bias that arises in retrospective studies when participants struggle to accurately remember past events or experiences.

51. How can recall bias be detected and prevented?

By using prospective study designs and objective measures like medical records or biological samples.

52. How can recall bias be rectified?

Using statistical techniques like sensitivity analysis to address potential bias.