N2NN3 - EIDM: Critical Appraisal of Intervention Studies Exercise


105 Terms

1
New cards

NCCMT Online Learning Module - Critical Appraisal of an Intervention Study

2
New cards

Evidence-Informed Decision Making

- About applying the best available evidence to answer a specific question

3
New cards

Intervention Studies

- Researchers conduct intervention studies to determine the effect of an intervention on a population

Ex:

- Does using multiple communication channels for a boil water advisory increase the proportion of the population that refrains from drinking untreated water?

- Does giving away free bicycle helmets in schools increase the number of school children who wear bike helmets when riding their bikes?

- Does ginseng prevent common colds?

- Can an intensive educational program reduce rates of teen pregnancy?

4
New cards

6S Pyramid

- Single studies of interventions are at the bottom of the 6S Pyramid as they are not synthesized or pre-appraised forms of evidence

- You would only look at individual studies if you had not found evidence from sources higher in the pyramid

5
New cards

PROGRESS+ Framework

- Outlines factors for equity considerations, including place of residence, race/ethnicity/culture/language, occupation, gender/sex, religion, education, socioeconomic status, and social capital

6
New cards

#1: Are the Results Valid? - Considerations for Critical Appraisal of Intervention Studies

- Did the trial address a clearly focused issue?

- Was the assignment of participants to treatments randomized?

- Were participants, health workers and study personnel "blind" to treatment?

- Were the groups similar at the start of the trial?

- Aside from the experimental intervention, were the groups treated equally?

- Were all participants who entered the trial properly accounted for at its conclusion?

7
New cards

#2: What are the Results? - Considerations for Critical Appraisal of Intervention Studies

- How large was the treatment effect?

- How precise was the treatment effect?

8
New cards

#3: How Can I Apply the Results? - Considerations for Critical Appraisal of Intervention Studies

- Can the results be applied in your context? Or to the local population?

- Were all clinically important outcomes considered?

- Are the benefits worth the harms and costs?

9
New cards

Randomized Controlled Trials

- Where random assignment allows for known and unknown determinants of outcome to be evenly distributed among the groups

- It is the most appropriate design to answer intervention questions

- A randomized controlled trial allows you to be more confident that any differences in the outcome are more likely due to the actual intervention than the underlying differences in the group

- In other words, randomized trials have the greatest ability to control for confounders or bias

*NOT always feasible for intervention or prevention studies (ex. public health)

10
New cards

Are the Results of the Trial Valid? - CASP: 11 Questions to Help You Make Sense of a Trial

Screening Questions:

Did the trial address a clearly focused issue?

- Yes, can't tell, no

An issue can be ‘focused’ in terms of:

- The population studied

- The intervention given

- The comparator given

- The outcomes considered

Was the assignment of patients to treatments randomized?

- Yes, can't tell, no

- How was this carried out?

- Was the allocation sequence concealed from researchers and patients?

Were all of the patients who entered the trial properly accounted for at its conclusion?

- Yes, can't tell, no

- Was follow up complete?

- Were patients analyzed in the groups to which they were randomized?

- Was the trial stopped early?

Detailed Questions:

Were patients, health workers and study personnel ‘blind’ to treatment?

- Yes, can't tell, no

- Were the patients

- Were the health workers

- Were the study personnel

Were the groups similar at the start of the trial?

- Yes, can't tell, no

- In terms of other factors that might affect the outcome such as age, sex, social class

Aside from the experimental intervention, were the groups treated equally?

- Yes, can't tell, no

11
New cards

What are the Results? - CASP: 11 Questions to Help You Make Sense of a Trial

How large was the treatment effect?

- What outcomes are measured?

- Is the primary outcome clearly specified?

- What results were found for each outcome?

How precise was the estimate of the treatment effect?

- What are its confidence limits?

12
New cards

Will the Results Help Locally? - CASP: 11 Questions to Help You Make Sense of a Trial

Can the results be applied to the local population?

- Yes, can't tell, no

- Do you think that the patients covered by the trial are similar enough to your population?

- How do they differ?

Were all clinically important outcomes considered?

- Yes, no

- If not, does this affect the decision?

- Is there other information you would like to have seen?

Are the benefits worth the harms and costs?

- Yes, no

- Even if this is not addressed by the trial, what do you think?

13
New cards

Evaluation of studies of treatment or prevention interventions. Part 1

14
New cards

Are the Results of the Study Valid?

- This question considers whether the results reported in the study are likely to reflect the true size and direction of treatment effect

- Was the research conducted in such a way as to minimize bias and lead to accurate findings, or was it designed, conducted, or analyzed in such a way as to increase the chances of an incorrect conclusion?

Primary Questions:

- Was the assignment of patients to treatments randomized, and was the randomization concealed?

- Was follow up sufficiently long and complete?

- Were patients analyzed in the groups to which they were initially randomized?

Secondary Questions:

- Were patients, clinicians, outcome assessors, and data analysts unaware of (blinded to or masked from) patient allocation?

- Were participants in each group treated equally, except for the intervention being evaluated?

- Were the groups similar at the start of the trial?

15
New cards

What Were the Results?

- Once you have determined that the results are valid, it is important to gain an understanding of what the results really mean

- If the new treatment is shown to be effective, how large is the effect?

- Is the effect clinically important?

- How precise is the treatment effect (another way of asking how likely it is that the effect is real and not a result of the play of chance)?

- The precision of a result is related to whether the study involved large numbers of people (which increases precision) or small numbers (which reduces precision)

16
New cards

Will the Results Help me in Caring for my Patients

- Firstly, you have to decide if the patients participating in the study are sufficiently similar to your patients, or whether there is a good reason why it would be inappropriate to apply the results to your patients

- Secondly, are there risks or harms associated with the treatment, which might outweigh the benefits?

17
New cards

Allocation Concealment

- The clinician recruiting patients to a study is unaware of the treatment group to which the next patient will be allocated

18
New cards

Intention to Treat Analysis

- Patients should be analyzed in the groups to which they were originally randomized regardless of whether they received or completed the allocated treatment, or even if they received the wrong treatment

19
New cards

Co-Interventions

- Because randomization should ensure that the only systematic difference between study groups is the treatment in question, it is important that this principle is not undermined by extra care given to one group and not another

20
New cards

After Validity Consideration Questions

- What is the size of the effect?

- Is the effect of sufficient clinical significance for you to want to use the intervention?

- In which patients?

21
New cards

Validity - What Were the Results?

- How large was the treatment effect?

- How precise is the estimate of treatment effect?

22
New cards

Validity - Will the Results Help Me in Caring for my Patients

- Are my patients so different from those in the study that the results don't apply?

- Is the treatment feasible in our setting?

- Were all clinically important outcomes (harms as well as benefits) considered?

23
New cards

Dichotomous Outcomes

- Yes or no

- Dead or alive

- Healed or not healed

- Studies that report dichotomous outcomes allow you to compare rates (ex. 49% of school children wearing bicycle helmets in the intervention group versus 20% in the control group)

- These rates can then be expressed in other ways, such as absolute risk difference, relative benefit increase (or the converse, relative risk reduction) or number needed to treat (or the number needed to harm)

24
New cards

Continuous Outcomes

- Length of stay

- Daily intake of food

- Results of studies that report continuous outcomes allow you to compare means (ex. participants in the intervention group walked 6000 steps per day versus 5000 steps per day in the control group)

- Results are often reported as mean differences, which are simply the differences between the means for each group

25
New cards

CER

- Control event rate

26
New cards

EER

- Experimental event rate

27
New cards

Relative Risk Reduction (RRR)

- The proportional reduction in rates of bad outcomes between experimental and control participants in a trial and is calculated as (CER−EER)/CER
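A minimal Python sketch (with hypothetical event rates) shows the calculation:

```python
# Worked example with hypothetical rates: relative risk reduction.
# CER = control event rate, EER = experimental event rate.
cer = 0.20  # 20% of the control group experience the bad outcome
eer = 0.15  # 15% of the intervention group do

rrr = (cer - eer) / cer  # proportional reduction relative to control
print(f"RRR = {rrr:.2f}")  # 0.25, i.e. a 25% relative risk reduction
```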

28
New cards

Number Needed to Treat (NNT)

- This gives the reader an impression of the effectiveness of the intervention by describing the number of people who must be treated with the given intervention in order to prevent 1 additional bad outcome (or to promote 1 additional good outcome)

- The NNT is simply calculated as the inverse of the ARR, rounded up to the nearest whole number

29
New cards

Estimates of Effect

- Results of trials

30
New cards

Confidence Intervals (CIs)

- A statistical device used to communicate the magnitude of the uncertainty surrounding the size of a treatment effect; in other words, they represent the size of the neighborhood

- If this range is wide, our estimate lacks precision, and we are unsure of the true treatment effect

- Alternatively, if the range is narrow, precision is high, and we can be much more confident

- The sample size used in a trial is an important determinant of the precision of the result; precision increases with larger sample sizes

- If the CI of an odds ratio or a relative risk includes 1, there is no statistically significant difference between treatments

- Conversely, if the CI of a risk or mean difference includes zero, the difference is not statistically significant

- A statistical way to describe our level of certainty or uncertainty about that estimate

- A 95% confidence interval represents the numerical range within which we are 95% certain that the true value of the effect lies

- In other words, if we conducted the study in a similar population, 95 times out of 100 the value would fall within the confidence interval

31
New cards

Absolute Risk Reduction (ARR)

- CER - EER

- Tells us how much of the effect is a result of the intervention itself
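Continuing the hypothetical rates from the RRR card, the ARR and the NNT (the inverse of the ARR, rounded up) can be sketched as:

```python
import math

# Hypothetical rates: ARR = CER - EER; NNT = 1/ARR, rounded up
# to the nearest whole number as the card describes.
cer = 0.20
eer = 0.15

arr = cer - eer           # absolute risk reduction: 0.05
nnt = math.ceil(1 / arr)  # 20 people treated to prevent 1 additional bad outcome
print(arr, nnt)
```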

32
New cards

Focus a Question

- The most common way to focus a question regarding an intervention is to use PICO – Identifying Participants, Interventions, Comparisons and Outcomes of interest

Questions:

- What question is being addressed in the study? (What population, intervention and outcomes were the authors interested in?)

- Does this intervention address the research question?

- An intervention must address a specific and focused question. Is the question guiding the study too narrow or too broad, or is it logical?

33
New cards

Randomized Controlled Trials

- Randomized trials are considered the highest level of evidence for a single study

- Since the intervention should be the only difference between the intervention and control groups, randomized trials provide the strongest evidence that differences in outcomes are due to the intervention and not another factor (keep in mind that not all questions can be subjected to a randomized trial due to ethical or practical considerations)

34
New cards

Randomization

- To ensure that the group receiving the intervention and the control group are as similar as possible

- Randomization helps ensure that known and unknown characteristics that might affect the outcome are balanced between groups

- The only way to achieve balance is through randomization

- Some methods of randomization are susceptible to bias, which may introduce error into the study results

- True randomization is achieved with a computerized random number generator

- Quasi-randomization: Can lead to bias if there are any systematic differences

35
New cards

Participants Included in the Final Analysis - Intention to Treat

- Excluding dropouts from the intervention group or adding dropouts to the control group would make the intervention look much more effective than it really is compared to the control

- This criterion helps ensure that participants will be kept in the analysis of their original group assignment regardless of whether they discontinue the treatment

- Researchers call this "intention to treat" analysis

- Dropouts are included by substituting either the baseline measurements or the "last observation carried forward" for the final outcome measurement

36
New cards

Blinding (masking)

- Used to describe whether or not people involved in the study (participants, researchers, clinicians, etc.) know whether participants have been assigned to the active intervention group or the control

- Research reports sometimes use the terms "single," "double" or "triple" blinded, but it is more important to specify who was blinded

- In public health studies, it is sometimes not possible to achieve blinding for all groups

- Participants: If participants know whether they are in the intervention group or the control group, they may consciously or unconsciously bias their outcomes

- Health workers: If the clinical or health staff involved with participants know which group their patients are in, they may consciously or unconsciously alter their treatment plan, provide additional care or heighten their vigilance for good or bad outcomes

- Study personnel: Distortion of a measurement may be more likely if an individual required to do the measurement (e.g., blood pressure) knows the group allocation and has a belief about the likely effectiveness of the intervention

- Process of withholding information about treatment allocation from those who could potentially be influenced by this information

- Most commonly used and effective in RCTs to minimize biases

- There is no clear-cut distinction between single, double, and triple blinding; in future, these terms need to be clearly distinguished

*Unblinded individuals can systematically bias trial findings through conscious or subconscious mechanisms

37
New cards

Similarity Between Groups

- Before the intervention begins, we want to know if there are differences between the groups that could potentially explain differences seen in outcomes at the end

- Randomization should ensure that characteristics are relatively evenly distributed

- Some imbalances arise from a sample size that is too small; others occur by chance

- Researchers will often provide the "adjusted" results, where the adjustment takes into account baseline differences, along with the "unadjusted" results

38
New cards

Equal Treatment of Groups

- If the control group managed to get an additional intervention, it would reduce the potential of seeing a difference in outcomes between it and the intervention group

- If the intervention group managed to get additional care or information, it could magnify the potential differences in outcomes

39
New cards

Size of Treatment Effect

- The benefits and harms of any intervention may be measured by multiple outcomes

- Outcomes may be dichotomous or continuous

- An estimate for treatment effect is only statistically significant if the 95% confidence interval range excludes the value where there is no difference between groups

- If the range for the 95% confidence interval includes this value, then you cannot exclude the possibility that the true value is no difference

- To determine the value for no difference, consider the type of data

- For dichotomous data reported as a relative risk or odds ratio, there is no statistically significant difference between groups if the 95% confidence interval includes 1.0

- For continuous data reported as a mean difference, there is no statistically significant difference between groups if the 95% confidence interval includes 0.0

- Statistical significance can also be shown as the p-value

- By convention, the outcome is statistically significant if the p-value is below 0.05.
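The no-difference check described above can be expressed as a small Python sketch (the function name and CI values are illustrative, not from the module):

```python
# Sketch: an effect is statistically significant only if its 95% CI
# excludes the no-effect value (1.0 for RR/OR, 0.0 for a mean difference).
def significant(ci_low, ci_high, no_effect_value):
    return not (ci_low <= no_effect_value <= ci_high)

print(significant(0.60, 0.90, 1.0))  # RR CI excludes 1.0 -> True
print(significant(-0.3, 1.2, 0.0))   # mean-difference CI includes 0.0 -> False
```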

40
New cards

Precision of Treatment Effect

- Precision can only be determined by looking at the confidence interval

- If the confidence interval is wide, with a large difference between the numbers at either end of the range, the estimate of true effect lacks precision and we are unsure about the treatment effect

- If the confidence interval is narrow and the two numbers at either end of the range are relatively close, precision is high and we can be more confident in the results

- Larger sample sizes produce more precise results, so be wary of (i.e. less confident in) studies with small sample sizes and large confidence intervals

41
New cards

Applicability of Results

- To judge the generalizability of the study, consider how similar or different the study participants are to your own patients, clients or situation

Consider:

- Health care systems

- Estimated costs to deliver the intervention

- Skills required to deliver the intervention

- Availability of special equipment and staff resources

- Likely acceptability to your community

42
New cards

Consideration of Unfavorable Outcomes

- Researchers may use several different outcomes to evaluate the effects of a treatment or intervention in order to identify the potential benefits as well as the harms of the intervention

43
New cards

Benefits vs. Harms & Costs

- Even if a finding is statistically significant, you need to decide if it is clinically meaningful

- The smallest possible effect size (i.e. the lower end of the confidence interval) can help you decide if an intervention would still be worth doing if the effect is small

- Researchers must also always look for evidence of harm, even when the trial sample size is small

- Systems may also question expenses related to treatments and may ask for an economic (cost/benefit) analysis that may or may not have been included in the study

- When critically appraising an intervention or prevention study, consider whether the researchers examined the most relevant costs and benefits for the intervention and situation

44
New cards

Certificate

45
New cards

Cullum et al. (2008) - pp. 67-91, 104-133

46
New cards

Critical Appraisal Questions for Studies of Treatment or Prevention Interventions

Are the Results of the Study Valid:

- Was the assignment of patients to treatments randomized?

- Was the randomization concealed?

- Was the follow-up sufficiently long and complete (how long patients were followed up in order to see what happens to them as a result of their treatment; patients dropping out of a study before they reach the endpoint)?

- Were patients analyzed in the groups to which they were initially randomized?

- Were patients, clinicians, outcome assessors and data analysts unaware of (blinded to or masked from) patient allocation?

- Were participants in each group treated equally, except for the intervention being evaluated?

- Were the groups similar at the start of the trial?

What are the Results (considers the size of the treatment effect and whether the estimate of the treatment effect is precise):

- How large was the treatment effect?

- How precise is the estimate of treatment effect?

Can I Apply the Results in Practice:

- Are my patients so different from those in the study that the results don't apply?

- Is the treatment feasible in my setting?

- Were all clinically important outcomes (harms as well as benefits) considered?

47
New cards

Sensitivity Analysis

- Recalculates the results using different assumptions about what might have happened to the lost patients

48
New cards

Intention-to-Treat Analysis

- Patients should be analyzed in the groups to which they were originally randomized regardless of whether they received or completed the allocated treatment, or even if they received the wrong treatment

49
New cards

Selection Bias

- When the investigators (who are likely to want the intervention to be effective) have control over who goes into each group and might choose the intervention group participants on the basis of their likelihood of experiencing a positive outcome

50
New cards

Sample Size of a Trial

51
New cards

Co-Intervention

- Because randomization should ensure that the only systematic difference between study groups is the treatment in question, it is important that this principle is not undermined by extra care given to one group and not the other

52
New cards

Independent Variable

- The intervention

- Those under the control of the investigator

53
New cards

Dependent Variable

- The outcomes

- Those that may be influenced by the independent variable

54
New cards

Nominal Variables

- Categorical

- Simply names of categories

- Variables that have only two possible values are dichotomous

- May have several possible values

- The number of categories is determined by the researcher

- No hierarchy of data

55
New cards

Ordinal Variables

- Sets of ordered categories

- The size of the intervals between categories is not known

56
New cards

Interval Variables

- Consist of an ordered set of categories, with the additional requirement that the categories form a series of intervals that are all exactly the same size

- Does not have an absolute zero point that indicates the complete absence of the variable being measured

- Ratios of the values are not meaningful

57
New cards

Ratio Variable

- Has all features of an interval variable but adds an absolute zero point, which indicates the complete absence (none) of the variable being measured

58
New cards

Measurements in Health Care

- Influenced by the "real" or true value of the variable being measured, the variability of the measure, the accuracy of the instrument with which we are measuring, and the position of the patient or the skills and expectations of the person doing the measurement

- Objective measures: Less likely to be influenced by human error or bias

- Subjective measures: May be influenced by the perception of the individual doing the measuring

59
New cards

Reliability

- The degree to which a measure gives the same result twice (or more) under similar circumstances, and it may relate to the measure being used or the people using it

- The extent to which readings are similar from the same person (intra-rater; within-rater reliability) or two different people (inter-rater; between-rater reliability)

60
New cards

Validity

- The ability of a measurement tool to accurately measure what it is intended to measure

- Requires comparison of a given measure with a gold standard, or the best existing measure of the variable

61
New cards

Social Desirability Bias

- People's responses to questions may reflect their desire to under-report their socially unfavourable habits

62
New cards

Recall Bias

- Acknowledges that human memory is fallible

63
New cards

Continuous

- Studies that use continuous outcomes measures may compare the average values of the variable (ex. mean or median after treatment)

- Often compare the mean (or median) values of the outcome at the end of the follow-up period or the average change in outcome for the intervention and control groups from the baseline to the end of the follow-up

- If the average values (or changes) differ between the groups, it suggests that there may be differences in effect between the intervention and the control conditions

64
New cards

Standardized Difference

- Has no units

- It simply expresses the effect of an intervention in terms of the number of SDs between the averages of the two groups

- Puts the average difference in the context of the amount of dispersion or variation

- Allow for comparison of the size of the treatment effect when different outcome measures are used and are therefore used in meta-analyses, which attempt to compare and combine the results of several studies measuring different outcomes
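A small Python sketch of one common form of standardized difference (the difference in means divided by a pooled SD; the data and the choice of pooled-SD variant are illustrative assumptions, not specified by the card):

```python
import statistics

# Hypothetical daily step counts in two groups.
intervention = [6100, 5900, 6200, 6000, 5800]
control      = [5100, 4900, 5200, 5000, 4800]

def pooled_sd(a, b):
    # Pooled sample SD, weighted by degrees of freedom.
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    na, nb = len(a), len(b)
    return (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5

# Standardized difference: number of SDs between the two group means (unitless).
d = (statistics.mean(intervention) - statistics.mean(control)) / pooled_sd(intervention, control)
print(round(d, 2))
```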

65
New cards

Discrete Measures

- The measure of effect when using discrete outcome compares the risk (i.e. proportion) experiencing an event in the intervention and control groups

- The risk of an event in the intervention group (Ri) is simply the proportion of people in the group who experience the event

Ri = a/(a+b)

Corresponding Risk:

Rc = c/(c+d)
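In Python, with a hypothetical 2x2 table of counts:

```python
# Hypothetical 2x2 table:
#                 event   no event
# intervention    a=20      b=80
# control         c=40      d=60
a, b, c, d = 20, 80, 40, 60

ri = a / (a + b)  # risk of the event in the intervention group: 0.20
rc = c / (c + d)  # risk of the event in the control group: 0.40
print(ri, rc)
```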

66
New cards

Normal

- The symmetrical bell-shaped curve when the values for a large sample are plotted

67
New cards

Skewed Distribution (non-normal distribution)

68
New cards

Interquartile Range

- The range between which the middle 50% of all the observations lie

69
New cards

Standard Deviation (SD)

- A measure of the average amount individual values differ from the mean of the group; the lower the SD, the smaller the spread of values

70
New cards

Normal Distribution Curve

71
New cards

Skewed Distribution Curve

72
New cards

Discrete vs. Continuous Measures

Discrete:

- Outcomes can be summarized as the percentage or proportion of people who experience an event during the follow-up period

- Express the probability or risk that a person in the group of interest experienced the event at some point during the follow-up period

Continuous:

- Often recast as discrete measures in evaluative studies, especially if there is a threshold above or below which there is a clinical difference

- Studies may focus on changes in scores or the percentage of patients with severe impairments

73
New cards

Relative Risk (RR)

- Risk Ratio (RR)

- The risk of patients in the intervention group experiencing the outcome divided by the risk of patients in the control group experiencing the outcome

- If the intervention and control conditions have the same effect, then, assuming that the groups are comparable in all other respects, the risk of the event will be the same in both groups (i.e. the top and bottom of the fraction will be the same, and the RR will be 1.0)

- If the risk of death is reduced in the intervention group compared with the control group, then the RR will be <1.0

- If the intervention is harmful, then the RR will be >1.0

- The further the RR is from 1.0, the greater the strength of association between the intervention and the outcome
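Using the hypothetical 2x2 counts from the earlier risk card, the RR calculation looks like:

```python
# Hypothetical 2x2 table: a/b = intervention events/non-events,
# c/d = control events/non-events.
a, b, c, d = 20, 80, 40, 60

rr = (a / (a + b)) / (c / (c + d))  # 0.20 / 0.40 = 0.5
print(rr)  # < 1.0: the event is less likely with the intervention
```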

74
New cards

Odds Ratio (OR)

- The odds of the event in the intervention group (a/b) divided by the odds of the event in the control group (c/d)

- An OR of 1.0 means there is no difference between groups, and an OR <1.0 means that the event is less likely in the intervention group than the control group
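With the same hypothetical 2x2 counts, the odds ratio works out as:

```python
# Hypothetical 2x2 table, as before.
a, b, c, d = 20, 80, 40, 60

odds_intervention = a / b  # 20/80 = 0.25
odds_control = c / d       # 40/60 ~ 0.667
or_ = odds_intervention / odds_control
print(round(or_, 3))  # 0.375 (< 1.0): event less likely in the intervention group
```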

75
New cards

RR & OR

- Measures the strength of association between an intervention and an outcome

- It is important to remember that the RR indicates the "relative" benefit of a treatment, not the "actual" benefit; in other words, it does not take into account the number of people who would have developed the outcome anyway

76
New cards

Absolute Risk Difference

- The impact of treatment is captured by the absolute risk difference or, when the risk is reduced, the absolute risk reduction (ARR)

- Calculated by simply subtracting the proportion of people experiencing the outcome in the intervention group from the proportion experiencing the outcome in the control group

77
New cards

Number Needed to Treat (NNT)

- Conveniently expresses the absolute effect of the intervention

- Calculated as 1 divided by the absolute risk difference

- Represents the number of patients who need to be treated to prevent one additional event and is a useful way of expressing clinical effectiveness (i.e. the more effective an intervention, the lower the NNT)

78
New cards

Risk Reduction

- Occurs when the risk of a bad event decreases

79
New cards

Benefit Increase

- Occurs when the risk of a good event increases

80
New cards

Risk Increase

- Occurs when the risk of a bad event increases

81
New cards

Benefit Reduction

- Occurs when the risk of a good event decreases

82
New cards

Sampling Distribution of Study Results

- Not all studies give the same results due to sampling error

- Standard error (SE)

83
New cards

Mean Difference or Relative Risk Reduction

- If we repeat the same evaluation several hundred times with different samples of the same number of patients, the result would not always be the same

84
New cards

Distribution of the Results

- Shape of the curve

85
New cards

Standard Error (SE)

- The amount of chance variation from the "true" effect is given by the measure of spread or standard deviation of this distribution, which, because it indicates the amount of random sampling error, is called the standard error (SE)

- The larger the SE, the more individual study results will vary from the true effect

86
New cards

Confidence Interval (CI) - 95%

- It provides a plausible range within which we are confident the true value falls

- The wider the CI, the less precise is our estimate of the treatment effect

- This precision depends on the spread of the sampling distribution measured by the SE

- This in turn depends on the sample size and the variability of what is being measured

- The smaller the number of patients in a trial or number of events observed, the greater will be the sampling error or the spread of the sampling distribution

- The greater the sampling error, the more likely it is that any one experiment will differ by chance from the true or average value, and so the wider CI

- If we increase the number of participants in a study or the number of people who are likely to experience an event such that the distribution becomes less spread out, individual study results will fall much closer to the true or average result, and the SE and the width of the CI will be reduced

- Can be used to give an idea of whether or not a treatment has any effect

- If the CI of an odds ratio or relative risk includes the value 1 (i.e. same odds or risk in treated and untreated groups), then we cannot be confident that a difference exists between the intervention and control conditions

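The CI logic above can be sketched numerically. A minimal illustration, using made-up trial counts and the standard log method for the SE of a relative risk (not numbers from the module):

```python
import math

# Hypothetical trial data (illustrative only):
# 15 of 100 treated patients and 30 of 100 control patients had the event.
a, n1 = 15, 100   # events / total, treatment group
c, n2 = 30, 100   # events / total, control group

rr = (a / n1) / (c / n2)   # relative risk

# Standard error of log(RR) (Katz log method)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)

# 95% CI: exponentiate log(RR) +/- 1.96 * SE
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({low:.2f}, {high:.2f})")
# The whole CI lies below 1, so we can be reasonably
# confident a difference exists between the groups.
```

With these numbers the CI runs from roughly 0.29 to 0.87; because it excludes 1, the result is consistent with a real treatment effect.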
87
New cards

Null Hypothesis

- Instead of trying to estimate a plausible range of values within which the true treatment effect is likely to lie (i.e. a CI), researchers often begin with a formal assumption that no effect exists

- The study is used to gather enough evidence to convince a neutral observer to reject the null hypothesis and accept the alternative hypothesis that the treatment does have an effect

- If the estimate of treatment effect is more than 1.96 SE above or below the null value, then the probability of this or a more extreme result occurring by chance when the null is, in fact, true is less than 5% (p<0.05)

- If the result is more than 2.58 SE above or below zero, then it is regarded as statistically significant at the 1% level (p<0.01)
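The 1.96 and 2.58 SE thresholds can be checked directly. A small sketch, using an assumed effect estimate and SE (illustrative numbers) and the normal CDF via `math.erf`:

```python
import math

def p_two_sided(z):
    """Two-sided p-value for a z statistic, using the normal CDF."""
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)

# Hypothetical estimate: treatment effect of 5 units with SE 2
effect, se = 5.0, 2.0
z = effect / se   # the estimate is 2.5 SEs above the null value of 0

p = p_two_sided(z)
print(f"z = {z:.2f}, p = {p:.4f}")
# z > 1.96, so p < 0.05 (significant at the 5% level);
# z < 2.58, so the result is not significant at the 1% level.
```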

88
New cards

Type I Error - The Risk of a False-Positive Result

- To reject the null hypothesis falsely (i.e. to say that a treatment effect exists when, in fact, the null is true)

- The risk or probability of this type of error is given by the p-value or statistical significance of the treatment effect

- The lower the p-value, the less likely it is that the result is a false positive and the lower the risk of a type I error

89
New cards

Type II Error - Risk of a False-Negative Result

- Where we wrongly accept the null hypothesis of no treatment effect

- This is a particular problem of small studies because they have more sampling error and so larger SEs

- The larger the SE, the harder it is to exclude chance, and therefore the greater the probability of falsely accepting the null hypothesis of no treatment effect

- In small studies, even large estimates of treatment effect do not provide sufficient evidence of a true effect (i.e. they are not statistically significant) because the SE is so large

- If a study is too small, the CIs can be so wide that they cannot really exclude a value indicating no effect

- When a study is undertaken, the sample size should be sufficiently large to ensure that the study will have enough power to reject the null hypothesis if a clinically important treatment effect exists
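The link between sample size, SE, and CI width can be shown with a short sketch (the standard deviation of 10 is an assumed, illustrative value):

```python
import math

sd = 10.0  # assumed standard deviation of the outcome (illustrative)

def ci_width(n):
    """Width of a 95% CI for a mean: 2 * 1.96 * SE, where SE = sd / sqrt(n)."""
    se = sd / math.sqrt(n)
    return 2 * 1.96 * se

# Quadrupling the sample size halves the SE, and so halves the CI width
print(f"n=25: width {ci_width(25):.2f}, n=100: width {ci_width(100):.2f}")
```

This is why small studies struggle to exclude the null value: their wide CIs often span "no effect" even when the point estimate looks large.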

90
New cards

Tests for Different Types of Outcome Measures

*Type of outcome measure also affects the type of statistical test used to determine the extent to which an estimate of treatment effect is due to chance

91
New cards

Continuous Measures

- The treatment effect is often calculated by measuring the difference in mean improvement between groups

- In these cases (if the data are normally distributed), a t-test is commonly used
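A two-sample t statistic for a difference in means can be computed with the standard library alone. A minimal sketch with made-up scores and a pooled-variance (Student) t-test, which assumes similar spread in the two groups:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical outcome scores in two independent groups (made-up data)
treat = [5, 6, 7, 8, 9]
control = [1, 2, 3, 4, 5]

n1, n2 = len(treat), len(control)

# Pooled variance combines the two sample variances, weighted by df
pooled_var = ((n1 - 1) * variance(treat)
              + (n2 - 1) * variance(control)) / (n1 + n2 - 2)
se_diff = sqrt(pooled_var * (1 / n1 + 1 / n2))

t = (mean(treat) - mean(control)) / se_diff
print(f"t = {t:.2f} on {n1 + n2 - 2} df")
# |t| = 4.0 exceeds the 5% critical value of about 2.31 for 8 df
```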

92
New cards

Categorical Variables

- When a study measures categorical variables and expresses results as proportions, then a X^2 (chi-square) test is used

- Assesses the extent to which the observed proportion in the treatment group differs from what would have been expected by chance if no real difference existed between the treatment and control groups

- If an odds ratio is used, the SE of the odds ratio can be calculated and, assuming a normal distribution, 95% CIs can be calculated and hypothesis tests can be conducted
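The chi-square test above can be worked by hand for a 2x2 table. A sketch with hypothetical counts (illustrative, not from the module):

```python
# Hypothetical 2x2 table: rows are groups, columns are events / no events
#                events  no events
# treatment        15       85
# control          30       70
observed = [[15, 85], [30, 70]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Counts expected if no real difference existed between groups
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

print(f"chi-square = {chi2:.2f}")
# 6.45 exceeds 3.84, the 5% critical value on 1 df, so p < 0.05
```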

93
New cards

Paired Analysis

- For normally distributed continuous measures, one can use the paired t-test

- For skewed continuous paired data, the Wilcoxon signed-rank test is available

*If the design of the comparison is paired or matched, the analysis must also be paired
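The paired t-test works on the within-pair differences rather than the raw groups. A minimal sketch with made-up before/after scores for the same patients:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical before/after scores for the same 5 patients (made-up data)
before = [10, 12, 9, 11, 13]
after = [8, 11, 7, 10, 11]

# The paired t statistic tests whether the mean within-pair change is zero
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)

se = stdev(diffs) / sqrt(n)
t = mean(diffs) / se
print(f"paired t = {t:.2f} on {n - 1} df")
# |t| is about 6.53, exceeding the 5% critical value of about 2.78 for 4 df
```

Note that analysing these data as two independent groups would ignore the pairing and waste the information that each patient serves as their own control.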

94
New cards

Clinical Significance

- Clinical significance is not the same as statistical significance: a statistically significant difference may still be too small to matter in practice

95
New cards

Allocation Concealment

- The clinician recruiting patients to a study should be unaware of the treatment group to which the next patient will be allocated

96
New cards

Random Allocation

- Random allocation to intervention groups remains the only method of ensuring that the groups being compared are on an equivalent footing at the beginning of a study, thus eliminating selection and confounding biases

- Allocation Concealment: Shields those involved in a trial from knowing upcoming assignments in advance. Focuses on preventing selection and confounding biases, safeguards the assignment sequence before and until allocation, and can always be successfully implemented

- Blinding: Concentrates on preventing study participants and personnel from determining the group to which participants have been assigned (which leads to ascertainment bias), safeguards the sequence after allocation, and cannot always be implemented. Keeping everyone unaware of patient allocation reduces bias. Double blinding is defined as blinding patients, clinicians, and outcome assessors. The term triple blinding is no longer used; it has been replaced with descriptions stating which of the groups were unaware of allocation

97
New cards

Relative Risk Reduction

- A measure of the effect of an intervention

- Defined as the proportional reduction in rates of harmful outcomes between experimental and control participants in a trial

- Has limitations because it fails to discriminate huge absolute effects of the intervention from those that are trivial

- Calculated as (CER-EER)/CER
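The RRR, and the related ARR and NNT, follow directly from the two event rates. A sketch with hypothetical rates (CER = control event rate, EER = experimental event rate; illustrative numbers):

```python
from math import ceil

# Hypothetical event rates (illustrative):
cer, eer = 0.30, 0.15

arr = cer - eer          # absolute risk reduction
rrr = (cer - eer) / cer  # relative risk reduction
nnt = ceil(1 / arr)      # number needed to treat (rounded up)

print(f"ARR = {arr:.2f}, RRR = {rrr:.0%}, NNT = {nnt}")
# A 50% RRR here corresponds to an ARR of 0.15: treat about 7
# patients to prevent one additional harmful outcome.
```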

98
New cards

Baseline Risk

- The risk of the outcome in patients entering the trial; the RRR ignores how rarely or commonly this outcome occurs, which is why it cannot distinguish large absolute effects from trivial ones

99
New cards

Absolute Risk Reduction

- The absolute arithmetic difference in rates of harmful outcomes between experimental and control patients

- ARR = CER - EER

- Unlike the RRR, the ARR preserves the baseline risk and so can discriminate between huge and trivial absolute effects

- Because the ARR is a dimensionless, abstract number that may be difficult to incorporate into clinical practice, we divide the ARR into 1 (or 100%) to generate the NNT

- If the ARR is large, only a small number of patients need to be treated to achieve one positive outcome (a small NNT)

100
New cards

NNTs

- Useful measure for making decisions about the effort expended with a particular intervention to achieve a single positive outcome

- Only useful for interventions that produce dichotomous outcomes (counts of the number of people who experience and do not experience an event)

- Should always be interpreted in the context of their precision

- Interpretation of NNTs must always consider the follow-up time associated with them

- NNTs will vary with baseline risk: The lower the baseline risk, the higher the NNT will be
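The last point can be illustrated with a short sketch: holding the RRR fixed at 50% (an assumed, illustrative value) and varying only the baseline risk changes the NNT dramatically.

```python
def nnt(cer, rrr=0.5):
    """NNT for a given baseline risk (CER) and an assumed fixed RRR."""
    eer = cer * (1 - rrr)  # experimental event rate implied by the RRR
    arr = cer - eer
    return 1 / arr         # number needed to treat

# High baseline risk -> large ARR -> small NNT; low baseline risk -> large NNT
print(f"CER 0.30: NNT {nnt(0.30):.1f}; CER 0.03: NNT {nnt(0.03):.1f}")
# With the same 50% RRR, about 7 patients must be treated at a baseline
# risk of 0.30, but about 67 at a baseline risk of 0.03.
```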