Comparing groups: study approach types
Case-control study: shows cases and controls are similar except for disease status. Key analysis uses odds ratios to see whether cases and controls have different exposure histories.
Cohort study: shows that exposed + unexposed are similar except for exposure status. Key analysis uses rate ratios to see whether exposed and unexposed have different rates of incident disease.
Experimental study: shows that individuals assigned to intervention and control groups are similar except for their assigned intervention. Key analysis uses rate ratios and other measures to see if intervention + control groups have different outcomes.
Hypotheses for stats tests
Designed to test for difference instead of sameness, so stat test questions are usually phrased in terms of differences. Examining the entire population is the best way to determine if a hypothesis is true. If sample data are not consistent with a statistical hypothesis, the hypothesis is rejected.
Null hypothesis (H0) = describes the expected result of a stats test if there is no difference b/w the 2 values being compared; any observed difference is PURELY CHANCE.
Alternative hypothesis (Ha) = describes the expected result if there is a difference, i.e., observations are influenced by some non-random cause.
Example = if we wanted to determine whether a coin was fair and balanced, H0 might be that half of flips result in heads and half in tails (P = 0.5). Ha might be that the number of heads + tails would be very different (P ≠ 0.5).
Example 2 = coin flipped 50x w/ 40 heads, 10 tails = reject H0 because the coin is probably not fair and balanced.
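A minimal sketch of Example 2 using scipy's exact binomial test; the numbers come from the coin example above, and the use of scipy here is only illustrative:

```python
# Exact binomial test: 40 heads out of 50 flips, H0: P(heads) = 0.5
from scipy.stats import binomtest

result = binomtest(k=40, n=50, p=0.5, alternative="two-sided")
print(result.pvalue)  # far below 0.05, so we reject H0: the coin is probably not fair
```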
Rejecting null hypothesis
Concluding that values are different by rejecting the claim that the values are not different.
CAN NEVER PROVE H0 TRUE, CAN ONLY FAIL TO REJECT IT.
Failing to reject H0 (often loosely described as "accepting" H0) = there is no evidence that the values are different, i.e., the values are close enough to be considered similar. Failing to reject H0 should never be taken as evidence that the values are the same, and the decision to reject/fail to reject is based on the likelihood that the result of a test was due to chance.
Alternative hypothesis
one-sided: claims that parameter is either larger or smaller than value given by null hypothesis
two-sided: claims that parameter is ≠ the value given by H0; direction does not matter.
Type I + II error
Type I = incorrectly rejecting H0, showing a difference when there is none; false positive; bias.
Type II = failing to reject H0 when you SHOULD reject H0, showing no difference when there is one; false negative.
Power
Probability of finding a difference b/w groups if one truly exists, i.e., the % chance you will be able to reject H0 if it is really false. Probability of NOT making a type II error.
Power = 1 − β, with high power being good; it must be calculated from estimates before doing the study in order to tweak the study design.
Increases w/ sample size, effect size (bigger diff b/w groups) and precision (smaller SD)
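A minimal sketch of a pre-study power/sample-size calculation for a two-group comparison, assuming statsmodels is available; the effect size and targets are illustrative, not from the notes:

```python
# Solve for the sample size per group needed for 80% power in a two-sample t-test
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # standardized difference b/w groups
                                   alpha=0.05,       # type I error rate
                                   power=0.80)       # 1 - beta
print(round(n_per_group))  # ~64 per group; a bigger effect size or larger n raises power
```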
P-values
All hypothesis tests use a p-value to weigh the strength of the evidence, i.e., how much the observed data disagree w/ H0. It is the probability of seeing a result at least as extreme as the one observed if H0 were true.
Small = more disagreement w/ H0, suggesting there is a difference + the groups being studied are not the same; reject H0. Strong evidence.
High = less disagreement w/ H0, suggesting no difference, as the groups studied are likely the same. Fail to reject H0. Weak evidence.
Close to 0.05 = marginal.
Determining P-value
Need to know the distribution of the test statistic under the assumption that the null hypothesis is true. The p-value is a number describing how likely it is that the observed data would have occurred by random chance if H0 were true; the cutoff for rejecting the null hypothesis (commonly 0.05) is arbitrarily defined.
It is a TOOL, not proof of difference. Context is required + study design must be evaluated. A lower value is often interpreted as a stronger relationship; however, statistical significance only means that it is unlikely that H0 is true. To understand the strength of a difference b/w 2 groups, researchers need to calculate the size of effects.
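A minimal sketch of turning a test statistic into a p-value using its null distribution, assuming scipy and made-up numbers (t = 2.1 with 30 degrees of freedom):

```python
# Two-sided p-value: area in both tails of the null t-distribution beyond |t|
from scipy.stats import t

t_stat, df = 2.1, 30
p_two_sided = 2 * t.sf(abs(t_stat), df)
print(p_two_sided)  # ~0.04, small enough to reject H0 at the 0.05 cutoff
```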
Effect size
Effect size is the magnitude of the difference b/w groups; absolute effect size is the difference b/w average/mean outcomes in 2 different intervention groups.
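A minimal sketch of an absolute effect size as the raw difference in group means; the outcome values below are purely illustrative:

```python
# Absolute effect size = difference b/w mean outcomes of two intervention groups
import numpy as np

intervention = np.array([120, 118, 115, 117, 119], dtype=float)
control = np.array([125, 127, 124, 126, 123], dtype=float)

absolute_effect = intervention.mean() - control.mean()
print(absolute_effect)  # -7.2: the intervention group's mean outcome is 7.2 units lower
```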
T value/T test
Assesses whether the means of 2 groups are statistically different from each other. The analysis is appropriate whenever you want to compare the means of 2 groups; the t value is a test statistic that measures the difference b/w the observed sample and the hypothesized population.
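A minimal sketch of a two-sample t-test using scipy; the two groups below are illustrative data, not from the notes:

```python
# Compare the means of two independent groups
from scipy.stats import ttest_ind

group_a = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0]
group_b = [4.2, 4.0, 4.8, 4.5, 3.9, 4.4]

t_stat, p_value = ttest_ind(group_a, group_b)
print(t_stat, p_value)  # a large |t| with a small p suggests the group means truly differ
```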
Confidence intervals
Provide information about the expected value of a measure in the source population based on the value of the measure in the study population. The width of the interval is related to the sample size of the study: a larger sample size will yield a narrower confidence interval (good!).
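A minimal sketch showing how the 95% confidence interval for a mean narrows as the sample size grows; the simulated data and parameters are illustrative:

```python
# Compare CI width for a small vs. large sample drawn from the same population
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (20, 200):
    sample = rng.normal(loc=100, scale=15, size=n)
    ci = stats.t.interval(0.95, n - 1, loc=sample.mean(), scale=stats.sem(sample))
    print(n, ci)  # the larger sample gives a narrower interval around the mean
```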
Measures of association
Some of the most common types of comparative analysis are the odds ratio (OR), used for case-control studies, and the relative risk / rate ratio (RR), used for cohort studies.
RR
Relative risk is a ratio of probabilities. It compares the incidence or risk of an event among those w/ a specific exposure to those who were not exposed, being based on the incidence of the event given that we already know study participants' exposure status.
E.g. = a horse wins 2/5 races, so its probability of winning is 40%.
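A minimal sketch of computing relative risk from a 2x2 table; the counts are hypothetical, not from the notes:

```python
# 2x2 table:            disease   no disease
#   exposed                30          70
#   unexposed              10          90
risk_exposed = 30 / (30 + 70)      # 0.30
risk_unexposed = 10 / (10 + 90)    # 0.10
relative_risk = risk_exposed / risk_unexposed
print(relative_risk)  # 3.0: exposed participants have 3x the risk of disease
```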
OR
Odds compare events w/ nonevents. The OR compares presence to absence of exposure given that we already know about a specific outcome; it can be used to describe the results of case-control as well as retrospective cohort studies.
E.g. if a horse wins 2/5 races, the odds of winning are 2:3.
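A minimal sketch of the odds ratio, reusing the same hypothetical 2x2 counts as the RR sketch above:

```python
# Odds = events : nonevents within each exposure group
odds_exposed = 30 / 70      # odds of disease among the exposed
odds_unexposed = 10 / 90    # odds of disease among the unexposed
odds_ratio = odds_exposed / odds_unexposed
print(round(odds_ratio, 2))  # ~3.86, larger than the RR of 3.0 from the same table
```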
RR vs OR
Usually comparable in magnitude when the disease studied is rare (e.g., cancer), but OR can overestimate + magnify risk when the disease is more common + should be avoided esp. when RR can be used.
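A minimal sketch contrasting rare vs. common outcomes with made-up counts, showing the OR tracking the RR when the outcome is rare but inflating when it is common:

```python
def rr_and_or(a, b, c, d):
    """a, b = exposed w/ and w/o outcome; c, d = unexposed w/ and w/o outcome."""
    rr = (a / (a + b)) / (c / (c + d))
    or_ = (a / b) / (c / d)
    return rr, or_

print(rr_and_or(2, 998, 1, 999))   # rare outcome:   RR = 2.0, OR ~ 2.0 (close)
print(rr_and_or(40, 60, 20, 80))   # common outcome: RR = 2.0, OR ~ 2.67 (inflated)
```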
Key characteristics of experimental studies
Compare outcomes in participants assigned to an intervention or control group, assessing causality. Key statistical measure is efficacy.
Efficacy vs effectiveness
Efficacy is a narrower definition that means how well something works in an ideal or controlled setting like a clinical trial while effectiveness describes how well it works under real-world conditions.
E.g. condom efficacy 85-90%, but effectiveness 69% b/c some people use it incorrectly.
Analysis: vocabulary
Treatment-received approach = limits analysis to participants who were fully compliant with their assigned intervention
Treatment-assigned approach = includes all participants even if they were not fully compliant w/ assigned intervention
NNT = number needed to treat, the expected number of people who would have to receive a treatment to prevent an unfavorable outcome in one person
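A minimal sketch of an NNT calculation as the inverse of the absolute risk reduction; the outcome rates are illustrative:

```python
# NNT = 1 / absolute risk reduction (ARR)
risk_control = 0.20      # 20% of control participants have the unfavorable outcome
risk_treatment = 0.15    # 15% of treated participants do
arr = risk_control - risk_treatment   # 0.05
nnt = 1 / arr
print(nnt)  # 20.0: treat about 20 people to prevent one unfavorable outcome
```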