Null Hypothesis
States that there is no difference or association between variables that is any greater or less than that expected by chance (H₀). e.g. there is no relationship between employment status and having a flu shot at your clinic
Null Hypothesis points
You can never “accept” or “prove” the null hypothesis, which is the absence of something.
You can only "disprove" (reject) it, or "fail to disprove" (fail to reject) it
If you reject the null hypothesis it is because you have demonstrated support for the alternative hypothesis
Alternative Hypothesis
is usually the relationship, association, or difference that the researcher believes to be present (H₁) e.g. that employed people are less likely to get a flu shot at your clinic
What the researcher actually believes to be true (the independent variable influences or is associated with the dependent variable)
Hypothesis Testing
a fancy term for determining whether you are right; it involves using a statistical test to determine whether your hypothesis is supported e.g. when the clinic closes that first night, you decide to collect the information that was gathered on all the patients who arrived at the clinic from 8 AM until 9 PM that week. The steps (a short code sketch follows the list below):
State the null and alternative hypothesis
Determine the significance level to be used (alpha)
Determine what statistical test to use
Calculate a p-value
Apply your decision rule
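The five steps can be walked through in a few lines of code. The sketch below is a minimal illustration, assuming the scipy package and completely made-up patient counts for the employment-status/flu-shot example; it is not the clinic's actual data.

```python
# Minimal walk-through of the five hypothesis-testing steps (hypothetical counts).
from scipy.stats import chi2_contingency

# Step 1: H0 = no association between employment status and getting a flu shot;
#         H1 = employed patients are less likely to get a flu shot.
# Step 2: choose the significance level (alpha).
alpha = 0.05

# Step 3: both variables are nominal, so a chi-square test is an appropriate choice.
# Rows: employed, unemployed; columns: flu shot, no flu shot (made-up counts).
observed = [[30, 70],
            [45, 55]]

# Step 4: calculate the p-value.
chi2, p_value, df, expected = chi2_contingency(observed)

# Step 5: apply the decision rule.
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```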
Statistical Significance
the difference you observe between two samples is large enough to conclude that it is not simply due to chance
e.g. if you take two or more representative samples from the same population, and there really is a difference between the variables in this population, you would expect to find approximately the same difference again and again
Alpha
significance level which is usually 0.05
Type I Error
the probability assigned to incorrectly rejecting the null hypothesis when it is actually true, i.e., making a false-positive conclusion
Represented by alpha and is usually 0.05, 0.01, or 0.001 … if the p-value falls below this level, you then reject the null hypothesis
Also called the level of significance
E.g. if alpha = 0.05, there is a 5% chance that reported significant results are actually not significant and, therefore, a 5% chance of a Type I error.
Effect Size
Difference between group means that exists within the population
As one increases, the other decreases (effect size and required sample size are inversely related)
To determine → divide the difference between the mean in the experimental group and the mean in the control group by the standard deviation of the control group (a short sketch of this calculation follows below)
Weak: <0.3
Moderate: 0.3–0.5
Strong: >0.5
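A minimal sketch of that calculation, using Python's standard statistics module and made-up outcome scores (the group values are purely illustrative):

```python
# Effect size as described above: (mean of experimental group - mean of control group)
# divided by the standard deviation of the control group. Data are hypothetical.
import statistics

experimental = [72, 75, 70, 78, 73, 73]   # made-up outcome scores
control      = [70, 74, 68, 77, 72, 71]

effect_size = (statistics.mean(experimental) - statistics.mean(control)) / statistics.stdev(control)

# Interpret against the thresholds above: <0.3 weak, 0.3-0.5 moderate, >0.5 strong.
print(f"effect size = {effect_size:.2f}")
```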
Sample Size
An adequate sample size is largely determined by the size of the difference between group means within the population (effect size) you are attempting to find and the power needed to accurately find it.
Power
the ability to find a difference when one actually exists (sample size relates directly to power)
When sample size increases, so does the power → the larger the sample size, the greater the power of the study and the more likely you are to correctly reject a false null hypothesis
Usually 0.80 (80%)
Depends on:
The level of alpha (ie. 0.05)
The effect size
The sample size
Type II Error
the error made when you incorrectly fail to reject the null hypothesis, thus missing an association that is really there. Saying results are not significant when they actually are, i.e., incorrectly failing to reject ("accepting") the null hypothesis
Happens because sample wasn’t large enough and the study didn’t have enough power to find a difference that really existed
Also called power errors
If the power is 0.80 (80%), the chance of a Type II error is 0.20 or 20% (100% − 80%)
If you fail to reject the null and are incorrect, you are making a Type II Error, meaning the researcher misses a relationship that does exist
Beta is another name for the chance of committing a type II error
When a sample size is too small, you have a greater chance of a type II error
Power Analysis
How sample sizes are calculated
Depends on:
The level of alpha (ie. 0.05)
The effect size
The power (ie. 0.80)
The required sample size is then solved for from these three inputs (a short sketch follows below)
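One way to see these pieces fit together is a small sketch with the statsmodels package (an assumption, not something named in these notes), solving for the per-group sample size of a two-group t-test given alpha, effect size, and power:

```python
# Power analysis sketch: given alpha, effect size, and desired power, solve for
# the number of participants needed per group in a two-group t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed effect size
                                   alpha=0.05,        # level of significance
                                   power=0.80)        # 80% power
print(f"about {n_per_group:.0f} participants needed per group")
```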
Chi-square (X²)
appropriate when working with independent samples and an outcome or dependent variable that is nominal- or ordinal-level data. Compares the frequencies that are observed with the frequencies that are expected if the two variables were independent or had no association.
Doesn't tell you the direction of the relationship or difference
All cells within the 2 x 2 table must have an expected value greater than or equal to 5
If the chi-square test result has a p-value that is significant (less than 0.05 or whatever alpha you use), then you reject the null hypothesis
If the chi-square test result is not statistically significant (greater than 0.05 or the alpha of choice), then you fail to reject the null hypothesis
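A short sketch of a 2 x 2 chi-square test (assuming scipy and made-up counts), including a check of the expected-count assumption noted above:

```python
# Chi-square test on a hypothetical 2 x 2 table, checking that every expected
# cell count is at least 5 before trusting the result.
from scipy.stats import chi2_contingency

observed = [[40, 60],   # group A: outcome yes / no (made-up counts)
            [25, 75]]   # group B: outcome yes / no

chi2, p_value, df, expected = chi2_contingency(observed)

if (expected < 5).any():
    print("warning: an expected cell count is below 5, so chi-square may not be appropriate")

alpha = 0.05
print("reject H0" if p_value < alpha else "fail to reject H0")
```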
Degrees of Freedom
refers to the number of values that are "free to be unknown" once the row and column totals are set in a 2 x 2 contingency table. All 2 x 2 tables have one degree of freedom
df = (rows − 1) x (columns − 1) = (2 − 1) x (2 − 1) = 1 x 1 = 1
Degrees of freedom for the independent t-test = total sample size − 2; for the paired t-test, df = n − 1, where n is the number of paired participants
Student T-Test
appropriate only when you are looking for a difference in the mean value of an outcome variable measured at the interval or ratio level
Before you apply, determine these 3 things:
What is the level of measurement for the outcome variable?
Are there two samples?
Are the samples independent?
Independent Group T-Test
Used to determine significant differences between the means obtained from two independent groups (if applied to more than two groups, the alpha, or risk of a Type I error, goes up) e.g. average number of flowers grown in poor soil vs. good soil
Outcome variable must be interval or ratio level
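A minimal sketch of an independent-group t-test (assuming scipy), using the poor-soil vs. good-soil example with made-up flower counts:

```python
# Independent-group t-test: compare the mean number of flowers grown in two
# independent plots of soil (hypothetical data).
from scipy.stats import ttest_ind

poor_soil = [4, 6, 5, 3, 7, 5, 4]
good_soil = [8, 9, 7, 10, 8, 9, 11]

t_stat, p_value = ttest_ind(good_soil, poor_soil)   # equal variances assumed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```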
Dependent Group t-test
Outcome must be interval/ratio
Groups are dependent or related (matched by color of toothbrush so both groups have equal percentages of each color of toothbrush, which reduces the effect of toothbrush color on toothpaste preference in the study, or pre/post tests on the same group of people)
Produces a paired t value and a corresponding p result (it just uses different statistical tables)
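A corresponding sketch of the dependent (paired) t-test (again assuming scipy), e.g. pre-test and post-test scores for the same group of people; the scores are made up:

```python
# Dependent (paired) t-test: each post score is paired with the pre score from
# the same participant (hypothetical data).
from scipy.stats import ttest_rel

pre  = [60, 72, 65, 80, 58, 70]
post = [68, 75, 70, 86, 63, 74]

t_stat, p_value = ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```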
Homogeneity of Variance (Equal Variance)
One of the assumptions of the two-sample t-test is that the two groups have the same variance. Slight departures from this assumption are okay, but if they are too extreme, a different formula should be used. Test this using Levene's test for equality of variances, which tests the null hypothesis that the variances in the two groups are not different. If Levene's test has a significant p-value, you reject the null hypothesis that the variances are the same and use the t-test results calculated without assuming equal variances.
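A sketch of that workflow (assuming scipy, with made-up data): run Levene's test first, then pick the t-test formula based on the result.

```python
# Check homogeneity of variance with Levene's test, then choose whether to
# assume equal variances in the independent-group t-test.
from scipy.stats import levene, ttest_ind

group_a = [12, 15, 14, 10, 13, 16, 11]
group_b = [22, 9, 30, 14, 5, 27, 18]    # hypothetical data with a wider spread

lev_stat, lev_p = levene(group_a, group_b)

# A significant Levene's p-value means the variances differ, so do not assume equal variances.
equal_var = lev_p >= 0.05
t_stat, p_value = ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"equal variances assumed: {equal_var}, t = {t_stat:.2f}, p = {p_value:.4f}")
```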
Noninferiority Trials
trials whose point is to show that a new treatment is no worse than an old one, e.g. a new noninvasive procedure is found that might be a replacement for an older invasive procedure.
Sampling Methods
consists of the process of selecting the subjects for your sample from the population under study … the one you choose depends significantly on your population of interest and the options available to you at the time
Probability Sampling
consists of techniques in which the probability of selecting each subject is known // requires the researcher to identify every member of the population (not feasible with large populations) involving randomization of some sort (key idea!)
Simple Random Sampling
every subject in a population has the same chance of being selected // it doesn't work without a limited and very well-defined population of interest e.g. (drawing out of a hat)
Systematic Sampling
involves randomly selecting your subjects according to a standardized rule (a short sketch follows the steps below):
Number the whole population
Pick a random starting point
Select every nth person
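A tiny sketch of those three steps in code (the population list and sampling interval are made up for illustration):

```python
# Systematic sampling: number the population, pick a random start, take every nth person.
import random

population = [f"subject_{i}" for i in range(1, 501)]   # numbered population of 500
n = 10                                                 # sampling interval (every 10th person)

start = random.randint(0, n - 1)                       # random starting point
sample = population[start::n]

print(f"start at index {start}, sample size = {len(sample)}")
```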
Stratified Sampling
divides the population into subsamples according to a characteristic of interest and then randomly selects the sample from these subgroups … the purpose is to ensure the sample is representative with respect to that characteristic
Cluster sampling
randomly selects a group or unit rather than an individual … used when it is difficult to find a list of the entire population e.g. (school, district, neighborhood)
Sampling Distribution
consists of all the possible values of a statistic from all the possible samples of a given population
Nonprobability sampling
consists of methods in which subjects do not have the same chance of being selected for participation; it is not randomized // can be used in many different ways in both quantitative and qualitative research
Convenience Sampling
collecting data from the available group
Quota Sampling
you select the proportions of the sample for different subgroups, as in stratified sampling // popular for quantitative research
Network Sample
utilizes social networks, whose members frequently share characteristics. Can be useful for obtaining samples of groups that are hard to reach
Purposive Sampling
The researcher selects subjects to include because they are information-rich cases.
Inclusion Criteria
the list of characteristics a subject must have to be eligible to participate in your study … these identify the target population and limit the generalizability of your study results to this population.
Exclusion Criteria
the criteria or characteristics that eliminate a subject from being eligible to participate in your study… include the current or past presence of the outcome of interest
Sampling Error
some differences between the sample and the population that occur by chance
Sampling Bias
is a systematic error made in the sample selection that results in a nonrandom sample
Quota Versus Stratified Sampling
Stratified sampling has randomization within the subsamples/strata, and quota sampling does not
Central Limit Theorem
take repeated random samples from a population, compute the average of each sample, and plot those averages → the plotted points stack up into an approximately normal (bell-shaped) distribution, regardless of the shape of the original population
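A small simulation sketch of this idea, using only the Python standard library: draw repeated samples from a skewed population, average each one, and the averages cluster into a roughly bell-shaped pile around the true mean.

```python
# Central limit theorem demo: means of many random samples from a skewed
# (exponential) population are approximately normally distributed.
import random
import statistics

random.seed(1)
sample_means = []
for _ in range(5000):
    sample = [random.expovariate(1.0) for _ in range(30)]   # one random sample of 30 values
    sample_means.append(statistics.mean(sample))

# The sample means stack around the population mean (1.0 for this distribution),
# with much less spread than the individual values.
print(f"mean of sample means = {statistics.mean(sample_means):.2f}")
print(f"spread (SD) of sample means = {statistics.stdev(sample_means):.2f}")
```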