What does the PECOT method stand for when writing a research question?
population
intervention/exposure
comparison
outcome
time
what is measurement?
A collection of data that describes a property of a variable
data types determine how we
analyze the data collected
nominal
named variable
ordinal
named and ordered variable
interval
named, ordered, equal distance between interval values, but zero does not represent the absence of a concept
Ratio
named, ordered, equal distance between interval values, zero is meaningful and represents the absence of a concept (thus you can compare levels using ratios)
give examples of nominal data
gender/sex
religion
race
true or false
employment status
give examples of ordinal data
ratings (agreement, approval, pain etc)
job class
socioeconomic status
Nominal and ordinal are grouped together as what kind of variables
categorical variables
give examples of interval variables
temperature in Fahrenheit or Celsius
IQ
SAT scores
time of day
give examples of ratio
age in years
number of cigarettes smoked per week
income in dollars
minutes spent doing physical activity
interval and ratio are grouped together as what kind of variables
continuous variables
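One way to remember why data types determine the analysis: each level of measurement supports the statistics of the levels below it, plus one more. A minimal Python sketch (the table and function names are my own, for illustration only):

```python
# Which summary statistics are meaningful at each level of measurement.
# Each level inherits the operations of the levels below it:
# nominal -> mode only; ordinal adds median; interval adds mean;
# ratio adds meaningful ratios (because zero is a true zero).
ALLOWED_STATS = {
    "nominal":  {"mode"},
    "ordinal":  {"mode", "median"},
    "interval": {"mode", "median", "mean"},
    "ratio":    {"mode", "median", "mean", "ratio"},
}

def can_compute(level: str, stat: str) -> bool:
    """Return True if `stat` is meaningful for data at `level`."""
    return stat in ALLOWED_STATS[level]

# A mean religion is meaningless, but a mean Celsius temperature is fine:
print(can_compute("nominal", "mean"))    # False
print(can_compute("interval", "mean"))   # True
# Ratios need a true zero, so 20°C is NOT "twice as hot" as 10°C:
print(can_compute("interval", "ratio"))  # False
```

This is why, for example, you report a median for pain ratings (ordinal) but can report a mean for age in years (ratio).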
when collecting quantitative data you should ensure that
your method matches your research question
you consider merits of self-report vs objective data collection
consider resources available
consider analysis that will be conducted
what is the research question?
where do we get qualitative data in research
observations (observe and record people, places or situations)
objective measurements
reduce bias from subjectivity
self-reports
an individual’s own account of factors such as attitudes, symptoms, beliefs, or behaviors
Questionnaires
well-written questions are key
Examples of Routine Data Collection
Mortality and morbidity reports based on death certificates, hospital records, physician information, statutory notifications
laboratory diagnosis records (pathology)
outbreak reports, especially for infectious diseases
vaccine uptake and side-effect reports
employers’ sickness absence records
Critiques of Routine Data Collection
Despite uniform methods and recording, they are never 100% complete
Have to make the best of what is available
Where possible try to motivate and train healthcare infrastructure
Perform quality assessments to spot flaws
Common Data Collection Instruments
interviews
telephone surveys
internet surveys
medical bus
health apps
other methods
Describe how interviews work for measuring data
Often used for a small group of subjects for a broad range of topics
structured - like a survey but administered by a person; fixed responses (multiple choice)
unstructured - questions differ for each subject and can be conditional upon responses to previous questions; open responses
Advantages of using interviews as methods of measuring data
you have control of the environment
can probe for more information where necessary
qualitative data
what are the cons of using interviews for measuring data
they are expensive, require infrastructure, are burdensome, and limit sample size
explain how telephone surveys work when using them to measure data
trained interviewers contact respondents and gather information
random digit dialing (RDD) or address-based sampling; landlines are listed
advantages with using telephone surveys when measuring data
easy to access people (95% of Americans have a phone)
good quality control - trained interviewer, recordings
anonymity - accurate responses
quick data processing - computer assisted telephone interviewing (CATI)
what are some disadvantages of using telephone surveys when measuring data
time consuming
hard to reach people - unknown number?
advantages to using internet surveys when using them to measure data
quick, free platforms
anonymous responses
no interviewer burden (better responses for sensitive topics?)
disadvantages to using internet surveys when using them to measure data
who are your respondents (selection bias!)
how do you know?
questionnaire is the tool for
collecting data
sometimes called an instrument
Survey is both the
questionnaire and the process-collecting, aggregating and analyzing the responses
What are the types of methods of survey implementation
in-person interviews (physical or virtual)
telephone surveys
mailed surveys
low response rates
online surveys, instant polls
cheap and easy to use, but overdone and we cannot always know who’s answering
Survey Research: Strengths
describes the characteristics of a large population
makes a large sample feasible
makes findings more generalizable
enables analysis of multiple variables
flexible analysis
use of validated questionnaires means
uniform measurement
strong reliability
Survey Research: Weaknesses
Forcing responses into pre-determined options
superficiality
lack of context
inflexibility in design
artificiality
Risk of using poorly designed questionnaires
weak validity
General Guidelines for Survey Interviewing
dress appropriately
familiarize yourself with questionnaire
training (follow your training manual exactly and know how to handle the unexpected)
record things exactly, probe info in a neutral way
prioritize safety for both
constructing questions
specifically related to your research question
brief, clear, and concise
neutral, nonbiased
inquiries for a single concept
aligned with respondents’ literacy and culture
Describe the format of open-ended questions when you want to include them in your questionnaire
worded in ways where the respondent answers in text format
helpful when researching new topics or ideas
must be categorized or coded in order to analyze the information
Closed-ended questions
Response options are provided
must be exhaustive
should have a place for respondents to refuse or say they do not know
must be mutually exclusive
Easy to convert to numeric codes
Using a fill in the blank “other” response is common
for example: Have you used illegal drugs in the past month? Yes/No/IDK
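The "easy to convert to numeric codes" point can be sketched in Python; the code values here are hypothetical, not from any real codebook:

```python
# Hypothetical numeric codes for a closed-ended item. Options must be
# exhaustive (every answer maps to something) and mutually exclusive
# (each answer maps to exactly one code), with a place for "don't know"
# and "refused".
RESPONSE_CODES = {"Yes": 1, "No": 0, "Don't know": 8, "Refused": 9}

def code_response(answer: str) -> int:
    """Convert a closed-ended response to its numeric code.

    To keep the scheme exhaustive, any unrecognized answer is coded
    as 'Refused' rather than silently dropped.
    """
    return RESPONSE_CODES.get(answer, RESPONSE_CODES["Refused"])

print(code_response("Yes"))         # 1
print(code_response("Don't know"))  # 8
print(code_response("maybe?"))      # 9 (falls back to Refused)
```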
How to get better responses for questionnaires
offer incentives or benefits
reduce perceived costs of responding
make it convenient and short
minimize sensitive information
Establish trust
ensure confidentiality
communicate outcome to group
be upfront about collecting personal info or asking about hot topics
ask relevant questions
keep it short and sweet
guidelines for questionnaires
avoid negative items
easily misinterpreted
avoid biased items and terms
culturally sensitive
formatting guidelines
spread out and uncluttered
format for respondents
(use boxes or circles)
provide a code number beside each response
cluster like-questions together; logical order
contingency questions (skip logic)
respondents will only answer questions that are relevant to them
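Skip logic can be sketched in a few lines of Python; the question IDs and wording below are hypothetical:

```python
# Minimal contingency-question (skip logic) sketch: a "No" to Q1
# hides its follow-up, so respondents only see relevant questions.
QUESTIONS = {
    "Q1": {"text": "Have you smoked in the past month?", "skip_if_no": ["Q2"]},
    "Q2": {"text": "How many cigarettes per week?"},
    "Q3": {"text": "How old are you?"},
}

def questions_to_ask(answers: dict) -> list:
    """Return IDs of the questions this respondent should answer."""
    skipped = set()
    for qid, q in QUESTIONS.items():
        if answers.get(qid) == "No":
            skipped.update(q.get("skip_if_no", []))
    return [qid for qid in QUESTIONS if qid not in skipped]

print(questions_to_ask({"Q1": "No"}))   # ['Q1', 'Q3']
print(questions_to_ask({"Q1": "Yes"}))  # ['Q1', 'Q2', 'Q3']
```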
where to start when creating questionnaires
find existing questionnaires
published reliability and validity data
validity may be specific to population
reference resource used
alter language to be applicable to population of interest
if no existing questionnaire exists, then you can write questions
conduct pre-testing of question language, response options, and formatting before administering
population =
all people in a defined setting
sample
= subset of the population
inference
= generalizing the characteristics of the sample to the population
Bias
a process at any stage of inference tending to produce results that depart systematically from the true values
what are the main types of bias
selection bias
measurement/information bias
confounding bias
we want to quantify the presence, magnitude, and direction of bias
Biases in Epidemiologic Research
a systematic error in the design, conduct or analysis of a study that results in a mistaken estimate of an exposure’s association with disease
produces a biased estimate
bias cannot be
eliminated
what is the good news about bias in epidemiology
it can be controlled for and minimized when designing a study
selection bias refers to any systematic error
that arises from the procedures used to select study subjects and from factors that influence study participation
give examples of selection bias
using volunteers (healthy worker effect)
using hospitalized cases being treated by the same physician vs. many physicians
excluding study subjects because of cost, distance, or other factors
what is the problem with the relationship of the data with selection bias
the problem here is that the relationship between exposure and disease is different in the sample compared to the population
In case-control studies selection bias can occur due to:
control selection bias: selecting a comparison group (“controls”) that is not representative of the population that produced the cases in a case-control study
In cohort studies selection bias can occur due to:
loss to follow-up can cause an overestimate or an underestimate of effect in retrospective and prospective cohort studies and in clinical trials
information biases
inaccuracy in measurement or classification of exposure, outcome, or covariates, resulting in measurement error/misclassification
Hawthorne effect
example of information bias - behavior changes because they are aware of being observed
Explain Recall Bias (a type of information bias)
Cases may be more likely to recall an exposure than controls
ex: people diagnosed with a serious illness are likely to spend a lot of time thinking about what might have caused it
Systematic error
When the information we collect consistently reflects a false picture
Bias in research: any systematic error in the design, conduct, or analysis of a study that results in a mistaken conclusion
Biased estimates: measurements of a value or estimate of association that are systematically incorrect
Random error
no consistent pattern of effects
ex: occasional transcription errors in recording data
if we can’t predict random error, how can we minimize it?
increase sample size - larger sample will yield more precise estimates
improve sampling procedures to reduce variability
use strict measurement protocols and reliable instruments
use appropriate statistics and analytic methods; set appropriate p-values
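The "larger sample yields more precise estimates" point can be demonstrated with a quick simulation (the population mean of 50 and SD of 10 are made-up toy values):

```python
import random
import statistics

random.seed(0)  # reproducible toy simulation

def spread_of_sample_means(n, trials=500):
    """Std. dev. of sample means across repeated samples of size n.

    This spread is the random error in our estimate of the true mean;
    it shrinks roughly as 1/sqrt(n).
    """
    means = [statistics.fmean(random.gauss(50, 10) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

small = spread_of_sample_means(10)     # n = 10 per sample
large = spread_of_sample_means(1000)   # n = 1000 per sample
print(small > large)  # True: bigger samples -> less random error
```

Random error never disappears, but averaging over more observations makes the estimate cluster more tightly around the true value.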
Sources of measurement error
written self-reports - item wording
interviews - different interviewers
direct behavioral observation
observers might be biased
examining available records
practitioners might exaggerate their records/implicit bias
improper documenting
How to avoid measurement error
use unbiased wording
carefully train interviewers
understand how existing records are kept
triangulation
using several different research methods to collect the same information
Internal validity
means that there is absence of systematic error
MUST rule out random error, bias, and confounding
validity refers to a study free of bias, confounding and random error
we must have internal validity before we can make any claims of generalizability
the study provides an unbiased estimate of what it claims to estimate
Internal validity: sources of systematic error
inaccurate measurements of study variables
poorly designed questions; faulty equipment
differences in recruitment of study participants
cases recruited from hospital population, controls from community
differences in retention of study participants
unexposed participants drop out a higher rate than exposed
comparing groups that differ on unknown but important characteristics
study of car crash deaths compared cases from local college population with controls living in community
External validity
results from the study can be generalized to some other population
threats to external validity
small sample size
improper selection of study sample (convenience sampling; not representative)
Reliability =
the consistency of a measure
measure of stability over time