variable
is any measurable characteristic that can take on different values.
study variable
is the variable of interest in a specific research study.
Scale of measurement
(nominal, ordinal, interval, ratio)
Type
(qualitative vs. quantitative)
Behavior
(continuous vs. discrete)
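Quick illustrations of the three classifications: nominal — blood type; ordinal — pain severity (mild/moderate/severe); interval — temperature in °C; ratio — body weight. Nominal and ordinal scales are qualitative; interval and ratio scales are quantitative and may be discrete or continuous.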
interview
questionnaire
observation
review of records
methods of collecting data on study variables
In-person interview
Telephone or online interview
types of interview
interview
advantages
Clarification of unclear responses
Better response rate
Observing nonverbal cues
limitations
Time-consuming
Potential interviewer influence (response bias)
questionnaire
Participants answer written questions without an interviewer
questionnaire
advantages
Efficient for large samples
Anonymity can improve accuracy
More standardized responses
limitations
Misinterpretation of questions
Lower response rate than interviews
No opportunity to probe or clarify
OBSERVATION
Recording behaviors or characteristics directly.
Direct observation
Observation using equipment
types of observation
observation
advantages
Real-time, objective data
Useful for behaviors or clinical measurements
limitations
Expensive
Observer effect (Hawthorne phenomenon)
review of records
Using existing documents (clinical charts, registries).
review of records
advantages
Quick and inexpensive
Historical data available
limitations
Quality depends on source records
Incomplete or inaccurate documentation
measurement error
is any deviation between the measured value and the true value.
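A standard way to express this: observed value = true value + error, where the error term combines systematic error (bias) and random error.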
study participant
observer
instrument
data processing
sources of measurement error
Study Participant Errors
Subjective responses influenced by memory gaps, embarrassment, or social desirability
Biologic variability (e.g., fluctuating blood pressure)
Observer Errors
Prior knowledge or expectations influencing measurement
Non-neutral behavior
Failure to follow standard protocols
Incorrect transcription or data encoding
Different clinicians applying criteria inconsistently (diagnostic variability)
Instrument Errors
Poorly designed questionnaires
Unclear instructions
Calibration issues in equipment
Mechanical failure
Problems combining subscale items into a score (index problems)
Data Processing Errors
Mistakes during data entry or coding
Misuse of statistical software
Inaccurate transcription of results
validity and reliability
To ensure data quality, researchers assess ___ and ___
VALIDITY (ACCURACY)
- "Are we measuring what we are supposed to measure?"
Face validity
looks valid on surface
Content validity
all relevant domains included
Construct validity
aligns with theoretical concepts
Criterion validity
comparison with a gold standard. It is the basis for sensitivity and specificity
RELIABILITY (CONSISTENCY)
"Are measurements repeatable?"
SENSITIVITY
Proportion of people with the disease who test positive.
A highly sensitive test → good for screening → few missed cases.
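In 2×2 notation (TP = true positives, FN = false negatives): Sensitivity = TP / (TP + FN).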
SPECIFICITY
Proportion of people without the disease who test negative.
A highly specific test → good for confirmation → few false alarms.
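In the same notation (TN = true negatives, FP = false positives): Specificity = TN / (TN + FP).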
False Positive (FP)
Non-diseased labeled as diseased
False Negative (FN)
Diseased labeled as disease-free
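Together with true positives (TP) and true negatives (TN), these four outcomes form the standard 2×2 table of test result versus disease status:
Test + / Disease present → TP; Test + / Disease absent → FP
Test − / Disease present → FN; Test − / Disease absent → TN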
Positive Predictive Value (PPV)
Probability that a positive test truly indicates disease
Negative Predictive Value (NPV)
Probability that a negative test truly indicates absence
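In 2×2 notation: PPV = TP / (TP + FP); NPV = TN / (TN + FN). Unlike sensitivity and specificity, both predictive values change with disease prevalence in the tested population.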
cut-off values
disease prevalence
factors affecting test performance and interpretation
Lower cut-off
higher sensitivity, lower specificity
Higher cut-off
higher specificity, lower sensitivity
Higher prevalence
→ a positive result is more likely to be a true positive → higher PPV, lower NPV
Lower prevalence
→ a negative result is more likely to be truly negative → higher NPV, lower PPV; sensitivity and specificity themselves are properties of the test and are essentially unchanged by prevalence (see the worked example below)
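A worked example with hypothetical numbers: a test with 90% sensitivity and 90% specificity is applied to 1,000 people. At 10% prevalence (100 diseased): TP = 90, FN = 10, TN = 810, FP = 90, so PPV = 90/180 = 50% and NPV = 810/820 ≈ 99%. At 1% prevalence (10 diseased): TP = 9, FN = 1, TN = 891, FP = 99, so PPV = 9/108 ≈ 8% and NPV = 891/892 ≈ 99.9%. Sensitivity and specificity are unchanged, but the meaning of a positive result depends heavily on prevalence.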