Operationalization of Variables
The process of defining the measurements of a phenomenon (“construct”) that isn’t directly measurable, though its existence is indicated by other phenomena
Basically transforming abstract concepts into measurable terms that researchers observe, measure, and analyze
goes from understanding the abstract concept/construct → creating an operational definition to guide measurement → defining a scale → defining the variable for researchers to analyze
Operationalization transforms an abstract concept into something measurable, allowing for systematic analysis & comparison across different individuals or groups
Operationalization Scale
Scale is used in operationalization
it is a set of items that collectively capture a construct & is used as a variable in analysis
Operationalization Variable
The measured characteristics that differ from individual to individual
Operational process Ex w/ Social Support
Social Support Construct: Abstract concept that can’t be directly measured. Involves emotional, informational or practical help from others.
Operationalization process: to measure social support, use a procedure involving asking respondents about their perceived availability of help:
“How often do you receive emotional support from friends or family?” → never, rarely, sometimes, often, always
Scale process Ex w/ Social Support
Social Support Construct: Abstract concept that can’t be directly measured. Involves emotional, informational or practical help from others.
Social Support Scale:
How often do you receive emotional support from friends or family?
How often do you receive practical help from friends or family?
How satisfied are you with the support you receive?
Variable process Ex w/ Social Support
Social Support Construct: Abstract concept that can’t be directly measured. Involves emotional, informational or practical help from others.
The variable created from the scale is the level of social support, which can differ from individual to individual
using the scale, we can quantify & analyze the level of social support as a variable in research
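The scale-to-variable step described above can be sketched in code. This is a minimal illustration, not a standard instrument: the 1–5 Likert coding, the function name, and the uniform response options across all three items are assumptions made for the example.

```python
# Hypothetical scoring: map Likert labels to 1-5 and sum the scale
# items into a single "level of social support" variable per respondent.
LIKERT = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "always": 5}

def social_support_score(responses):
    """Sum the coded item responses into one scale score."""
    return sum(LIKERT[r] for r in responses)

# One respondent's answers to the three scale items
answers = ["often", "sometimes", "always"]
print(social_support_score(answers))  # 4 + 3 + 5 = 12
```

The resulting score is the variable: a measured characteristic that differs from individual to individual and can be compared across respondents.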
Survey Design Question Types
Closed-ended questions (categorical; ordinal or nominal), open-ended questions (free response)
Key areas for every survey design question
Demographics, key exposures, key diseases/outcomes, related exposures & outcomes (potential confounders)
confounder = something, other than the thing being studied, that is causing the results seen in a study
Closed-Ended survey questions
Allow for a limited number of possible answers, which may include:
date/time
numeric
yes/no
paired comparisons (prefer this, prefer that)
categorical: ordinal (ranked) or nominal (unordered / no rank)
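A quick sketch of how these closed-ended answer types might be coded for analysis (the variable names and category sets here are illustrative assumptions, not from any survey standard): ordinal categories carry a meaningful order, nominal ones do not.

```python
# Illustrative coding of closed-ended survey answers (names are assumptions).
yes_no = {"no": 0, "yes": 1}                                  # yes/no -> numeric
ordinal = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}   # ranked categories
nominal_blood_type = ["A", "B", "AB", "O"]                    # unordered, no rank

# Ordinal codes can be meaningfully compared...
print(ordinal["good"] > ordinal["fair"])   # True
# ...but nominal categories can only be checked for membership/equality.
print("AB" in nominal_blood_type)          # True
```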
Open-Ended survey questions
Considered free response, these questions allow participants to explain their answers at length
ex: what is your biggest personal health concern at present?
Validity
The extent to which a measure captures what it is intended to measure
the match between the conceptual definition & the operational definition
Refers to how well a test or instrument measures what it’s supposed to
about accuracy
Reliability
When a measurement procedure yields consistent scores as long as the phenomenon being measured is not changing
reliable measures are reproducible & consistent from one testing occasion to another
Degree to which scores are free of “measurement error”
refers to the consistency of a measure
reliable test or instrument yields consistent results when administered under the same conditions
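Reliability is often quantified in practice; one common statistic (not named in these notes, so an assumption here) is Cronbach's alpha, which estimates the internal consistency of a multi-item scale. The data below are made up for illustration.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])                              # number of items
    items = list(zip(*rows))                      # columns = items
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Made-up responses: 4 respondents x 3 scale items
scores = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2]]
print(round(cronbach_alpha(scores), 2))  # 0.89 -- fairly consistent items
```

Values near 1 indicate the items move together (consistent measurement); values near 0 indicate they do not.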
Logical Validity
Refers to the coherence & soundness of the reasoning behind a study or argument
ensures that conclusions logically follow from the premise
Consists of face validity & content validity
Face Validity
Confidence gained from careful inspection of a concept to see if it’s appropriate “on its face”
In our collective, intersubjective, informed judgment, have we measured what we want to measure?
This is the extent to which a test or instrument appears to measure what it’s supposed to
it’s often considered the weakest form of validity but is useful for initial assessments
Example of Face Validity (A Leadership Skills Test)
Test designed to evaluate leadership skills for a management position
Includes questions like: How would you handle a team conflict? OR How do you make strategic decisions?
the test has high face validity because it clearly appears to measure leadership skills
questions are directly related to the skills & behaviors expected of a leader
The test is appropriate for assessing leadership abilities
lower face validity would occur if the test included unrelated questions like “what is your favorite hobby?”
Importance of face validity
It influences participants’ engagement & trust in the assessment process
If the test appears relevant & aligned w/ its intended purpose, participants are more likely to take it seriously & provide thoughtful responses
Content Validity
It reflects the extent to which a measure includes all aspects of a given construct
constructs are typically multidimensional
It ensures that a test or instrument covers all relevant aspects of the concept it measures
Example of content validity: Drivers License Test
the test is designed to measure whether an individual is qualified to operate a vehicle safely
The test should cover all the essential aspects of driving like
knowledge of traffic rules, practical driving skills, & driving in various conditions
If the test only includes questions about traffic rules & basic maneuvers in ideal conditions, it lacks content validity
it doesn’t fully capture the range of skills needed for real-world driving
Social Desirability Bias
Occurs when respondents answer questions in a way they believe is socially acceptable
ex: In health surveys, ppl might underreport unhealthy behaviors like smoking or overreport healthy habits like exercise
Validity relationship with reliability
A measure cannot be valid without being reliable, but a measure can be reliable without being valid