Chapter 3: Defining and Measuring Variables
Overview of Variables
Importance of operational definitions in research: Operationally defining constructs is critical for the proper measurement and manipulation of variables.
Steps for Conducting Research
Find a Research Idea
Select a topic to explore and review existing literature to discover unanswered questions.
Form a Hypothesis
Develop a hypothesis, which is a tentative answer to the research question.
Determine How You Will Define and Measure Your Variables
Identify specific procedures for defining and measuring all research variables.
Plan to evaluate the validity and reliability of measurement procedures.
Identify Participants or Subjects
Decide the number of participants and their required characteristics.
Plan for ethical treatment of participants.
Select a Research Strategy
Consider both internal and external validity.
Choose among research strategies: experimental, quasi-experimental, nonexperimental, correlational, or descriptive.
Select a Research Design
Decide among designs such as between-subjects, within-subjects, factorial, or single-case designs.
Conduct the Study
Collect data according to the established procedures.
Evaluate the Data
Use appropriate statistical methods (both descriptive and inferential) to summarize and interpret results; see the sketch following this list.
Report the Results
Adhere to guidelines for formatting and style while ensuring accurate reporting. Protect anonymity and confidentiality of participants.
Refine or Reformulate Your Research Idea
Utilize findings to modify or expand upon the original research idea, or to generate new hypotheses.
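As a concrete illustration of the Evaluate the Data step, here is a minimal sketch assuming a simple two-group study; the scores, group names, and the choice of an independent-samples t test are hypothetical examples, not part of the chapter.

```python
# Minimal sketch: descriptive and inferential statistics for a
# hypothetical two-group study (all numbers are made up).
import statistics
from scipy import stats

treatment = [12, 15, 14, 10, 13, 16, 14, 15]
control = [9, 11, 10, 8, 12, 9, 10, 11]

# Descriptive statistics: summarize each group.
for name, scores in [("treatment", treatment), ("control", control)]:
    print(f"{name}: M = {statistics.mean(scores):.2f}, "
          f"SD = {statistics.stdev(scores):.2f}")

# Inferential statistics: independent-samples t test.
t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```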
Operational Definitions of Constructs
An operational definition specifies exactly how a variable is measured or manipulated; one is needed whenever a construct must be studied empirically.
Examples of constructs and possible operational definitions:
Gratitude toward partner:
Asking individuals whether they agree with the statement, "I appreciate my partner."
Observing couples and counting how often they thank one another (a code sketch of this definition follows this list).
Wealth:
Surveying participants to report their income.
Coding participants’ vehicles based on value: higher vehicle value indicates increased wealth.
Intelligence:
Administering an IQ test.
Recording brain activity during complex problem-solving tasks.
Recommendation: Consult previous research for ideas on operational definitions.
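As noted above, the observational definition of gratitude can be turned into a counting procedure. A minimal sketch, assuming a hypothetical transcript; the keyword-counting rule is an illustration, not an established coding scheme.

```python
# One possible operational definition of "gratitude toward partner":
# count expressions of thanks in an observed interaction.
# The transcript is a hypothetical stand-in for real observation data.
transcript = [
    "thanks for dinner",
    "can you pass the salt",
    "thank you so much",
    "sure",
    "thanks!",
]
gratitude_score = sum("thank" in utterance.lower() for utterance in transcript)
print(gratitude_score)  # -> 3
```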
Evaluating Measurement Quality
Criteria for Measurement Quality:
Validity of Measurement: Whether a measurement procedure actually measures what it claims to measure.
Reliability of Measurement: Whether scores remain stable and consistent when the measurement is repeated or circumstances change.
Key Point: A measure can be reliable without being valid; however, it cannot be valid unless it's also reliable.
Types of Validity
Face Validity:
Looks like it measures what it is supposed to measure at first glance.
Concurrent Validity:
Scores on the new measure correlate with scores from an established measure of the same variable, taken at about the same time.
Predictive Validity:
Measure accurately predicts an outcome it is intended to predict (e.g., a college entrance exam predicting college GPA).
Construct Validity:
Scores behave the way the underlying construct is theorized to behave; e.g., a valid measure of weight should rise and fall with changes in food intake.
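Concurrent and predictive validity are typically quantified with correlations. A minimal sketch, assuming hypothetical scores; the Pearson correlation stands in for whatever statistic a real study would justify.

```python
from scipy import stats

# Hypothetical scores: a new anxiety questionnaire, an established
# anxiety measure given at the same time, and a later outcome.
new_measure = [22, 30, 25, 35, 28, 40, 33, 27]
established = [20, 32, 24, 36, 27, 41, 31, 29]
later_outcome = [3.1, 2.2, 2.9, 1.8, 2.5, 1.5, 2.0, 2.6]

# Concurrent validity: new measure vs. established measure.
r, p = stats.pearsonr(new_measure, established)
print(f"concurrent: r = {r:.2f}, p = {p:.3f}")

# Predictive validity: measure vs. the outcome it should predict.
r, p = stats.pearsonr(new_measure, later_outcome)
print(f"predictive: r = {r:.2f}, p = {p:.3f}")
```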
Types of Reliability
Measurement inconsistency may arise from various factors such as changes in the observer, environment, or participants.
Key Metrics of Reliability:
Test-Retest Reliability:
The same individuals yield consistent scores when the measurement is repeated at a later time.
Inter-Rater Reliability:
Measurements should be consistent regardless of who conducts them.
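Both types of reliability can likewise be quantified. A minimal sketch with hypothetical scores, using a Pearson correlation for test-retest reliability and simple percent agreement plus a correlation for inter-rater reliability; real studies often use more specialized indices such as Cohen's kappa.

```python
from scipy import stats

# Test-retest: the same participants measured twice.
time1 = [14, 18, 12, 20, 16, 15, 19, 13]
time2 = [15, 17, 12, 21, 15, 16, 18, 14]
r, _ = stats.pearsonr(time1, time2)
print(f"test-retest reliability: r = {r:.2f}")

# Inter-rater: two observers rate the same behaviors on a 1-5 scale.
rater_a = [4, 3, 5, 2, 4, 3, 5, 4]
rater_b = [4, 3, 4, 2, 5, 3, 5, 4]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
r, _ = stats.pearsonr(rater_a, rater_b)
print(f"inter-rater: {agreement:.0%} exact agreement, r = {r:.2f}")
```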
Scales of Measurement
| Scale | Characteristics | Examples |
|---|---|---|
| Nominal | Qualitative categories with no quantitative distinctions | Nationality, ethnicity, ice cream flavor |
| Ordinal | Rank-ordered categories; the direction of differences is known, but not their magnitude | Clothing sizes (S, M, L, XL), Olympic medals |
| Interval | Ordered with equal intervals but no absolute zero point | Temperature (Fahrenheit & Celsius), golf scores |
| Ratio | Ordered with equal intervals and an absolute zero point | Number of correct answers, length, weight |
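The scale of measurement constrains which summary statistics are meaningful. A minimal sketch with hypothetical data, following the common rule of thumb that the mode suits nominal data, the median suits ordinal data, and means suit interval and ratio data.

```python
import statistics

# Nominal: categories only -> the mode is the only sensible "average".
flavors = ["vanilla", "chocolate", "vanilla", "strawberry", "vanilla"]
print("mode:", statistics.mode(flavors))

# Ordinal: rank order -> the median is meaningful; map labels to ranks.
rank = {"S": 1, "M": 2, "L": 3, "XL": 4}
sizes = ["S", "M", "M", "L", "XL"]
print("median rank:", statistics.median(rank[s] for s in sizes))

# Interval: equal intervals, no true zero -> means and differences are
# meaningful, but ratios are not (40 F is not "twice as hot" as 20 F).
temps_f = [68, 72, 75, 71, 70]
print("mean temp:", statistics.mean(temps_f))

# Ratio: true zero -> ratios are meaningful (6 correct is twice 3).
correct = [3, 0, 7, 5, 6]
print("mean correct:", statistics.mean(correct))
```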
Modalities of Measurement
Self-report measures (e.g., questionnaires, interviews)
Physiological measures (e.g., heart rate, brain activity)
Behavioral measures (e.g., direct observation of overt behavior)
Example Questionnaire Scale:
☐ Very often  ☐ Often  ☐ Sometimes  ☐ Rarely
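Scoring such a self-report scale usually means assigning numbers to the response categories. A minimal sketch, assuming a hypothetical 4-point coding scheme and a made-up set of responses.

```python
# Hypothetical coding scheme for the response categories above.
SCORES = {"Very often": 4, "Often": 3, "Sometimes": 2, "Rarely": 1}

# One participant's responses to a five-item questionnaire.
responses = ["Often", "Rarely", "Sometimes", "Very often", "Often"]
total = sum(SCORES[r] for r in responses)
print(f"total score: {total} out of {4 * len(responses)}")  # -> 13 out of 20
```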
Measurement Issues
Ceiling Effect:
Scores cluster at the high end of the range, limiting variability and masking differences among high scorers (see the simulation at the end of this section).
Floor Effect:
All scores cluster at the low end, similarly restricting variability.
Artifacts:
External factors that may influence or distort measurements; two common examples follow.
Experimenter Bias: The experimenter's expectations influence the results. Mitigate through blind (single- or double-blind) designs.
Demand Characteristics:
Cues that may reveal the research purpose to participants.
Reactivity:
Participants may change their behavior when they know they are being observed; thus, reassure them of anonymity and confidentiality and encourage honest responses.
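To make the ceiling effect concrete, here is a small simulation sketch: a hypothetical test capped at 10 points hides a real difference between two high-ability groups because both pile up at the maximum. All numbers are invented for illustration.

```python
import random
import statistics

random.seed(0)
MAX_SCORE = 10  # the test cannot distinguish anyone above this point

def observed_score(true_ability):
    """Observed score on a too-easy test: clipped at the ceiling."""
    raw = random.gauss(true_ability, 1.5)
    return min(max(raw, 0), MAX_SCORE)

group_a = [observed_score(9) for _ in range(100)]    # high ability
group_b = [observed_score(11) for _ in range(100)]   # even higher ability

# Both groups pile up near 10, so the real difference all but
# vanishes and the standard deviations shrink.
for name, scores in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: M = {statistics.mean(scores):.2f}, "
          f"SD = {statistics.stdev(scores):.2f}")
```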