Chapter 3: Defining and Measuring Variables

Introduction to Variables

  • Variables: Characteristics or conditions that can change and differ among individuals.

    • Well-defined Variables: Easily observed and measured (e.g., height, weight).

    • Abstract Variables: Intangible and more complex (e.g., motivation, self-esteem); they require more sophisticated measurement techniques.

Types of Variables

  • Independent Variable (IV): Factor manipulated by the experimenter to observe its impact.

    • Levels: The different values or conditions of the independent variable; at least two are required.

  • Dependent Variable (DV): Measured outcome reflecting the effect of the independent variable.

  • Subject Variables: Characteristics of the participants themselves (e.g., age, gender, personality) that cannot be manipulated, only measured.

  • Confounding Variables: Variables that change systematically along with the IV, offering an alternative explanation for any observed effect on the DV.

  • Extraneous Variables: All other variables that could affect the outcome if left uncontrolled.

Measuring Behavior

  • Frequency: Count of how often a behavior occurs.

  • Rate: Frequency of behavior relative to a specified time frame.

  • Duration: The length of time the behavior lasts, per occurrence or in total.

  • Latency: The time elapsed between a stimulus (e.g., an instruction) and the start of the behavior.

  • Topography: The specific form or pattern of behavior.

  • Force: Intensity or strength of the behavior.

  • Locus: The specific location in the environment where the behavior takes place.

Measurement Examples

  • To measure a given behavior (e.g., kicking furniture, writing), select whichever of the dimensions above best captures it: frequency, rate, duration, latency, topography, force, or locus (a minimal computation sketch follows).
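
A minimal Python sketch of these computations, using invented timestamps for a single hypothetical behavior during one observation session (all values, including the signal time, are assumptions for illustration):

```python
# Minimal sketch: deriving behavioral measures from timestamped observations.
# Episode data and session parameters are hypothetical, for illustration only.

# Each episode is (start_seconds, end_seconds) within a 600-second session.
episodes = [(12.0, 15.5), (80.0, 92.0), (300.0, 301.5), (480.0, 490.0)]
session_length = 600.0   # seconds of observation
signal_time = 10.0       # when the instruction/stimulus was given

frequency = len(episodes)                                # count of occurrences
rate = frequency / (session_length / 60.0)               # occurrences per minute
duration = sum(end - start for start, end in episodes)   # total time engaged
latency = episodes[0][0] - signal_time                   # stimulus to first response

print(f"frequency: {frequency} episodes")
print(f"rate:      {rate:.2f} per minute")
print(f"duration:  {duration:.1f} s total")
print(f"latency:   {latency:.1f} s")
```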

Theories and Constructs

  • Theory: Set of principles explaining mechanisms behind behaviors; includes concepts not directly observed.

  • Constructs: Hypothetical entities derived from theories; cannot be observed but help to predict behaviors (e.g., intelligence, hunger).

Operational Definitions

  • Defines and measures constructs through observable behavior:

    • Specify measurement procedures for external behaviors.

    • Examples: intelligence may be operationally defined as IQ scores; hunger may be measured by hours of food deprivation.

Limitations of Operational Definitions

  • Operational definitions do not equate to the constructs themselves.

    • A poorly constructed operational definition may leave out vital aspects of the construct or include irrelevant components.

General Criteria for Evaluating Measurement

  • Two key criteria: Validity and Reliability.

  • Validity asks whether a measure captures what it claims to measure; reliability asks whether it produces consistent scores across testing occasions.

Consistency of a Relationship

  • Relationship analysis involves plotting scores to observe correlations:

    • Positive Relationship: Scores move in the same direction.

    • Negative Relationship: Scores move in opposite directions.

Correlation and Consistency

  • Correlation calculations assess the strength and direction of a relationship (computed in the sketch after this list):

    • Values near +1.00 indicate a strong positive relationship.

    • Values near -1.00 indicate a strong negative relationship.

    • A value near 0 indicates no discernible relationship.
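
A minimal sketch of the underlying arithmetic, using hypothetical paired scores; with real data a statistics package would normally do this, but the Pearson formula is short enough to spell out:

```python
import math

# Hypothetical paired scores from two measurements of the same individuals.
x = [2, 4, 5, 7, 9, 10]
y = [1, 3, 6, 6, 8, 11]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Pearson r = covariance / (sd_x * sd_y)
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
r = cov / (sd_x * sd_y)

# Near +1.00: strong positive; near -1.00: strong negative; near 0: none.
print(f"r = {r:+.2f}")
```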

Validity of Measurement

  • Measurement must accurately reflect what it intends to measure.

    • Face Validity: The measure superficially appears to measure what it claims; the weakest form of evidence.

    • Concurrent Validity: Scores correlate with an established measure of the same construct.

    • Predictive Validity: Scores predict future behavior in line with theory.

    • Construct Validity: Scores behave as the underlying theory predicts.

    • Convergent Validity: Different methods of measuring the same construct agree (see the sketch after this list).

    • Divergent Validity: Measures of distinct constructs show little or no correlation.
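
To illustrate the convergent/divergent pattern, the sketch below correlates three simulated measures: two hypothetical self-esteem scales that share a latent factor (convergent, high r) and an unrelated shyness scale (divergent, r near 0). All data are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two measures of the same construct share a latent factor;
# the third measure taps an unrelated construct.
self_esteem = rng.normal(size=n)
scale_a = self_esteem + rng.normal(scale=0.5, size=n)  # self-esteem scale A
scale_b = self_esteem + rng.normal(scale=0.5, size=n)  # self-esteem scale B
shyness = rng.normal(size=n)                           # unrelated construct

print(f"convergent (A vs B):      r = {np.corrcoef(scale_a, scale_b)[0, 1]:+.2f}")
print(f"divergent (A vs shyness): r = {np.corrcoef(scale_a, shyness)[0, 1]:+.2f}")
```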

Reliability of Measurement

  • Consistency of measurements over time:

    • Measured score = True score + Error (simulated in the sketch at the end of this section).

    • Sources of Error:

      • Observer Error: Human error in measurement.

      • Environmental Changes: Variability in conditions.

      • Participant Changes: Variations in participant states.
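
The score model above can be simulated directly; in the sketch below (all parameters assumed), larger measurement error visibly lowers the correlation between two testing occasions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
true = rng.normal(loc=100, scale=15, size=n)  # stable true scores

for error_sd in (2.0, 15.0):
    # Measured score = true score + independent error on each occasion.
    test = true + rng.normal(scale=error_sd, size=n)
    retest = true + rng.normal(scale=error_sd, size=n)
    r = np.corrcoef(test, retest)[0, 1]
    print(f"error sd = {error_sd:4.1f} -> test-retest r = {r:.2f}")
```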

Types and Measures of Reliability

  • Test-Retest Reliability: Consistency across successive measurements.

  • Inter-Rater Reliability: Agreement between different observers.

  • Internal Consistency: Consistency of scores across different parts of the same test (estimated in the sketch after this list).
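
Two of these coefficients are simple to compute by hand. The sketch below uses hypothetical ratings and item scores: percent agreement for inter-rater reliability, and a split-half correlation corrected with the Spearman-Brown formula (r_full = 2r / (1 + r)) for internal consistency:

```python
import math

# Inter-rater reliability: percent agreement between two observers' codes.
rater1 = ["hit", "miss", "hit", "hit", "miss", "hit"]
rater2 = ["hit", "miss", "hit", "miss", "miss", "hit"]
agreement = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
print(f"percent agreement = {agreement:.0%}")

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Internal consistency: correlate odd-item and even-item half-test totals,
# then apply the Spearman-Brown correction for the full-length test.
# Hypothetical item scores: rows are persons, columns are six test items.
items = [[3, 4, 3, 4, 2, 3],
         [1, 2, 1, 1, 2, 1],
         [4, 5, 4, 4, 5, 4],
         [2, 2, 3, 2, 2, 3],
         [5, 4, 5, 5, 4, 5]]
odd = [sum(row[0::2]) for row in items]
even = [sum(row[1::2]) for row in items]
r_half = pearson(odd, even)
r_full = 2 * r_half / (1 + r_half)   # Spearman-Brown corrected estimate
print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")
```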

Relationship of Reliability and Validity

  • Reliability is necessary but not sufficient for validity: an inconsistent measure cannot be valid, yet a measure can be perfectly reliable and still measure the wrong thing.

Measurement Procedures

  • Measurement involves classifying individuals into categories:

    • Components include a set of categories and a procedure for assigning individuals to them.

Types of Measurement Scales

  • Nominal Scale: Unordered qualitative categories (e.g., eye color, college major).

  • Ordinal Scale: Ranks data and establishes order, but the intervals between ranks need not be equal.

  • Interval Scale: Equal intervals without a true zero (e.g., temperature in Celsius).

  • Ratio Scale: Equal intervals plus a true zero, so ratios of scores are meaningful and the full range of statistical methods applies (contrasted with the interval scale in the sketch below).
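
One practical consequence, shown in the toy comparison below (temperatures chosen arbitrarily): ratio statements such as "twice as hot" are meaningful in kelvin, which has a true zero, but not in Celsius:

```python
# Ratios require a true zero point: kelvin has one, Celsius does not.
c1, c2 = 10.0, 20.0                   # degrees Celsius (interval scale)
k1, k2 = c1 + 273.15, c2 + 273.15    # kelvin (ratio scale)

print(f"Celsius ratio: {c2 / c1:.2f}")  # 2.00, but physically meaningless
print(f"Kelvin ratio:  {k2 / k1:.2f}")  # ~1.04, the meaningful ratio
```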

Sensitivity and Range Effects

  • Range Effect: The measure cannot detect differences because scores pile up at one end of the scale.

    • Ceiling Effect: Scores cluster near the maximum, leaving no room to measure increases.

    • Floor Effect: Scores cluster near the minimum, leaving no room to measure decreases (both are simulated in the sketch after this list).
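
Both effects can be simulated by clipping scores at the scale's limits. In the sketch below (all values assumed), lowering the test's maximum score masks part of a real group difference:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Two groups whose true abilities differ by half a standard deviation.
control = rng.normal(loc=70, scale=10, size=n)
treatment = rng.normal(loc=75, scale=10, size=n)

for top in (100, 80):  # maximum attainable score on the test
    obs_c = np.clip(control, 0, top)    # scores cannot exceed the ceiling
    obs_t = np.clip(treatment, 0, top)
    diff = obs_t.mean() - obs_c.mean()
    print(f"max score {top}: observed group difference = {diff:.2f}")
```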

Artifacts in Measurement

  • Artifact: An external factor that can influence or distort the measurements, threatening their validity.

    • Experimenter Bias: The experimenter's expectations about the outcome subtly influence how data are collected or recorded.

Limiting Experimenter Bias

  • Standardizing or automating procedures reduces the experimenter's influence and protects validity.

    • Single-Blind Studies: The researcher who collects the data does not know the expected results.

    • Double-Blind Studies: Neither the researcher collecting the data nor the participants know the expected outcomes.

Demand Characteristics and Participant Reactivity

  • Demand Characteristics: Cues in the research setting that suggest how participants are expected to behave, producing biased responses.

    • Reactivity: Participants altering their behavior because they know they are being observed.

    • Subject roles: Good (tries to confirm the hypothesis), Negativistic (tries to refute it), Apprehensive (answers in socially desirable ways), and Faithful (follows instructions scrupulously, ignoring any suspicions).

Selecting a Measurement Procedure

  • Review the existing literature when selecting a measurement procedure:

    • Ensure the chosen method addresses the research question and provides enough sensitivity to detect the expected effects.
