Measuring Behavior Change
A Quick Review: Measuring Behavior Change
Core idea: Learning is reflected by measurable changes in a behavior across multiple dimensions.
Key task: Identify a learned behavior by observing systematic changes in the behavior’s characteristics.
Dimensions commonly tracked:
Errors: whether the number of errors decreases.
Topography: the form/shape of the behavior, which can become more variable or more consistent.
Intensity: the force or magnitude of the behavior (e.g., pressure, effort).
Speed: how fast the behavior is performed.
Latency: the delay between a cue and the start of the behavior.
Rate: the frequency or number of occurrences per unit time (e.g., letters per minute).
Fluency: smoothness and flow of performance, often related to correctness rate.
Note on rate vs. errors: Fluency ties rate to accuracy. If rate is held constant, a drop in fluency means more errors; if rate rises along with errors, the change may only reflect a speed-accuracy trade-off. Controlling rate helps separate the two (see the sketch at the end of this section).
Example framing: In a task like maze exploration, a higher rate and reduced errors indicate learning, while shorter latencies and more fluent responses indicate better performance.
Practical takeaway: Learning is not inherently good or bad; it is a change in behavior relative to prior performance.
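To make these dimensions concrete, here is a minimal sketch of how rate, latency, and fluency might be computed from raw session data; the trial records, field names, and numbers are assumptions for illustration, not data from the lecture.

    # Minimal sketch: computing rate, latency, and fluency from hypothetical trial data.
    # Field names and values are illustrative assumptions, not lecture data.
    trials = [
        {"latency_s": 2.4, "responses": 18, "errors": 5},   # early trial
        {"latency_s": 0.9, "responses": 22, "errors": 1},   # later trial
    ]
    session_minutes = 1.0  # assume each trial is observed for one minute

    for i, t in enumerate(trials, start=1):
        rate = t["responses"] / session_minutes                      # responses per minute
        fluency = (t["responses"] - t["errors"]) / session_minutes   # correct responses per minute
        print(f"Trial {i}: latency={t['latency_s']}s, rate={rate:.0f}/min, fluency={fluency:.0f}/min")

Note that fluency is simply rate corrected for errors, which is why holding rate constant makes fluency changes interpretable as accuracy changes.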
Data Metrics and Experimental Signals
Measured signals include: Letters per Minute, Intensity of Presses (in grams), Topography, Trial Progression (Trial 1 vs. Trial 15), Average Time Scores, Fluency (looks similar to frequency but corrected for incorrect responses), Speed, Latency, and Response Delay (in seconds).
Example figure axes observed in sessions: trials on the x-axis (1 through 16, and beyond in multi-session designs) and intensity in grams on the y-axis (roughly 13–57 g); topography and latency measures are plotted across trials in the same way.
Specific task measures include:
“Number of re-entry errors” in an 8-arm radial maze (categories observed: 0, 1–3, 4–7, 8–10; training sessions may shift the distribution; see the binning sketch at the end of this section).
“Average Time Scores” and speed metrics across trials.
Fluency and Letters per Minute as proxies for cognitive and motor fluency.
Conceptual note: Some graphs may show composite variables like “Looks like Frequency but adjusted for incorrect responses,” highlighting the idea that errors are accounted for in rate-like measures.
Observational insight: Data often compare early trials (Trial 1) with later trials (Trial 15) to infer learning trajectories.
A note on experimental design visuals: Phase-like progress (e.g., A/B or Training vs. Control) may be embedded in figures showing changes across trials, days, or conditions.
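As a small illustration of the radial-maze measure above, the sketch below bins per-session re-entry errors into the categories shown on the figure (0, 1–3, 4–7, 8–10); the per-session error counts are invented.

    # Sketch: binning re-entry errors from an 8-arm radial maze into the figure's
    # categories (0, 1-3, 4-7, 8-10). The per-session error counts are invented.
    from collections import Counter

    def bin_errors(n):
        if n == 0:
            return "0"
        if n <= 3:
            return "1-3"
        if n <= 7:
            return "4-7"
        return "8-10"

    before_training = [6, 8, 4, 9, 7, 5]   # re-entry errors per session
    after_training = [0, 2, 1, 3, 0, 1]

    print("before:", Counter(bin_errors(n) for n in before_training))
    print("after: ", Counter(bin_errors(n) for n in after_training))

A shift of the distribution toward the lower bins across sessions is the kind of change the notes describe as evidence of learning.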
Identifying a Learned Behavior: Demonstrative Changes
A learned behavior can be demonstrated by measuring at least one of the following changes:
A DECREASE in Errors
More VARIABLE Topography
DECREASED Intensity
SLOWER Speed
LONGER Latency
INCREASED Rate
DECREASED Fluency
Interpretation notes:
Decreased errors strongly signals improved accuracy.
Increased variability in topography can indicate exploratory refinement or strategy change.
Decreased intensity or slower speed may reflect task-optimization or energy conservation in learning.
Longer latency can reflect more deliberation or processing time prior to response.
Increased rate typically signals quicker responding or higher throughput when appropriate.
Decreased fluency, if rate remains high, means a larger share of responses are errors and can imply a more effortful, less accurate performance.
Important caveat: When rate is held constant, decreased fluency necessarily reflects increased errors; when rate is free to vary, a fluency change may only reflect pacing. Holding rate constant therefore helps distinguish changes in accuracy from changes in pacing (see the worked numbers below).
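A quick numeric check of this caveat (the rate and fluency values are assumed, not from the lecture):

    # With rate held constant, a drop in fluency implies a rise in errors,
    # because fluency = correct responses per minute = rate minus error rate.
    rate = 60                                # total responses per minute, held constant
    fluency_before, fluency_after = 50, 40   # correct responses per minute

    errors_before = rate - fluency_before    # 10 errors per minute
    errors_after = rate - fluency_after      # 20 errors per minute
    print(errors_before, errors_after)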
Review of Exercise from Last Class: Concrete Examples
A DECREASE in Errors:
Playing an instrument
Parallel parking
More VARIED Topography:
Making your Mom’s ‘recipe’
Explaining a complex idea
DECREASED Intensity:
“Work Smarter, Not Harder”
Making friends
Learned helplessness (as a cautionary note in interpretation)
SLOWER Speed:
Cleaning
Finishing the job your boss assigned you
LONGER Latency:
Responding to the question “What’s wrong?”
Answering questions on exams with tricky wording
INCREASED Rate:
Studying with flashcards
Class attendance
DECREASED Fluency:
If rate is held constant, decreased fluency amounts to increased errors; if rate is not held constant, fluency can drop simply because responding slows, without any change in errors.
Summary principles:
Remember learning is reflected by a change in behavior.
Learned behavior is not inherently good or bad; it is simply a change relative to prior performance.
The Study of Learning & Behavior: Part 2
Focus: Sources of the data we study in learning and behavior research.
Sources of Data
Types covered:
Anecdote: First- or second-hand report of personal experience.
Case Study: Detailed study and description of a single case (often clinical).
Descriptive Study: Descriptive data from a group to describe its members.
Experiment: Measures the effects of one or more independent variables on one or more dependent variables.
Anecdotes
Definition: First- or second-hand reports of personal experience (e.g., “my cats come running when they hear the pop-top on the can”).
Advantages:
Can inspire ideas for case studies, descriptive studies, or experiments.
Limitations:
“The plural of anecdote is not data” (Raymond Wolfinger).
Anecdotes are not systematic and are open to many interpretations.
Case Studies
Definition: Detailed study and description of a single case; common in clinical settings (e.g., self-injurious behavior, music).
Advantages:
More systematic than anecdotes.
Limitations:
Time-intensive.
Generalizations about behavior are weak.
Cannot establish causation.
Much data come from reports rather than direct observations.
Descriptive Studies
Definition: Descriptive study describes a group by obtaining data from its members (e.g., which group solves a riddle better: 6-year-olds vs. 10-year-olds).
Advantages:
More explanatory power than case studies; larger samples.
Disadvantages:
Can suggest hypotheses but cannot test them.
The “True” Experiment
Core definition: A research design that measures the effects of one or more Independent Variables (IV) on one or more Dependent Variables (DV).
Key terms:
Independent variable (IV): The variable the researcher controls; expected to affect the DV.
Dependent variable (DV): The outcome measured; expected to vary with the IV.
Formal representation:
IV affects DV: DV = f(IV) + ε, i.e., the DV is a function of the IV plus unexplained variation.
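One way to write this out concretely, assuming purely for illustration a linear relationship (not specified in the lecture), is:

    DV_i = \beta_0 + \beta_1 \, IV_i + \epsilon_i

where \beta_1 captures the effect of the IV on the DV and \epsilon_i is the unexplained variation for subject i.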
Between-Subjects Experiments
Design concept: The IV varies across two or more groups of subjects (between-subjects or between-treatment/group design).
Terms:
Experimental Group: Exposed to the IV.
Control Group: Not exposed to the IV.
Random Assignment OR Matched Sampling: Methods to create equivalent groups.
Example: Teaching 2nd graders outside (Experimental Group) vs. in a classroom (Control Group) on basic academic abilities.
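A minimal sketch of this between-subjects setup, with random assignment and a simple comparison of group means; the subjects and scores are simulated, not real study data.

    # Sketch: random assignment to an outdoor (experimental) vs. classroom (control)
    # condition, then comparison of mean post-test scores. All numbers are simulated.
    import random

    students = list(range(20))
    random.shuffle(students)                 # random assignment
    experimental, control = students[:10], students[10:]

    scores = {s: random.gauss(75, 10) for s in students}   # hypothetical post-test scores

    mean_exp = sum(scores[s] for s in experimental) / len(experimental)
    mean_ctl = sum(scores[s] for s in control) / len(control)
    print(f"outdoor mean: {mean_exp:.1f}, classroom mean: {mean_ctl:.1f}")

Because the scores here are generated the same way for both groups, any difference in means is sampling noise, which is exactly what random assignment is meant to ensure at baseline.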
Matched Subjects Design
Definition: Subjects are matched on variables that might affect the DV, then split into two or more groups.
Rationale: Ensures pre-existing differences are balanced across groups.
Illustrative example: Sleep-deprived vs. typical sleep groups; match on average sleep duration and processing speed to control confounds.
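A sketch of matched assignment, assuming for simplicity that subjects are matched on a single variable (average nightly sleep, in hours); the subject list is invented.

    # Sketch: rank subjects on the matching variable, then split each adjacent
    # pair across the two groups (random assignment within each matched pair).
    import random

    subjects = [("S1", 7.9), ("S2", 6.1), ("S3", 8.4), ("S4", 5.8), ("S5", 7.2), ("S6", 6.6)]
    subjects.sort(key=lambda s: s[1])        # order by average sleep duration

    group_a, group_b = [], []
    for i in range(0, len(subjects), 2):
        pair = list(subjects[i:i + 2])
        random.shuffle(pair)                 # who lands in which group is random
        group_a.append(pair[0])
        if len(pair) > 1:
            group_b.append(pair[1])

    print("Group A:", group_a)
    print("Group B:", group_b)

Matching on a second variable such as processing speed would work the same way, using a combined ranking or a pairwise similarity measure.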
Within-Subjects Experiments
Design concept: The IV varies at different times for the same subject (repeated measures).
Typical use: Group of subjects; can also be a single-subject design (N = 1).
Example: The effect of a Learning Unit on students’ knowledge of a subject.
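A minimal sketch of the repeated-measures comparison for the Learning Unit example; the pre/post scores are invented.

    # Sketch: each student's knowledge score before and after the Learning Unit,
    # compared within subject (repeated measures). Scores are invented.
    pre = [54, 61, 47, 70, 58]
    post = [72, 75, 60, 84, 71]

    diffs = [after - before for before, after in zip(pre, post)]
    print("per-student change:", diffs)
    print("mean change:", sum(diffs) / len(diffs))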
ABA Reversal Design (Single Subject)
Type: A within-subjects design in which behavior is observed before (A) and during (B) an experimental manipulation.
Reversal: The original condition (A) is restored, sometimes followed again by the manipulation (B).
Purpose: To determine whether observed changes are due to the manipulation or other factors (medication vs. placebo distinction).
Notation: A-B-A, extended to A-B-A-B when the manipulation is reintroduced (the reversal tests whether the manipulation caused the change).
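A small sketch of how the phases of a reversal design might be summarized; the phase observations are invented and the design shown is A-B-A.

    # Sketch: mean level of the target behavior in each phase of an A-B-A reversal.
    # Observations are invented for illustration.
    phases = [
        ("A (baseline)",     [9, 10, 8, 11]),
        ("B (manipulation)", [4, 3, 5, 2]),
        ("A (reversal)",     [8, 9, 10, 9]),
    ]

    for label, observations in phases:
        print(f"{label}: mean = {sum(observations) / len(observations):.1f}")

If the behavior returns toward its baseline level when A is restored, that strengthens the case that the manipulation, rather than some other factor, produced the change.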
Animal Research and Human Learning
Benefits:
Animals allow greater experimental control.
Provides essential insights into learning and behavior that can inform human research.
Criticisms:
Ethical concerns: animal rights objections to using animals in research.
Some argue simulations or computer models could replace animal work, though this is not always feasible or sufficient.
Takeaway: A balanced approach uses both animal and human studies to understand learning and behavior.
Preview topic: Pavlovian (classical) conditioning will be explored in the next session.