Radical, laboratory-based alternative to early 20th-century psychodynamic and trait approaches.
Focuses strictly on observable behavior; private events count only if they can, in principle, be publicly verified.
Rejects all hypothetical mental entities (ego, traits, drives, etc.).
Three philosophical pillars:
Determinism: behavior is lawfully determined, not a matter of free will.
Environmentalism: history of environmental stimuli (not anatomy) is key to prediction/control.
Radical behaviorism: extends behavioral analysis to private events (thoughts, feelings, memories), treating them as behavior rather than causes.
Born 1904 Susquehanna, Pennsylvania; comfortable Presbyterian home; lost faith in high school.
Early interests: music & literature → year-and-a-half "Dark Year" trying to be a writer in parents’ attic.
Identity Crisis #1 (young adult): failure in creative writing; blamed parents & literature; Eriksonian identity confusion.
Reads Watson & Pavlov → commits to behaviorism; Harvard grad program without any undergrad psych.
PhD 1931; National Research Council Fellowship; writes 30-year self-plan (avoid CNS speculation).
Junior Fellow, Harvard Society of Fellows 1933–36.
University of Minnesota 1936–45: marries Yvonne Blue; daughters Julie (1938) & Deborah (1944).
Publishes The Behavior of Organisms (1938).
Project Pigeon: pigeons trained to guide WWII missiles; dramatic 1944 demonstration, but no government funding.
Baby-Tender (Air-Crib) for Debbie; mixed public reaction; failed commercialization.
Mid-life Identity Crisis #2: financial dependence on father, unrealized verbal-behavior book.
Chair, Indiana University 1945–48; writes Walden Two (utopian novel) during vacation → therapeutic.
Returns to Harvard 1948; retires 1964 (continues research & writing as emeritus until death 1990).
Major late books: Science and Human Behavior (1953), Beyond Freedom and Dignity (1971), About Behaviorism (1974), three-volume autobiography (1976, 1979, 1983).
Honors: APA Distinguished Scientific Contribution Award (1958), National Medal of Science, and the sole APA Citation for Outstanding Lifetime Contribution to Psychology (1990).
E. L. Thorndike’s Law of Effect: behaviors followed by satisfiers are “stamped in”; punishments merely inhibit.
J. B. Watson’s methodological behaviorism: prediction & control of behavior via S-R habits; rejected introspection & consciousness.
Study behavior without recourse to needs, instincts, motives; hunger = unobservable fiction.
Aim: to interpret behavior through lawful relations, not to explain it by final causes.
Interpretation ≠ ultimate cause; generalize from simple lab findings to complex life.
Cumulative: scientific knowledge builds progressively on earlier findings.
Empirical attitude (reject authority, demand honesty, suspend judgment).
Search for lawful order (prediction–control–description cycle).
Classical conditioning: a neutral stimulus paired with an unconditioned stimulus (US) becomes a conditioned stimulus (CS) that elicits a conditioned response (CR).
Little Albert: white rat (CS) paired with loud bang (US) → fear generalizes to furry objects.
Operant conditioning: behavior is emitted & immediately reinforced → its probability increases.
Key formula: A \rightarrow B \rightarrow C (Antecedent–Behavior–Consequence).
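As a rough illustration of the A → B → C contingency, the toy Python sketch below lets an "agent" emit behaviors and strengthens whichever one is followed by reinforcement; the class name, behavior labels, and learning-rate update are illustrative assumptions, not Skinner's own formalism.

```python
import random

# Toy model of the A -> B -> C contingency (an illustration, not Skinner's formalism):
# an antecedent situation sets the occasion, a behavior is emitted, and a reinforcing
# consequence raises the future probability of that behavior.

class Operant:
    def __init__(self, behaviors, learning_rate=0.1):
        # Start with equal emission probabilities for every behavior in the repertoire.
        self.p = {b: 1.0 / len(behaviors) for b in behaviors}
        self.lr = learning_rate

    def emit(self):
        # Operant behavior is emitted probabilistically, not elicited by a stimulus.
        behaviors, weights = zip(*self.p.items())
        return random.choices(behaviors, weights=weights)[0]

    def consequence(self, behavior, reinforced):
        # Reinforcement strengthens the emitted behavior; non-reinforcement leaves it unchanged.
        if reinforced:
            self.p[behavior] += self.lr
            total = sum(self.p.values())
            self.p = {b: v / total for b, v in self.p.items()}  # renormalize to a distribution

pigeon = Operant(["peck_key", "turn_circle", "flap"])
for _ in range(500):
    b = pigeon.emit()
    pigeon.consequence(b, reinforced=(b == "peck_key"))  # only key pecks produce food
print(pigeon.p)  # probability of "peck_key" climbs toward 1 while the others fade
```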
Reinforce gross → closer → target responses (e.g., teaching severely disabled child to dress).
Works because behavior is continuous, not discrete.
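A minimal sketch of shaping by successive approximation, assuming a made-up numeric "response level" as the dimension being shaped: the criterion for reinforcement is tightened each time a closer approximation is reinforced, which only works because responses vary continuously around the current form.

```python
import random

# Toy sketch of shaping by successive approximation. "Response level" stands in for
# how close an emitted response is to the target form; the numeric criterion schedule
# is an assumption for illustration, not a prescribed training procedure.

def emit_response(current_level):
    # Responses vary continuously around the organism's current typical form.
    return current_level + random.gauss(0, 0.5)

current_level, target = 0.0, 10.0
criterion = 1.0  # at first, reinforce any gross approximation of the target

while current_level < target:
    response = emit_response(current_level)
    if response >= criterion:
        # Reinforce the closer approximation: it becomes the new baseline,
        # and the criterion is tightened toward the target form.
        current_level = response
        criterion = min(target, current_level + 0.5)

print(f"Target form reached after shaping: {current_level:.1f}")
```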
Differential reinforcement history → operant discrimination (appears cognitive but is learned).
Stimulus generalization: stimuli sharing elements with the trained stimulus evoke the same response (buying a ticket to an unfamiliar but similar rock band).
Dual effects of reinforcement: it strengthens the behavior and rewards the person, but reinforcement is defined by its effect on behavior, not by subjective feelings of reward.
Positive Reinforcement: add appetitive stimulus (food, approval).
Negative Reinforcement: remove aversive stimulus (shock, anxiety).
Reinforcement > punishment for predictable control.
Punishment Type I: add aversive stimulus (shock, spanking).
Punishment Type II: remove positive stimulus (fine, toy removal).
Three side effects of punishment: it only suppresses behavior (teaches nothing new), it conditions negative emotions, and its effects spread (generalized avoidance).
Conditioned reinforcers: acquire value via pairing with primary reinforcers (money, tokens).
Generalized reinforcers: linked to multiple primaries (attention, affection, approval, money).
Continuous vs. intermittent reinforcement; intermittent schedules are more resistant to extinction and make more efficient use of reinforcers.
Four basic intermittent schedules (a code sketch follows the list):
Fixed-Ratio (FR n): after every n responses.
Variable-Ratio (VR n): after n responses on average (slot machines).
Fixed-Interval (FI t): first response after fixed time (salary).
Variable-Interval (VI t): first response after variable time (unpredictable supervisor checks).
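The schedules differ only in the rule that decides when a response produces a reinforcer. The hedged Python sketch below spells out those four decision rules; the parameter values and the event-driven framing are assumptions made for illustration.

```python
import random

# Illustrative decision rules for the four basic intermittent schedules.
# Each respond() answers: "given this response (at this time), is a reinforcer delivered?"
# Parameter values and the event-driven framing are assumptions made for the sketch.

class FixedRatio:            # FR n: every nth response is reinforced
    def __init__(self, n): self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

class VariableRatio:         # VR n: reinforced after n responses on average (slot machine)
    def __init__(self, n): self.n = n
    def respond(self):
        return random.random() < 1.0 / self.n

class FixedInterval:         # FI t: first response after a fixed time t is reinforced (salary)
    def __init__(self, t): self.t, self.last = t, 0.0
    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True
        return False

class VariableInterval:      # VI t: the required interval varies around t (unpredictable checks)
    def __init__(self, t): self.t = t; self.wait = random.expovariate(1.0 / t); self.last = 0.0
    def respond(self, now):
        if now - self.last >= self.wait:
            self.last, self.wait = now, random.expovariate(1.0 / self.t)
            return True
        return False

vr = VariableRatio(5)
print(sum(vr.respond() for _ in range(1000)))  # roughly 1000 / 5 = 200 reinforcers
```

Note that ratio schedules deliver reinforcement according to the response count, while interval schedules depend on elapsed time since the last reinforcer, which is why the two families produce different response patterns.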
Extinction: withhold reinforcement → the response weakens.
Extinction is slower after intermittent training; continuous reinforcement (FR 1) extinguishes quickest, high VR schedules slowest (≥10,000 non-reinforced responses have been observed).
Human behavior is shaped at three levels:
Natural selection (species history).
Cultural evolution (social contingencies).
Individual reinforcement history.
Reflexes (pupillary, rooting) selected because they aided survival.
Not all remnants remain adaptive (overeating in food-abundant societies).
Cooperative practices survive because groups with them outlast others; individuals follow practices because they were reinforced to do so, not by deliberate choice.
Maladaptive cultural remnants: division of labor reducing intrinsic reinforcement, modern warfare.
Inner states (self-awareness, drives, emotions, and the like) exist, but they are behaviors or by-products of behavior, not causes of it.
Self-Awareness: observing own private events.
Drives: effects of deprivation/satiation; explanatory fictions until variables mapped.
Emotions: products of phylogenic & ontogenic reinforcement.
Purpose/Intention: labels for covert, felt conditions that accompany behavior; not causes of it.
Higher mental processes: covert operants (e.g., "searching one's memory" is covert behavior reinforced when recall succeeds).
Creativity: random variations selected by reinforcement; analogous to mutation & natural selection.
Unconscious behavior: punished or suppressed acts/thoughts; most behavior is “unconscious” because variables unobserved.
Dreams: covert symbolic behavior allowing wish-fulfillment without punishment.
Social behavior: only individuals behave; groups reinforce membership (protection, resources). Persistence due to reinforcement schedules (intermittent rewards, lack of escape options).
Four methods of social control:
Operant conditioning (reinforcement & punishment).
Describing contingencies (rules, threats, advertising).
Deprivation & satiation.
Physical restraint (prison, holding child’s hand near ravine).
Anecdote: Skinner covertly shaped Erich Fromm's arm-chopping gestures during an anti-behaviorism speech by reinforcing them with eye contact & smiles.
Self-control techniques:
Alter the environment with physical aids (e.g., carry extra cash to allow discretionary spending).
Remove discriminative stimuli (turn off TV while studying).
Arrange aversive escape contingencies (alarm clock across room).
Pharmacological manipulation (tranquilizers, alcohol).
Perform alternative responses (count wallpaper patterns to avoid guilt thoughts).
Ultimate control still environmental; “willpower” is fiction.
Three strategies for counteracting social control:
Escape (physical/psychological withdrawal) → loneliness, mistrust.
Revolt (vandalism, verbal abuse, overthrow).
Passive resistance (stubbornness, procrastination).
Inappropriate (unhealthy) behaviors:
Excessive vigor (responses once reinforced but now outdated).
Excessive restraint (avoidance of punished acts).
Blocking reality (perceptual defense).
Defective self-knowledge (boasting, Messiah complex) → negatively reinforced because it avoids aversive feelings of inadequacy.
Self-punishment (direct or via arranging punishment by others).
All therapists are controlling agents; effective when contingencies differ from punitive past.
Behavioral therapy: deliberate shaping through reinforcement; avoids “explanatory fictions.”
Techniques: systematic desensitization, token economies, contingency management, etc., grounded in conditioning principles (respondent or operant).
Reinforcer value can change; d-amphetamine ↑ nicotine’s reinforcing efficacy (Tidey et al., 2000).
Individual differences: only “responders” show stimulant-induced smoking escalation (Sigmon et al., 2003); likely due to dopamine sensitivity.
Reinforcement Sensitivity Theory (Gray & Pickering): anxiety ↔ punishment sensitivity; impulsivity ↔ reward sensitivity; interaction effects (Corr, 2002).
fMRI shows high Behavioral Activation scores → greater activation (ventral striatum, amygdala, substantia nigra, orbitofrontal cortex) to rewarding food images (Beaver et al., 2006).
Suggests personality-linked neural reactivity to reward; implications for obesity, addiction, therapy.
Generates research: very high.
Falsifiable: high (precise operational definitions).
Organizes knowledge: moderate (harder with insight, inspiration, etc.).
Guide to action: very high (education, clinical, organizational).
Internal consistency: high (precise, non-contradictory terms).
Parsimony: mixed (few constructs, but cumbersome language).
Concept of humanity: deterministic, causal (not teleological), largely unconscious, shaped more by social/environmental than by biological factors.
Optimistic about engineering better societies (Walden Two) yet realist about current cultural maladaptations.
Neither inherently good nor evil; behavior judged only by reinforcement history.
Emphasizes uniqueness via individual reinforcement & genetic variability.
Law of Effect (Thorndike): S \rightarrow R \rightarrow \text{Satisfier} → stamping in.
Operant Conditioning: A \rightarrow B \rightarrow C.
Positive Reinforcement (+S^R): add appetitive.
Negative Reinforcement (-S^{R-}): remove aversive.
Punishment Type I (+S^P): add aversive; Type II (-S^{P+}): remove appetitive.
Schedules: FR, VR, FI, VI.
Extinction: \text{Nonreinforcement} \Rightarrow \downarrow P(B).
Generalized Reinforcer: stimulus linked to multiple primary reinforcers (e.g., \text{Money}).
Counteracting strategies: Escape, Revolt, Passive Resistance.