Textbook terms

  • Chapter 5: Instrumental Conditioning: Foundations [1-3]

  • Instrumental Conditioning: A type of learning where behavior is modified by the consequences it produces.

  • Discrete-Trial Procedures: Training procedures in which each response opportunity is a separate trial, typically ending when the subject performs the instrumental response (e.g., a run down a maze).

  • Free-Operant Procedures: Training procedures where the subject can respond at any time.

  • Positive Reinforcement: When a response produces an appetitive stimulus, increasing the likelihood of that response.

  • Punishment: When a response produces an aversive stimulus, decreasing the likelihood of that response.

  • Negative Reinforcement: When a response removes an aversive stimulus, increasing the likelihood of that response.

  • Omission Training (Negative Punishment): When a response removes or prevents an appetitive stimulus, decreasing the likelihood of that response.

  • Belongingness: The concept that the effectiveness of a reinforcer is determined by how well it aligns with the response it is meant to modify.

  • Instinctive Drift: The tendency for animals to revert to their innate behaviors even when those behaviors interfere with learning.

  • Quantity and Quality of the Reinforcer: These factors influence the effectiveness of instrumental conditioning, with larger or more preferred reinforcers generally leading to better performance.

  • Behavioral Contrast: A change in responding to a reinforcer in one situation produced by experience with a different (better or worse) reinforcer in another situation.

  • Response-Reinforcer Relation: This is a critical factor in instrumental conditioning, encompassing temporal contiguity, contingency, and other factors that impact learning.

  • Learned Helplessness: A phenomenon in which prolonged exposure to uncontrollable aversive events leads to a sense of helplessness and reduced motivation to escape or avoid those events.

  • Chapter 6: Schedules of Reinforcement and Choice Behavior [3-7]

  • Schedule of Reinforcement: A rule that determines which occurrence of a response is followed by the reinforcer.

  • Intermittent Reinforcement: Schedules where the reinforcer follows some responses, but not others, leading to different patterns of responding.

  • Ratio Schedule: Reinforcement is based on the number of responses made.

  • Fixed-Ratio (FR) Schedule: A fixed number of responses is required to produce the reinforcer. This schedule produces a high rate of responding with a characteristic post-reinforcement pause.

  • Variable-Ratio (VR) Schedule: The number of responses required for reinforcement varies unpredictably, resulting in a high and steady rate of responding.

  • Interval Schedule: Reinforcement is based on time since the last reinforcer.

  • Fixed-Interval (FI) Schedule: A fixed amount of time must pass since the last reinforcer before a response will be reinforced. This schedule produces a scalloped pattern of responding, with an increase in responding near the end of the interval.

  • Variable-Interval (VI) Schedule: Reinforcement is available after a variable amount of time has passed, leading to a moderate and steady rate of responding.
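The ratio and interval rules above can be sketched as simple reinforcement checks (a minimal illustration, not from the textbook; the function names are my own):

```python
def fixed_ratio(n):
    """FR-n: reinforce every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def fixed_interval(t):
    """FI-t: reinforce the first response made at or after t time
    units since the last reinforcer; earlier responses earn nothing."""
    last = 0.0
    def respond(now):
        nonlocal last
        if now - last >= t:
            last = now
            return True
        return False
    return respond

fr3 = fixed_ratio(3)
print([fr3() for _ in range(6)])              # [False, False, True, False, False, True]

fi10 = fixed_interval(10)
print([fi10(s) for s in (3, 8, 12, 15, 23)])  # [False, False, True, False, True]
```

Variable (VR/VI) versions would draw the required count or interval at random around a mean instead of holding it fixed, which is what flattens the post-reinforcement pause and scallop.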

  • Concurrent Schedule: Two or more schedules of reinforcement operate independently and simultaneously for different responses.

  • Choice Behavior: The selection of one response alternative over others when multiple options are available.

  • Matching Law: Describes how organisms distribute their responses among available options, generally matching their response allocation to the relative rates of reinforcement for each option.
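In its standard quantitative form (Herrnstein's equation, with B the rate of each response and R the rate of reinforcement earned on each alternative):

```latex
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```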

  • Molecular vs. Molar Theories of Matching: Molecular theories focus on local reinforcement patterns, while molar theories focus on overall reinforcement rates over longer time periods.

  • Concurrent-Chain Schedule: A schedule with two stages: a choice link, in which the subject chooses between different reinforcement schedules, and a terminal link, in which the chosen schedule is in effect.

  • Self-Control: The ability to choose a larger, delayed reward over a smaller, immediate reward.

  • Delay Discounting: The process of devaluing a reward because it is delayed in time.
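Delay discounting is commonly modeled with Mazur's hyperbolic equation, V = A / (1 + kD). A minimal sketch (the discount parameter k = 0.1 is an arbitrary illustrative value):

```python
def hyperbolic_value(amount, delay, k=0.1):
    """Subjective value of a reward of size `amount` received after
    `delay`, per the hyperbolic model V = A / (1 + k * D)."""
    return amount / (1 + k * delay)

# An immediate reward keeps its face value; delay cuts it down.
print(hyperbolic_value(10, 0))    # 10.0
print(hyperbolic_value(10, 10))   # 5.0

# Self-control as a choice: the larger-later reward is preferred here
# because its discounted value still exceeds the smaller-sooner one.
print(hyperbolic_value(10, 10) > hyperbolic_value(4, 0))  # True
```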

  • Chapter 7: Instrumental Conditioning: Motivational Mechanisms [8-14]

  • Associative Structure of Instrumental Conditioning: Explanations for why instrumental responses occur, focusing on the role of associations between stimuli (S), responses (R), and outcomes (O).

  • S-R Association: An association between a stimulus and a response, often considered habitual.

  • S-O Association: An association between a stimulus and an outcome, representing the expectation that the outcome will follow the stimulus.

  • R-O Association: An association between a response and an outcome, leading to the expectation that the outcome will occur if the response is made.

  • Reinforcer Devaluation: A technique to assess the role of R-O associations, by reducing the value of the reinforcer and observing its impact on the instrumental response.

  • Premack Principle: States that a more probable behavior can reinforce a less probable behavior.

  • Response Deprivation Hypothesis: Extends the Premack Principle, stating that even a low-probability behavior can serve as a reinforcer if access to it is restricted below its baseline level.

  • Response Allocation: A framework for understanding how organisms distribute their responses across various activities.

  • Behavioral Bliss Point: The preferred distribution of responses before an instrumental conditioning procedure is introduced.

  • Behavioral Economics: The application of economic principles to understand behavior, particularly in terms of how individuals allocate their resources (time, effort) to obtain desired outcomes.

  • Demand Curve: Shows the relationship between the price of a commodity and the amount consumed.

  • Elasticity of Demand: Describes the sensitivity of consumption to changes in price.
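Quantitatively, elasticity of demand is the ratio of the proportional change in consumption to the proportional change in price:

```latex
E = \frac{\Delta Q / Q}{\Delta P / P}
```

where Q is the amount consumed and P is the price; demand is elastic when |E| > 1 and inelastic when |E| < 1.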

  • Consumer Surplus: The difference between what a consumer is willing to pay for a commodity and what they actually pay.

  • Substitutability: The degree to which different commodities can satisfy the same need or want.

  • Chapter 8: Stimulus Control of Behavior [15-19]

  • Stimulus Control: When the probability of a behavior is influenced by the presence or absence of specific stimuli.

  • Stimulus Discrimination: Learning to respond differently to different stimuli.

  • Stimulus Generalization: The tendency to respond to stimuli that are similar to the training stimulus, reflecting the transfer of learning.

  • Stimulus Generalization Gradient: A graphical representation of the strength of responding to stimuli that vary in their similarity to the training stimulus.

  • Sensory Capacity and Orientation: The ability of an organism to perceive and attend to relevant stimuli.

  • Overshadowing: A phenomenon in which one element of a compound stimulus gains stronger control over the response, overshadowing the other element.

  • Stimulus Salience: The noticeability or attention-grabbing qualities of a stimulus.

  • Type of Reinforcement: The type of reinforcer used can influence stimulus control, with certain reinforcers being more effective for specific behaviors.

  • Stimulus Elements vs. Configural Cues: Debate about whether organisms learn about individual stimulus features (elements) or the overall configuration of the stimuli.

  • Spence's Theory of Discrimination Learning: This theory explains stimulus control in terms of excitatory and inhibitory gradients.

  • Peak Shift: A phenomenon in which the peak of the generalization gradient shifts away from the S+ (reinforced stimulus) in the direction opposite the S- (nonreinforced stimulus).

  • Contextual Cues: Incidental stimuli present during learning that can influence behavior.

  • Conditional Relation: A relationship between two stimuli, where the significance of one stimulus depends on the presence of another stimulus.

  • Chapter 9: Extinction of Conditioned Behavior [2, 20-37]

  • Extinction: The decline in a conditioned response when the conditioned stimulus (CS) is presented without the unconditioned stimulus (US) in classical conditioning or when the response is no longer followed by the reinforcer in instrumental conditioning.

  • Spontaneous Recovery: The return of an extinguished response after a period of rest, indicating that extinction does not completely erase the original learning.

  • Renewal: The reappearance of an extinguished response when the context is changed, suggesting that extinction is context-specific.

  • Reinstatement: The return of an extinguished response when the subject is re-exposed to the unconditioned stimulus (US) or the reinforcer.

  • Resurgence: The reappearance of an extinguished response when another response is extinguished, highlighting the dynamic nature of behavior and how extinction of one behavior can influence others.

  • Enhancing Extinction: Techniques aimed at improving the effectiveness of extinction, including increasing the number and spacing of extinction trials, conducting extinction in multiple contexts, and presenting extinction reminder cues.

  • Paradoxical Reward Effects: Sometimes, increasing the magnitude of the reinforcer during acquisition leads to faster extinction.

  • Partial Reinforcement Extinction Effect (PREE): Extinction is slower after partial reinforcement than after continuous reinforcement, possibly due to difficulty in detecting the change in reinforcement contingencies.

  • Resistance to Change (Behavioral Momentum): Behavior that has been maintained on a schedule of reinforcement is more resistant to disruption by other factors, suggesting that a history of reinforcement can create a kind of inertia in behavior.
