Unconditioned stimulus (US)
A cue that has some biological significance and in the absence of prior training naturally evokes a response.
Unconditioned response (UR)
The naturally occurring response to an unconditioned stimulus (US).
Conditioned stimulus (CS)
A cue that is paired with an unconditioned stimulus (US) and comes to elicit a conditioned response (CR).
Conditioned Response (CR)
The trained response to a conditioned stimulus (CS) in anticipation of the unconditioned stimulus (US) that it predicts.
Appetitive Conditioning
Conditioning in which the US is a positive event (such as food delivery).
Aversive Conditioning
Conditioning in which the US is a negative event (such as a shock or an airpuff to the eye).
Eyeblink conditioning
A classical conditioning procedure in which the US is an airpuff to the eye and the conditioned and unconditioned responses are eyeblinks.
Tolerance
A decrease in reaction to a drug so that larger doses are required to achieve the same effect.
Extinction
The process of reducing a learned response to a stimulus by ceasing to pair that stimulus with a reward or punishment.
Compound conditioning
Conditioning in which two or more cues are presented together, usually simultaneously.
Overshadowing
An effect seen in compound conditioning when a more salient cue within a compound acquires more association strength, and is thus more strongly conditioned, than does the less salient cue.
Blocking
A two-phase training paradigm in which prior training to one cue (CS1→US) blocks later learning of a second cue when the two are paired together in the second phase of the training (CS1 + CS2→US).
Prediction error
The difference between what was predicted and what actually occurred.
error-correction learning
a mathematical specification of the conditions for learning that holds that the degree to which an outcome is surprising modulates the amount of learning that takes place.
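As a minimal sketch of this error-correction idea (the learning rate, trial count, and US magnitude below are assumed values chosen only for illustration), the Rescorla-Wagner update for a single CS can be written in a few lines of Python:

    # Error-correction learning for a single CS, in the spirit of Rescorla-Wagner.
    # V is the CS's associative strength; lambda_us is the maximum strength
    # the US can support on a trial where it is present.
    learning_rate = 0.2     # assumed value standing in for the salience (alpha * beta) terms
    lambda_us = 1.0
    V = 0.0
    for trial in range(10):
        prediction_error = lambda_us - V          # surprise: what occurred minus what was predicted
        V += learning_rate * prediction_error     # learning is proportional to the error
        print(f"trial {trial + 1}: V = {V:.3f}")
    # Learning is fastest early on, when the error is large, and tapers off
    # as the US becomes fully predicted and the error approaches zero.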
Latent learning
Learning that takes place in the absence of any obvious reinforcement and remains undetected (latent) until it is explicitly demonstrated at a later stage; compare latent inhibition.
US modulation theory
Any of the theories of conditioning holding that the stimulus that enters into an association is determined by a change in how the US is processed.
CS modulation theory
Any of the theories of conditioning holding that the stimulus that enters into an association is determined by a change in how the CS is processed.
US, CS
Fill in the blanks: The Rescorla-Wagner model explains conditioning as modulation of the effectiveness of the _________ for learning, while the Mackintosh model explains conditioning through modulation of attention to the ________.
(a) Mackintosh (b) Rescorla-Wagner
From the examples below, which of these explanations of Connie's behavior would be best explained by the Rescorla-Wagner model? Which would be better explained by the Mackintosh model?
a. Connie loved the oatmeal raisin cookies so much, she devoted all of her attention to them. She didn't even bother tasting the chocolate chip cookies.
b. Connie was happy eating only the oatmeal raisin cookies, and she didn't feel any need to begin eating a new type of cookie.
Trial-level model
A theory of learning in which all of the cues that occur during a trial and all of the changes that result are considered a single event.
Delay conditioning
A conditioning procedure in which there is no temporal gap between the end of the CS and the beginning of the US, and in which the CS co-terminates with the US.
Trace conditioning
A conditioning procedure in which there is a temporal gap between the end of the CS and the beginning of the US.
Interstimulus interval (ISI)
The temporal gap between the onset of the CS and the onset of the US.
Conditioned taste aversion
A conditioning preparation in which a subject learns to avoid a taste that has been paired with an aversive outcome, usually nausea.
Purkinje cells
a type of large, drop-shaped, and densely branching neuron in the cerebellar cortex.
Interpositus nucleus
One of the cerebellar deep nuclei.
Inferior olive
A nucleus of cells with connections to the thalamus, cerebellum, and spinal cord.
response timing, smaller, poorly timed CRs
The Purkinje cells are involved in _______________. The evidence includes findings that lesions of the cerebellar cortex lead to ______ and _______________, and that mice with degeneration of Purkinje cells are slow to learn eyeblink conditioning.
The cerebellar cortex (containing the Purkinje cells) and the cerebellar deep nuclei; CS input arrives via the mossy fibers and US input via climbing fibers from the inferior olive; the two pathways converge in the Purkinje cells and in the interpositus nucleus
What are the two main cerebellar regions and the major sensory-input pathways to the cerebellum? Where do these two pathways in the cerebellum converge?
activity-dependent enhancement
An increase in the number of glutamate vesicles released from the sensory neuron onto the motor neuron, produced by paired training of the CS and US.
does not
The relationship between a US and a UR (does/does not) involve learning.
preparatory
A CR that precedes the US is often a _________ response.
CR, UR, timing
In eyeblink conditioning, the blink is both a _________ and a _________, although they differ in their _________.
faster
In most conditioning paradigms, extinction is ____________ than the original acquisition of the conditioned response.
compensatory, homeostasis
_________ conditioned responses in the body are most often the result of a biological mechanism called _________.
context or timing
Evidence that extinction is more than just unlearning comes primarily from studies that look at shifts in _________ between learning and testing.
first
When two cues compete to predict a US or other outcome, the one that is most strongly learned is usually the cue that is learned _________, as revealed in studies of blocking.
summed
The principle of cue competition in learning arises in the Rescorla-Wagner model, where the association weights of two cues are _________ to generate a prediction of the US.
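As a sketch of how this summation produces cue competition (the parameter values are assumptions, and the two phases mirror the blocking paradigm defined above), two cues can share a single prediction error:

    # Rescorla-Wagner with two cues: the US prediction is the sum of the weights
    # of all cues present on a trial, and each present cue is updated by the
    # same shared prediction error.
    learning_rate = 0.3     # assumed
    lambda_us = 1.0
    V = {"CS1": 0.0, "CS2": 0.0}

    def run_trial(cues_present):
        prediction = sum(V[cue] for cue in cues_present)
        error = lambda_us - prediction
        for cue in cues_present:
            V[cue] += learning_rate * error

    for _ in range(20):                  # phase 1: CS1 -> US
        run_trial(["CS1"])
    for _ in range(20):                  # phase 2: CS1 + CS2 -> US
        run_trial(["CS1", "CS2"])

    print(V)   # CS1 ends near 1.0; CS2 stays near 0 because CS1 already predicts the US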
context
The Rescorla-Wagner model's account of contingency learning depends on viewing the _________ as a conditionable CS.
prediction error
Latent inhibition cannot be explained by the Rescorla-Wagner model because during pre-exposure there is no _________.
Purkinje, interpositus
Beneath the _________ cells of the cerebellar cortex lie the cerebellar deep nuclei, including the _________ nucleus.
mossy fibers
CS information travels up to the deep nuclei of the cerebellum along axon tracts called the _________.
inferior olive
An airpuff US to the eye activates neurons in the _________, a structure in the lower part of the brainstem.
inhibit
Purkinje cells _________ the interpositus nucleus, the major output pathway driving the conditioned motor response.
poorly timed
Animals with lesions to the cerebellum show CRs, but they are _________.
hippocampus
Latent inhibition and other expressions of CS modulation are impaired or eliminated by lesions to the _________.
glutamate
The neural mechanism for habituation is thought to be a progressive decrease in the number of _________ neurotransmitter vesicles available in the sensory neuron's axon.
activity-dependent enhancement, synaptic plasticity
The _______ ________ ________ of the sensory neuron's release of glutamate onto the motor neuron is a presynaptic form of _________.
initiate, inhibits
Two proteins found inside neurons play critical regulatory roles in the synapse-creation process. The first protein, CREB-1, activates genes in the neuron's nucleus that _________ the growth of new synapses. The second protein, CREB-2, _________ the actions of CREB-1.
conditioned tolerance
Rats can be protected from overdose by the _________ that they learned during the administration of lower doses of heroin in the same setting.
Rescorla-Wagner
Appealing due to its simplicity, the _________ model has proven itself to be a starting point for many promising models of learning.
aversive conditioning
The learning that takes place in order to avoid or minimize the consequences of expected aversive events is known as _________.
absence, contingency
Rescorla demonstrated that conditioning to a tone stimulus depends not only on the frequency of tone-US pairings but also on the frequency of the US in the _________ of the tone. The results of his experiment imply that animals are sensitive to _________: the degree of correlation between a potential CS and US.
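One standard way to quantify contingency (a common notation in the literature, not necessarily the wording of these cards) is the difference ΔP = P(US | CS) − P(US | no CS). For example, if a shock follows the tone on 40% of tone trials but also occurs during 40% of matched no-tone periods, then ΔP = 0.40 − 0.40 = 0, and the tone acquires little conditioned responding despite many tone-shock pairings.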
latent inhibition
a conditioning paradigm in which prior exposure to a CS retards later learning of the CS-US association during acquisition training.
operant conditioning
the process whereby organisms learn to make responses in order to obtain or avoid certain outcomes; compare classical conditioning.
discriminative stimulus (SD)
a stimulus that signals whether a particular response will lead to a particular outcome.
operant conditioning
Is it classical or operant? - Since retiring, Jim spends a lot of time sitting on his back porch, watching the birds and whistling. One day, he scatters crumbs, and birds come and eat them. The next day, he sits and whistles and strews crumbs, and the birds return. After a few days, as soon as Jim sits outside and starts whistling, the birds arrive.
classical conditioning
Is it classical or operant? - Shevonne's dog Snoopy is afraid of thunder. Snoopy has learned that lightning always precedes thunder, so whenever Snoopy sees lightning, he runs and hides under the bed.
operant conditioning
Is it classical or operant? - Michael takes a new job close to home, and now he can walk to work. On the first morning, there are clouds in the sky. It starts to rain while Michael is walking to work, and he gets very wet. On the next morning, there are again clouds in the sky. Michael brings his umbrella along, just in case. When it rains, he stays dry. After that, Michael carries his umbrella to work anytime the sky looks cloudy.
classical conditioning
Is it classical or operant? - In Carlos's apartment building, whenever someone flushes the toilet, the shower water becomes scalding hot, causing him to flinch. Now, whenever he's in the shower and hears the noise of flushing, he automatically flinches, knowing he's about to feel the hot water.
free-operant paradigm
An operant conditioning paradigm in which the animal can operate the experimental apparatus "freely," responding to obtain reinforcement (or avoid punishment) when it chooses.
discrete trials paradigm
An operant conditioning paradigm in which the experimenter defines the beginning and end points.
shaping
an operant conditioning technique in which reinforcers guide behavior to closer and closer approximations of the desired behavior
chaining
an operant conditioning technique in which organisms are gradually trained to execute complicated sequences of discrete responses
reinforcer
A consequence of behavior that leads to increased likelihood of that behavior occurring again in the future.
primary reinforcer
a stimulus, such as food, water, sex, or sleep, that has innate biological value to the organism and can function as a reinforcer.
drive reduction theory
The theory that organisms have innate drives to obtain primary reinforcers and that learning is driven by the biological need to reduce those drives.
secondary reinforcer
A stimulus (such as money or tokens) that has no intrinsic biological value but that has been paired with primary reinforcers or that provides access to primary reinforcers.
token economy
An environment (such as a prison or schoolroom) in which tokens function the same way as money does in the outside world.
negative contrast
Situation in which an organism will respond less strongly to a less-preferred reinforcer that is provided in place of an expected preferred reinforcer than it would have if the less-preferred reinforcer had been provided all along.
punisher
A consequence of behavior that leads to decreased likelihood of that behavior occurring again in the future.
punishment
In operant conditioning, the process of providing outcomes for a behavior that decrease the probability of the behavior occurring again in the future.
differential reinforcement of alternative behaviors (DRA)
A method to decrease frequency of unwanted behaviors by instead reinforcing preferred alternative behaviors.
reinforcement schedule
A schedule determining how often reinforcement is delivered in an operant conditioning paradigm.
self-control
An organism's willingness to forego a small immediate reinforcement in favor of a large future reinforcement.
positive reinforcement
A type of operant conditioning in which the response causes a reinforcer to be "added" to the environment; over time, the response becomes more frequent.
positive punishment
A type of operant conditioning in which the response causes a punisher to be "added" to the environment; over time the response becomes less frequent.
negative reinforcement
A type of operant conditioning in which the response causes a punisher to be taken away, or "subtracted from," the environment; over time, the response becomes more frequent.
negative punishment
A type of operant conditioning in which the response causes a reinforcer to be taken away, or "subtracted from," the environment; over time, the response becomes less frequent
continuous reinforcement schedule
a reinforcement schedule in which every instance of the response is followed by the consequence
partial reinforcement schedule
A reinforcement schedule in which only some responses are reinforced
variable-ratio (VR) schedule
In operant conditioning, a reinforcement schedule in which a certain number of responses, on average, are required before a reinforcer is delivered; thus, VR 5 means, on average, every fifth response is reinforced.
variable-interval (VI) schedule
In operant conditioning, a reinforcement schedule in which the first response after an amount of time that varies around a set average is reinforced; thus, VI 1-m means that the first response after one minute, on average, is reinforced.
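As a rough sketch of how these two schedule types differ mechanically (the distributions and parameter values below are illustrative assumptions, not a standard laboratory implementation):

    import random

    # VR 5: each response independently has a 1-in-5 chance of paying off, so a
    # reinforcer arrives after a variable number of responses (5 on average).
    def vr5_reinforced():
        return random.random() < 1 / 5

    # VI 1-m: a reinforcer becomes available after a delay averaging 60 seconds
    # (modeled here as exponential); only the first response made after that
    # point is reinforced, no matter how many responses preceded it.
    def vi_1m_seconds_until_armed():
        return random.expovariate(1 / 60)

    print(sum(vr5_reinforced() for _ in range(1000)))   # roughly 200 of 1,000 responses reinforced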
FR (5 completed worksheets = 5 gold stars = 1 toy)
FR, FI, VR, or VI schedule? A first-grade teacher gives each student a gold star if the child completes that day's math worksheet; at the end of the week, five gold stars can be exchanged for a toy.
VR (20 phone calls = 2 sales on average, but he can't be sure exactly when the next sale will come; it might be immediately after a previous call resulted in a sale)
FR, FI, VR, or VI schedule? A good telemarketer scores an average of two sales for every 20 phone calls he makes, so he earns the most profit if he makes a lot of calls.
VI (can't be sure exactly when the table will be ready, so check back periodically; only the last response will be reinforced)
FR, FI, VR, or VI schedule? A couple go to their favorite restaurant on a Saturday night and are told that seating will be available in about 30 minutes. They wait in the bar and periodically return to the reception area to check whether a table is free.
FI (money can be obtained for the first donation after a 2-week interval)
FR, FI, VR, or VI schedule? Maria donates blood regularly at the local hospital; they pay her for her donation, and it makes her feel good to know she's helping people in need. However, due to hospital policy, donors have to wait at least 2 weeks between donations.
VI (can't be sure exactly when the next big wave will arrive, so the best policy is to hang around in anticipation, even immediately after catching the previous wave)
FR, FI, VR, or VI schedule? A surfer spends all available afternoons at his favorite beach, where he is sure of at least a couple of big waves every hour or so. After catching a big wave, he immediately paddles back out to await the next big one.
FR (one stick of gum = one negative reinforcement, which is removal of bad breath)
FR, FI, VR, or VI schedule? A man who likes to eat spicy foods for lunch always carries a pack of spearmint chewing gum so he can freshen his breath before returning to work in the afternoon.
FI (address occurs after a fixed 1-week interval)
FR, FI, VR, or VI schedule? The U.S. president gives a video address on Saturday mornings. He and his staff plan the central ideas of the address early each week, but revisions are made right up until the actual delivery of the speech if world events occur that need to be included.
VR (100 cards = 1 winner on average, but the very next card after a winner might also be a winner)
FR, FI, VR, or VI schedule? A woman likes to play bingo at her local church. The game is set up so that one in every 100 cards will be a winner, but it's impossible to know in advance which specific cards will win. To increase her chances of winning, the woman buys 10 cards each time she plays.
concurrent reinforcement schedule
A reinforcement schedule in which the organism can make any of several possible responses, each of which may lead to a different outcome reinforced according to a different reinforcement schedule.
matching law of choice behavior
The principle that an organism, given a choice between multiple responses, will make a particular response at a rate proportional to how often that response is reinforced relative to the other choices.
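As a worked example of the matching law (the reinforcement rates are assumed for illustration): if pecks on key A are reinforced about 30 times per hour and pecks on key B about 10 times per hour, the law predicts the pigeon will allocate roughly 30 / (30 + 10) = 75% of its pecks to key A and the remaining 25% to key B.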
behavioral economics
the study of how organisms allocate their time and resources among possible options
bliss point
In behavioral economics, the allocation of resources that maximizes subjective value or satisfaction.
Premack principle
the theory that the opportunity to perform a highly frequent behavior can reinforce a less frequent behavior; later refined as the response deprivation hypothesis.
response deprivation hypothesis
a refinement of the Premack principle stating that the opportunity to perform any behavior can be reinforcing if access to that behavior is restricted
basal ganglia
A brain region that lies at the base of the forebrain and includes the dorsal striatum
dorsal striatum
A region of the basal ganglia that is important for stimulus-response learning
orbitofrontal cortex
An area of the prefrontal cortex that is important for learning to predict the outcomes of particular responses