Instructor: Dr. Jennifer Williams
Lesson Two
Psychological Terms, Part 2
I hope you’ve mastered all the topics in Lesson 1, because
we’ll be covering more psychology terms in this lesson. If you don’t feel like you understand those
concepts, go back and review the material.
If you’ve never had a psychology course, this information can be
confusing. Fortunately, once you
understand some basic psychology, the mysteries of horse behavior and training
don’t seem quite so mysterious.
Previous Lesson Summary
As horse professionals, we’re most concerned with the
psychological theory of behaviorism. Behaviorism
is the study of stimuli and responses, and it focuses on behaviors you can
observe. Behaviorists aren’t concerned
with internal thought processes or feelings.
Operant conditioning is one behaviorist theory of learning. At its most basic level, operant conditioning
happens because an animal receives some type of reinforcer when he acts
on his environment. A discriminative
stimulus (what we often call a ‘cue’ as horse professionals) influences the
horse’s behavior because the desired behavior is reinforced in the presence of
the discriminative stimulus. We often
use a combination of discriminative stimuli, reinforcers, and schedules of
reinforcement to train horses to perform tasks or to stop performing
undesired behaviors. Additionally, we can use shaping, in which we
reinforce closer and closer approximations of a desired behavior, to train our
horses to do complex tasks.
You’ve probably been using operant conditioning to train
horses without even knowing it!
Example:
If you want to teach a trail horse to cross an obstacle, shaping is a
great tool. You ask the horse to face the obstacle: let’s say a creek full of running water. You squeeze with your legs to ask him to walk
forward. Because he’s hesitant, you
reward the first step towards the water by petting him and saying ‘good
boy’. You then ask him to move forward
again by squeezing your legs and reward him when he takes two steps forward by
petting him and saying ‘good boy’. Each time you ask him to move forward, you
reward him only when he steps closer to the water. Eventually you only reward him when he puts
one foot in the water, then two feet in the water, and so on. Finally you only reward him when he quietly
walks into the water and crosses the creek.
Can you identify:
…the discriminative stimulus?
…the type of conditioning you used?
…the schedule of reinforcement?
…the type of reinforcer?
…the reinforcer?
…the ultimate desired response?
Scroll down for the answers…
Discriminative stimulus: squeezing his sides with your legs
Type of conditioning: operant conditioning with shaping
Schedule of reinforcement: continuous
Type of reinforcer: positive reinforcement
Reinforcer: petting the horse and saying ‘good boy’
Ultimate desired response: crossing the creek quietly
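Shaping is essentially a small algorithm: reinforce a response only when it is a closer approximation of the goal than anything the horse has offered before. Here is a minimal sketch of that idea in Python. The function name and the numeric "progress" scores (e.g., steps taken toward the creek) are my own illustrative inventions, not anything from the lesson:

```python
# Illustrative sketch of shaping: reward only closer-and-closer
# approximations of the desired behavior.

def shape(attempts):
    """Reinforce an attempt only when it beats the best approximation so far.

    `attempts` is a sequence of numbers representing how close each try
    comes to the goal; returns the list of attempts that earned a reward.
    """
    best = None
    reinforced = []
    for progress in attempts:
        # Reward only progress beyond the previous best approximation.
        if best is None or progress > best:
            reinforced.append(progress)   # pet the horse, say 'good boy'
            best = progress
    return reinforced

# The horse sometimes repeats or regresses; only new progress is rewarded.
print(shape([1, 1, 2, 2, 3, 5]))  # [1, 2, 3, 5]
```

Note that repeated or regressed attempts earn nothing, which matches the example: once the horse has taken two steps, petting him for a single step would stall the shaping process.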
Classical Conditioning
Operant conditioning focuses on voluntary responses, while classical
conditioning focuses on involuntary responses.
An involuntary response is one that an animal has no
control over. For instance, if you see
some tasty food when you are hungry, your stomach probably rumbles. Your stomach rumbling is an involuntary
response because you cannot make your stomach rumble nor can you stop it from
rumbling when you are hungry. You have
no control over it.
Although you may not have heard the term classical conditioning
before, you’ve probably seen it in action.
It was first identified by Ivan Pavlov around the turn of the 20th century when he was
conducting physiology research. Pavlov
noticed that before he presented his study dogs with food, they drooled. He then tried ringing a bell before
presenting the dogs with food. He found
that over time, his dogs started drooling as soon as they heard the bell and
before they saw or smelled the food.
Thus was born the theory of classical conditioning.
Classical conditioning works by pairing a conditioned
stimulus (in Pavlov’s case, the bell) with an unconditioned stimulus (food) to
elicit a response. Classical
conditioning first starts when an unconditioned stimulus causes an unconditioned
response. The unconditioned response
is an involuntary response: one that
happens without conscious thought or decision on the horse’s part. This can include drooling, fear, pain,
pleasure, feelings of comfort or safety, etc.
In Pavlov’s experiments, the unconditioned stimulus was the food and the
unconditioned response was drooling.
To use classical conditioning, you first identify an
unconditioned response that you want to influence. You then pair a neutral stimulus with
the unconditioned stimulus. A neutral stimulus
is one which doesn’t initially provoke a response. The bell in Pavlov’s experiments was the
neutral stimulus. If you pair the
neutral stimulus and the unconditioned stimulus several times, eventually the
neutral stimulus becomes a conditioned stimulus. A conditioned stimulus is one which provokes
a response, called the conditioned response. Originally behaviorists thought that the
conditioned stimulus replaced the unconditioned stimulus, but many now believe
that the conditioned stimulus lets the animal predict the unconditioned
stimulus. Because of this, even after
the conditioned stimulus is taught, you must occasionally pair the conditioned
stimulus and unconditioned stimulus if you want the conditioned stimulus to
continue evoking the conditioned response.
One unintentional application of classical conditioning
involves the creation of a phobia.
Although no one sets out to create a phobia, phobias often develop
because something painful or startling causes anxiety or fear. When the painful thing is paired with a
neutral stimulus, that stimulus can become a conditioned stimulus that causes
anxiety or fear as well. We tend to think of phobias as something only people
experience, but I’ve met phobic horses, too.
Example:
Chief was scared of the veterinarian’s truck. When the truck pulled up, Chief ran around
his paddock or tried to bolt away from his handler. This vet-truck phobia developed because one
time the veterinarian drove up in his truck (neutral stimulus) to treat a
severe cut on Chief’s leg. The
treatment hurt (unconditioned stimulus), and Chief was scared (unconditioned
response). The truck then became a
conditioned stimulus that produced the conditioned response of fear that led to
Chief running away whenever he saw the veterinarian’s truck.
The conditioned stimulus is established during acquisition. At first, the conditioned stimulus doesn’t
have the power to predict the unconditioned stimulus. It gains that ability through repeated
pairings. When the unconditioned response
is very strong, it may take only one pairing of the conditioned stimulus and
unconditioned stimulus to create an association between the two. If the unconditioned response isn’t as
strong, it may take several pairings.
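The idea that a stronger unconditioned response needs fewer pairings can be sketched as a toy model. This is my own illustration, not a formula from the lesson: each pairing closes part of the remaining gap in association strength, scaled by how strong the unconditioned response is.

```python
# Illustrative toy model of acquisition: association strength grows with
# each CS-US pairing, faster when the unconditioned response is strong.
# A traumatic US can cross the threshold in one pairing; a weak US
# (like mild pleasure) takes several.

def pairings_needed(us_strength, threshold=0.9, rate=1.0):
    """Count pairings until association strength reaches `threshold`.

    Each pairing closes a fraction of the remaining gap, scaled by the
    strength (0..1) of the unconditioned response.
    """
    strength, count = 0.0, 0
    while strength < threshold:
        strength += rate * us_strength * (1.0 - strength)
        count += 1
    return count

print(pairings_needed(us_strength=0.95))  # strong US (pain): 1 pairing
print(pairings_needed(us_strength=0.30))  # weak US (drooling): 7 pairings
```

With these made-up numbers, the "farrier" scenario (a very strong unconditioned response) conditions in a single pairing, while the "feed bucket" scenario takes several, matching the examples below.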
Example:
Phobias often develop in just one pairing of the conditioned stimulus
and unconditioned stimulus. A horse
misbehaves with a farrier, and the farrier hits the horse several times with
his rasp. Because this horse is very
sensitive, being hit causes him a lot of pain.
In this case, being hit is an unconditioned stimulus and fear is an
unconditioned response. In the future,
whenever the farrier arrives the horse feels fearful and trembles. The farrier is now the conditioned stimulus
and the horse trembling is the conditioned response. The initial event was so traumatic for the
horse that it took just one pairing of farrier and pain to create the
conditioned response.
Example:
I have an older horse who drools at feeding time. The sight of food was the unconditioned
stimulus and drooling was the unconditioned response. After several weeks of having his feed
delivered in a small, black bucket, Magic began drooling at the sight of the
bucket. The bucket became the
conditioned stimulus, and drooling became his conditioned response. Because drooling is not a powerful or strong
unconditioned response, it took several pairings of the bucket and the food
before Magic began to predict the appearance of food when he saw the black
bucket.
The relationship between the conditioned stimulus, unconditioned
stimulus, conditioned response, and unconditioned response forms the basis of
classical conditioning theory. Over the
years, however, the theory has grown to include additional concepts that help
us better understand how to apply classical conditioning to horse training.
The time that elapses between the conditioned stimulus and the
unconditioned stimulus is known as latency. Classical conditioning often occurs only when
the conditioned stimulus comes before the unconditioned stimulus. In some scenarios, the conditioned stimulus
starts and then the unconditioned stimulus starts a few seconds later so that
the two stimuli overlap. This is called delay
conditioning.
Example: The example above of the older
horse who was classically conditioned to drool when he saw a small, black
bucket shows how delay conditioning is applied.
He sees the feed bucket, and a few seconds later he smells or sees the
feed as well as the bucket.
Trace conditioning is an arrangement in which the conditioned stimulus
begins and ends before the unconditioned stimulus occurs, leaving a gap
between the two. In trace conditioning, a shortened latency time achieves
the best results.
Example:
The horse with the vet-truck phobia saw the truck (conditioned
stimulus). He was then taken into the
barn where he couldn’t see the truck, but the veterinarian began treating his
wound which caused him pain (unconditioned stimulus). He then associated the truck (conditioned
stimulus) with pain (unconditioned stimulus), and the truck began to elicit a
conditioned response (fear which caused the horse to run away). If the horse’s owner and veterinarian had
chatted for a while after the horse got into the barn and before the
veterinarian started treating the horse’s wound, the horse might not have
associated the truck with the wound-treatment pain. The time between the conditioned and
unconditioned stimuli would have been too long.
Stimulus generalization occurs when stimuli similar to a conditioned stimulus
elicit the conditioned response. Stimuli
that are very similar to the conditioned stimulus elicit the strongest
conditioned response, while those less similar elicit a weaker conditioned
response.
Example:
A horse learns that the sight of a small, black bucket (conditioned
stimulus) means it’s mealtime, and he puts his head in his feeder (conditioned
response). Over time, he puts his head
in his feeder (conditioned response) whenever he sees a small, light-colored
bucket; a large, black bucket; or a medium-sized bucket. The horse has generalized from one specific
conditioned stimulus (small, black bucket) to several similar stimuli (small,
light-colored bucket, large, black bucket, or medium-sized bucket).
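The bucket example describes what psychologists call a generalization gradient: the more similar a stimulus is to the trained conditioned stimulus, the stronger the response it evokes. Here is a tiny sketch of that relationship; the similarity scores are entirely made up for illustration:

```python
# Illustrative sketch of a generalization gradient: response strength
# scales with similarity to the trained conditioned stimulus.

def generalized_response(similarity, trained_response=1.0):
    """Scale the conditioned response by similarity (0..1) to the trained CS."""
    return trained_response * similarity

# Hypothetical similarity scores relative to the small, black bucket.
buckets = {
    "small black bucket (trained CS)": 1.0,
    "small light-colored bucket": 0.8,
    "large black bucket": 0.7,
    "medium-sized bucket": 0.5,
}
for bucket, similarity in buckets.items():
    print(f"{bucket}: response strength {generalized_response(similarity):.1f}")
```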
Horses can also learn to discriminate between
several different stimuli. Stimulus
discrimination, also called differential conditioning, occurs when
the horse can tell the difference between a conditioned stimulus and other
similar stimuli that weren’t paired with the unconditioned stimulus. Stimulus discrimination occurs when the
conditioned stimulus is either always or sometimes paired with an unconditioned
stimulus, and when a similar stimulus is never paired with the unconditioned
stimulus.
Example:
You can use differential conditioning to teach a stallion when it is
time to breed and when it isn’t. A
stallion sees a mare (unconditioned stimulus) and gets sexually excited
(unconditioned response). If you always
put a leather halter with a stud chain on him before taking him to breed the
mare, the leather halter and stud chain combination become a conditioned
stimulus and the stallion will become sexually aroused (conditioned response)
when haltered with that combination. If
you always put a regular flat nylon halter without a stud chain on him when
he’s leaving his stall for turnout, training, and other non-breeding
activities, eventually he’ll learn to discriminate between the two halters and
will only be aroused when the leather halter/stud chain combination is put on
him.
Although we are covering discrimination and generalization
in this lesson on classical conditioning, these two phenomena can also happen
in operant conditioning.
Second-order (or higher-order) classical conditioning allows
something that served as a conditioned stimulus to serve as an unconditioned
stimulus for a new stimulus.
Example:
My horse Magic drools (conditioned response) now when he sees me
carrying a small, black bucket (conditioned stimulus). If my feed room door starts to squeak when I
open it, Magic will hear the squeak (conditioned stimulus) before he sees the small,
black bucket (original conditioned stimulus).
Over several pairings, he’ll begin drooling when he hears the door
squeak (conditioned response) before he even sees the small, black bucket.
Finally, as with operant conditioning, we can stop a
classical conditioned response from happening.
Extinction occurs when a conditioned stimulus no longer elicits a
conditioned response. This normally
happens once the conditioned stimulus and unconditioned stimulus are no longer
paired. If the classically conditioned
response was weak, you may only have to present the conditioned stimulus
without the unconditioned a few times before the horse stops responding. However if the relationship between the
conditioned stimulus and conditioned response is very strong, it normally takes
longer.
Example:
When your horse touches an electric fence and is shocked (unconditioned
stimulus), it hurts (unconditioned response).
After a time or two of touching the fence, your horse begins to fear
(conditioned response) the fence (conditioned stimulus). When your electric fence breaks, it no longer
delivers the shock (unconditioned stimulus) when your horse touches it. Your horse
will continue to fear the fence for a while but after he touches the fence a
few times without getting shocked, he stops fearing the fence.
When the relationship between the conditioned stimulus and conditioned response is very strong, spontaneous recovery may occur. Spontaneous recovery happens when a conditioned stimulus suddenly begins eliciting a conditioned response after extinction occurs. Spontaneous recovery does not last long unless the conditioned stimulus and unconditioned stimulus are paired again.
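Extinction can be sketched with the same kind of toy model used above for acquisition: each presentation of the conditioned stimulus without the unconditioned stimulus weakens the association, and a strong association takes more unpaired presentations to extinguish than a weak one. The function and the decay numbers are my own illustration, not from the lesson:

```python
# Illustrative toy model of extinction: presenting the CS without the US
# weakens the association until the CS no longer elicits the conditioned
# response.

def extinguish(strength, threshold=0.2, decay=0.5):
    """Count CS-alone presentations until the response falls below `threshold`.

    A strong association (high `strength`) survives more unpaired
    presentations than a weak one.
    """
    count = 0
    while strength >= threshold:
        strength *= (1.0 - decay)   # each unpaired presentation weakens the CR
        count += 1
    return count

print(extinguish(strength=0.9))  # strong association: 3 presentations
print(extinguish(strength=0.3))  # weak association: 1 presentation
```

In the electric-fence example, the horse's fear was strong, so he touches the broken fence several times before the fear extinguishes.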
It is very important to remember that classical
conditioning can be used only with involuntary behaviors. These are instinctive behaviors over which
the horse has little control. This
includes drooling, sexual excitement, fear, hunger, and similar behaviors. You can’t use classical conditioning to teach
a horse to stop when you say “whoa”, since stopping is voluntary. You can, however, use classical conditioning
to teach a horse to avoid something through fear, as in the example with the
electric fence. The classical
conditioning that occurs with our horses often involves behaviors we didn’t
intend to create: no one wants their
horse to fear all men, but stimulus generalization may cause your horse to fear
all men if one man caused him pain. Understanding classical conditioning is
important as it gives us another tool to understand how horses learn.
Non-Associative Learning
Both classical conditioning and operant conditioning are
considered associative learning: a type of learning in which the horse learns
a relationship between two things. In
classical conditioning the horse learns a relationship between two stimuli (the
conditioned stimulus and the unconditioned stimulus) and in operant
conditioning the horse learns a relationship between his behavior and a
reinforcer.
In non-associative learning, the horse learns to
react to a single event or stimulus without reinforcers. We’re interested in three types of
non-associative learning: sensitization,
habituation, and desensitization.
Sensitization is a learning process that occurs when an animal learns to
react more quickly to a stimulus each time it is presented to him.
Example:
A great example of sensitization occurs when a horse learns to respond
more quickly to leg aids. In the
beginning, your horse doesn’t understand to go forward more quickly when he
feels your legs against his sides. You
often need to kick your horse at first to get him moving faster. After several rides, he probably starts
speeding up when you nudge him with your heels. After a few more training sessions, he’ll move
out when you squeeze with your legs.
Finally after a lot of training, your horse picks up the pace when you
barely squeeze your legs. He’s become
sensitized to the feel of pressure against his sides.
Example:
Sensitization can occur without our direction. If you are riding and your horse hears the
loud noise made by a firecracker, he might jump. If another firecracker goes off a few minutes
later, he jumps again. Before long, he
reacts more strongly to each firecracker.
Sensitization often generalizes to similar stimuli. In the above example with the fireworks, your
horse might start jumping each time he hears a loud noise – so if someone slams
a door, he jumps.
Habituation is almost the opposite of sensitization: the horse learns to respond less quickly or
less strongly after repeated exposure to a stimulus. Habituation works best when the stimulus is
repeated fairly close in time and at a similar intensity. Habituation explains how horses adapt to
their environment.
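Sensitization and habituation can be pictured as mirror images: repeated exposure to a stimulus either amplifies the response or dampens it. The following sketch is my own toy model (the numbers are arbitrary), not anything measured in the lesson:

```python
# Illustrative toy model: with repeated exposures, a sensitized response
# grows stronger while a habituated response fades.

def repeated_exposures(initial, factor, n):
    """Return response strength across n exposures.

    factor > 1 models sensitization (each exposure amplifies the response);
    factor < 1 models habituation (each exposure dampens it).
    """
    responses = []
    strength = initial
    for _ in range(n):
        responses.append(round(strength, 2))
        strength *= factor
    return responses

print(repeated_exposures(1.0, 1.5, 4))  # sensitization: [1.0, 1.5, 2.25, 3.38]
print(repeated_exposures(1.0, 0.5, 4))  # habituation:   [1.0, 0.5, 0.25, 0.12]
```

The firecracker example follows the first curve (the horse jumps harder each time); the airplane example follows the second (the horses stop spooking after several takeoffs in a row).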
Example:
I once boarded my horses at a barn next to a small air strip. The first several times an airplane rose up
from the runway, both horses startled and ran around the pasture. But after several planes in a row took off,
they stopped spooking and continued grazing.
They had habituated to the airplanes.
Example:
Habituation can occur when we don’t intend it. I’ve seen this often with lesson horses and
others who are ridden by novice riders.
The rider kicks the horse to get him moving, but he continues kicking
and banging on the horse’s sides. Before
long, the horse habituates to someone kicking his sides and continues plodding
along at the same speed, regardless of what his rider is doing.
Because both habituation and sensitization occur without
obvious reinforcement, they can sometimes happen without our knowledge or
direction. This is one of the reasons it
is important to pay attention to both the cues you administer to your horse and
what’s going on in his environment.
To keep your horse from becoming habituated to cues, timing is
important. Your horse may stop
responding if you don’t remove the cue once he does what you wanted. And if you don’t pay attention to what’s
upsetting your horse or try to shelter him from anything in his environment
that upsets him, you can accidentally sensitize him to stimuli.
Example:
My neighbors are building a new house close to my riding arena. The heavy machinery that’s brought in to dig
trenches, pour the foundation, and complete other work is loud and startling to
my horse Freckles. It is tempting to
stop riding, put her away, and wait until a time when everything is quiet. But if I stop riding Freckles whenever a
piece of machinery moves or makes noise, she’s going to become more reactive
each time she sees it (she would be sensitized to the sight and sound of the
machinery). Instead, I keep working
Freckles while the machinery moves around.
Over time, she’s becoming habituated to the commotion and doesn’t pay
any attention to it.
The terms desensitization and habituation are often
used interchangeably, but they’re different concepts. During desensitization, horses become less
fearful after repeated exposure to an aversive or negative stimulus.
Unlike sensitization and habituation which can be done
accidentally, desensitization is nearly always a deliberate lesson. I use it to help teach a horse to tolerate
something that’s caused him pain, stress, or fear in the past. Through the
equine rescue organization that I founded and manage, I frequently foster horses
with behavior problems. Often, they
haven’t been handled at all and are scared of everything, or they’ve been
handled badly and have developed several fears.
Because of this, I need to desensitize them to many things.
One method of desensitization is called flooding. This is where you overwhelm a horse with a
stimulus until he stops responding.
Example:
A horse is scared of plastic shopping bags. Using the principle of flooding to
desensitize him, you surround him with plastic bags. They hang from his feed bucket, his stall
door, his paddock fence, and from his halter.
At first, he runs around in terror.
But over time his terror diminishes and eventually he’s standing quietly
next to the rustling bags.
Using flooding to desensitize a horse can present problems
or concerns. If the horse is truly
scared and panicked, he may hurt himself or his handler in his efforts to
flee. In the above example, the horse
couldn’t escape plastic bags tied to his halter and might eventually crash
through a fence trying to escape. He ends up hurt and even more scared than he
was originally.
You also must time the application and removal of the
frightening stimulus carefully: if you
remove the scary stimulus before the horse has accepted it, he will become more
frightened of it. In the above example,
if the only plastic bags were tied to the horse’s feed bucket and you took the
bucket out of his stall while he was still running in circles or trembling, he
would probably become more scared of the bags instead of less scared.
I prefer to take desensitization a little slower. I don’t
overwhelm the horse by surrounding him with something scary, but instead I gradually
expose him to the stimulus in a very controlled setting, wait until he accepts
it, and then increase the amount of exposure.
This method often takes longer than flooding, but it keeps the horse and
handler much safer.
Timing is critical with this method, too. If you take the stimulus away too quickly,
the horse might become more scared of it.
But if you push him too far or overexpose him, he also may become more
scared of the item.
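The gradual method is essentially a loop: raise the exposure one step at a time, let the horse settle at each level, and back off when you've pushed past his tolerance. Here is a minimal sketch of that loop; the intensity scale, the function, and the rule that settling raises tolerance by one are all my own illustrative assumptions:

```python
# Illustrative sketch of gradual desensitization: step exposure up,
# back off one level whenever the horse panics.

def desensitize(tolerance, step=2, max_intensity=5):
    """Return the sequence of exposure intensities used.

    Settling at a level raises the horse's tolerance by one and lets
    exposure increase by `step`; panicking means the stimulus moves a
    little further away (intensity - 1) while exposure continues.
    """
    history, intensity = [], 1
    while intensity <= max_intensity:
        history.append(intensity)
        if intensity > tolerance:
            intensity -= 1           # panic: move the bag a little further away
        else:
            tolerance += 1           # settle: the horse accepts this level
            intensity += step        # then increase the exposure
    return history

print(desensitize(tolerance=2))  # [1, 3, 5, 4]
```

Notice the back-off at intensity 5: pushing too far triggers a retreat to a level the horse can handle, which mirrors the advice below about moving the bag a little further away if the horse panics, while never stopping the exposure entirely.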
Example:
You can use this method to desensitize your horse to the sights and
sounds of plastic shopping bags. Your
horse needs to be wearing a halter and lead.
Hold the lead in one hand and the plastic bag in the other. To begin, hold the plastic bag as far away
from the horse as you can. Wave it
around so that it rustles and moves. I
let the horse walk around me and keep his nose tipped toward me. When the horse settles down, stops moving,
and relaxes, I stop waving the plastic bag and give the horse a rest. The goal here is to expose the horse to the
scary thing and show him it won’t hurt him.
You
need to understand how fearful the horse is of the object so that you don’t
overexpose him in the beginning. If I
push the horse’s tolerance too far, and he panics, I’ll move the plastic bag a
little further away, but continue moving it around. If you stop moving the bag while the horse is
scared, he’s likely to get more scared of it.
After the horse rests, I move the bag a little closer to him and wave
the bag until the horse settles. Each
time you expose the horse to the plastic bag, you move it a little closer to
him. Eventually you will be rubbing the
horse with the bag, moving it beneath his belly, and riding him while carrying
the bag. This method may take several
training sessions before your horse is no longer scared by the plastic bag, but
you keep the horse safer and less stressed by using this method as opposed to
flooding him by tying plastic bags to his halter, his saddle, and all around
his stall.
Clicker Training
Clicker training is a fun way to tie together several of the
psychology topics we’ve covered. You may
have seen clicker training used with dolphins, circus animals, zoo animals or
even dogs, but it is also gaining popularity among horse owners. In fact, I did my doctorate research in
clicker training because horse owners wanted to better understand how and when
it works.
When clicker training, the trainer uses a small device,
called a clicker for the sound it emits, to signal when an animal has performed
a desired task.
Example:
When a trained dolphin jumps through a hoop, the trainer presses a
button on a device called a clicker and the dolphin hears a loud “click”
sound. The sound lets him know that he’s
performed the right behavior. The
trainer follows up by tossing a fish to the dolphin.
A Clicker
Clicker training ties the concepts of classical
conditioning and operant conditioning together. When you use operant conditioning to train a
horse, the reinforcer that you give him either increases or decreases the
likelihood that he will perform a behavior that we want.
The sound of a click on its own isn’t a very good
reinforcer and wouldn’t influence the horse’s behavior. Using classical conditioning, however, you
teach the horse to associate the sound of a click with food. Clicker trainers sometimes call this charging
the clicker and it is the first step in clicker training. You do this by clicking and immediately
giving your horse a piece of food. After
several repetitions, your horse should be looking for food as soon as he hears
the click. When charging the clicker,
the click is the conditioned stimulus, the food is the unconditioned stimulus,
and the pleasure of eating is the unconditioned response.
Once your horse establishes a connection between the sound
of the click and food, you can clicker train him. To use clicker training, give him a cue (the
discriminative stimulus) to signal when you want the behavior. After he performs the behavior, immediately
give the click (secondary reinforcer) and then follow up with a piece of food
(primary reinforcer).
Example:
If you want to teach your horse to lift his foot, say “foot”
(discriminative stimulus) when you are standing by his shoulder. If he lifts his foot, you click (positive
secondary reinforcer) and follow the click with a piece of food (positive primary
reinforcer). Before long, he’ll be
picking up his foot when he hears the cue “foot”.
Some trainers always follow the click with food
(continuous schedule of reinforcement).
Using this method, the click sound becomes a signal to the horse (called
a bridge) that the horse did the right thing and that the primary
reinforcer, food, is coming.
Most trainers switch to an intermittent schedule of
reinforcement after the horse has made the connection between the click noise
and food. Using an intermittent schedule
of reinforcement, the horse always receives a click when he does the right
thing, but the click sound is not always followed by food.
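The clicker-training loop described above can be sketched in a few lines. This is a toy model of my own, not code from any clicker-training tool; the "every third trial" rule for the intermittent schedule is an arbitrary stand-in for however a trainer chooses to thin out the food:

```python
# Illustrative sketch of one clicker-training trial: the click (secondary
# reinforcer) always follows a correct response; whether food (primary
# reinforcer) follows depends on the schedule of reinforcement.

def train_step(correct, trial, schedule="continuous", every_nth=3):
    """Return the reinforcers delivered for one trial."""
    if not correct:
        return []                       # wrong behavior: no click, no food
    delivered = ["click"]               # the click always marks a correct response
    if schedule == "continuous" or trial % every_nth == 0:
        delivered.append("food")        # intermittent: food only on some trials
    return delivered

print(train_step(correct=True, trial=1, schedule="continuous"))    # ['click', 'food']
print(train_step(correct=True, trial=1, schedule="intermittent"))  # ['click']
print(train_step(correct=True, trial=3, schedule="intermittent"))  # ['click', 'food']
print(train_step(correct=False, trial=4))                          # []
```

The key design point the sketch highlights is that the click is never withheld after a correct response; only the food varies with the schedule.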
The use of secondary reinforcers provides immediate
feedback to your horse. It lets him know
that the task he performed is the right one as soon as he performs it. When you use a primary reinforcer like food,
it may take many seconds for you to administer it. For example, you might need to reach into
your pocket, get a piece of food and give it to your horse. A delay between the behavior and the reinforcer confuses your
horse. He may think that you are
reinforcing a different behavior or may think the food is random and not linked
to a behavior. Either situation means
he’s not likely to do what you want the next time you give him the cue.
Example:
You can clicker train your horse to lower his head in response to
slight pressure on his poll. First
establish a connection between the clicker and food. Then place one hand on your horse’s poll and
give slight pressure. When your horse
drops his head, click and then give a piece of food. Before long your horse will lower his head to
the slightest of pressure on his poll.
This is useful when haltering or bridling your horse.
If you waited several seconds and
then gave him a piece of food without a click, your horse might get
confused. Suppose that after he lowered his head, he
turned it to the side, and that’s when he got the food. He might think that you were rewarding him
for turning his head to the side, and he would be more likely to do that
instead of lowering his head the next time you put pressure on his poll.
Clicker training isn’t for everyone, and in my research
trials the horses I trained didn’t learn faster (or slower) with a clicker than
when they were rewarded with food only.
But during those trials I discovered that using a clicker made me focus
on the horse more. I wanted to give him
the click the second he did what I asked.
A lot of people who use clicker training say they really appreciate that
aspect of it.
Jawhari was taught to reach out and touch his nose to the crop using clicker training
Even if you think that the idea of clicker training isn’t
for you, you’re probably using other secondary reinforcers. If you use the phrases ‘good boy’ and ‘good
job’, you are using secondary reinforcers.
You can follow ‘good boy’ or ‘good job’ up with rest, a piece of food, or
something else that gives those words significance and allows them to act as a
reinforcer. Although you skip the separate
“charging the clicker” step, you still classically condition the words ‘good job’
with food: the secondary reinforcer is created after several pairings of the
desired behavior, the words ‘good job’, and a piece of food. In that scenario, training takes a little
longer initially as the horse has to learn that ‘good job’ is often followed by
food or rest. But once he makes that
association, you can use that secondary reinforcer when training other
behaviors.
This lesson wraps up our discussion of psychological terms
in horse training. I hope these terms
help you better understand how you train your horses and why those training
methods work. In our next lesson, we’ll
learn how horses communicate both verbally and non-verbally (body
language). When you understand what your
horse is saying, you’ll become a better horseman because you can tailor your
training and handling methods to better fit his needs. Understanding what your horse is telling you
will also keep you safer anytime you interact with a horse.
Assignment:
Describe
how you would train a horse to perform a task using either operant conditioning
or classical conditioning. Identify the
stimuli, responses, reinforcers and other psychological terms/topics you use.
Please send essay to Dr. Williams at equinebehaviorinstructor@gmail.com
Be sure to include your full name and email
address on the document – not just in the email.