
Clicker Expo 2009: Providence, Rhode Island

I attended my fourth Clicker Expo from March 27-29, 2009 in Providence, Rhode Island.  Every Clicker Expo I have attended has been full of new information and has taken my understanding of clicker training and my skills to a new level.  I think it is important for clicker trainers to continually seek out new material because clicker training is changing and growing as each trainer gets more experienced and as new trainers (who bring new insights and different skills) enter the field.  The Expo takes place over 3 days, and in each time slot there are 5 sessions running at once, so it is only possible to attend a total of 9 different sessions.  Some of the sessions I attended were ones I had seen before, but I knew I would pick up new things and that some might have added new material.  I am going to share a little bit about each session and the key points I took away from them.

Ken Ramirez: Working for the Joy of It: A Systematic Look at Non-Food Reinforcers

I had seen this talk before and I wrote about secondary reinforcers in my article on Clicker Expo from last year.  He goes into great detail about how to choose and condition secondary reinforcers and make sure the animal accepts them.  I think I still struggle with how to use these well with horses. I have conditioned some secondary reinforcers with my horses (a specific stroke on the neck and a verbal signal), but I am not sure they are as effective as I would like. Ken talks about how he uses the "refrigerator test" to see if his secondary reinforcers are working.  When he presents or offers a secondary reinforcer, he wants the animal to light up with the same kind of response you would see if you opened the refrigerator and it was hoping for a treat.  When you bring out the ball to throw as a reinforcer, does the animal light up with excitement?

I think one piece I got out of his talk this year is that one way to introduce the idea of secondary reinforcers is to vary the type of primary reinforcer you use in your training sessions. An animal that is already used to a variety of food rewards is going to be more accepting of a variety of reinforcers including both primary and secondary reinforcers. Because he works with zoo animals where the environment does not vary that much, he feels that variety in training is very important. He wants the animals to really enjoy their training sessions and so he varies what types of behaviors he works on, duration of behaviors, who is training (an animal can have several trainers) and the types of reinforcers. 

Because my horses are used to the click meaning food is coming, I have not been quite sure if it is fair to click and offer a different reinforcer. I asked him about this and he said that every training situation is different and I need to evaluate what will work best for my horse. If the horse expects food after the click, I might not want to change that.  One option is to offer the secondary reinforcer without clicking. He does not always click when he offers a secondary reinforcer. It depends upon whether or not he wants the precision of the click to mark the behavior.  An animal with a very predictable reinforcement schedule is hard to change.  It is better to start off using variety in reinforcers from the beginning so that the animal's expectations are that reinforcement can be variable.

He presented a lot of other information on how to train with secondary reinforcers including pitfalls and suggestions for choosing them.  There is more information on them in his book "Animal Training."

Morten Egtvedt and Cecilie Koste:  The ABC's of OTCh (Obedience work for dogs)

I attended their seminar on backchaining last year and this winter I read their book on clicker training. I wanted to see more of their work, so even though I do not do obedience, I attended this lecture. I am always amazed how much I can learn about training from subjects that seem totally unrelated to what I do.

They started the session with some information on the importance of choosing a good puppy and then went through their list of basic skills. They teach their obedience dogs 18 basic skills, each with 5 levels of difficulty, before the dogs reach the desired level of fluency.  All of these behaviors are taught without formal cues. They start by capturing behaviors, and then the dogs learn which behavior to offer through a combination of context and trial and error. This may seem confusing, but since they only work on one behavior in each session, they find the dogs catch on very fast. They call this the "windows" system, with the 18 behaviors organized like folders that the dog learns to open depending on the context.

They showed a nice video of how this works and it started to make sense.   The dog learns that certain behaviors are reinforced from certain positions relative to the handler.  If the dog is in heel position, there are only a few behaviors it can offer (sit, down, bark, look at me) and it will offer each one to see which gets a click. Once one of those behaviors is clicked, the dog continues to offer it and the trainer can work on improving that behavior. I had heard this described before and read about it in their book, but it wasn't until I saw the video that I really understood how it worked.  What was interesting was how quick the dogs were to offer behaviors.

There are a couple of reasons they do all this work without adding cues. One reason is that they want to add the cue to the final finished behavior. The other reason is that if the dog gets used to having all its behaviors on cue and under stimulus control, the dog can get in the habit of waiting for a cue.  This might seem like a good thing as most people don't want their dogs throwing behavior at them all the time, but it means that when they do want to shape new behaviors, the dog does not offer them.  By delaying the addition of the cue and allowing the dog to offer behaviors (even already trained behaviors) off cue at certain times, the dog remains more actively involved in the training process for its entire life.

They also had some discussion on backchaining and using classical conditioning to improve performance. They do a lot of work on "doggie zen" so that the dogs will work in the presence of an open bowl of food, and they do many repetitions of each behavior. Morten firmly believes that if you can get good fluency in each behavior, the behavior becomes resistant to extinction, holds up despite distractions, and the dog will remember it forever.

If you are interested in their work, their e-book is available from their web site (www.canisclickertraining.com). They also offer a free 7 day e-mail training course that is brief but gives some idea of their approach.   It is a little hard to figure out how to translate some of this to the horse world and what we want to do with our horses, but one thing I have been experimenting with is doing less (less negative reinforcement, less prompting, less body language) to see if my horses will do more thinking and experimenting on their own, so that they really learn some behaviors instead of relying on me to direct them through them.  I have really just started playing with this, but I will report back at some point. Cecilie says there are clicker trainers working on implementing their system with horses and coming up with the basic behaviors that are the foundation for some horse disciplines.

Dr. Jesús Rosales-Ruiz: Broken Clicks

Last year, Jesús and his graduate students looked at different ways that changing the click/treat relationship could cause frustration or a breakdown in behavior when training animals. They presented information on delaying the click and using treatless clicks.  Some of that was covered in this year's talk, but most of this year's talk focused on the function of the clicker as a cue to go get the reinforcer and what happens when that breaks down.

One of his graduate students was doing a study on jackpots and she had a dog touch a target and then come get his reinforcement from a food bowl near where she was sitting on a stool. She would just drop the food in the bowl.  The food sometimes bounced out of the bowl, so she decided to use a PVC tube to deliver the food.  She would sit on the stool, but instead of tossing the food down (and moving her arm), she would just drop it into the end of the tube and let it slide down into the bowl.  The unexpected result of this change in food delivery was that the dog didn't know what to do when it heard the click, because it had been relying on visual cues from the handler about food delivery.  And then, even when she took time to show the dog how the food delivery worked, the dog was still confused. It started to do things like go to the bowl after touching the target, whether it had been clicked or not.

There was a complete breakdown in the behavior loop of touch the target, click, go and get the reinforcer.  At first glance, this might seem trivial and easy to fix, but it brought up some interesting questions, the most important one being "what does the click mean?"   Jesús believes that the click has two functions. It means "yes, you did it right" and it means "go get your reinforcement."   When they added the food tube, the click was still marking the behavior, but instead of going to get the reinforcement, the dog was stalling out.   The click seemed to be a cue to look at the handler, not to go get the food. When the handler offered no information about how to get the reinforcement, the dog got confused.

This made Jesús look more closely at how we condition the clicker. There are two theories about what the clicker is.  One theory is that the clicker is a conditioned reinforcer that has been created by pairing the click with the treat through classical conditioning.  The other theory is that the clicker is a discriminative stimulus that means a primary reinforcer is coming. This was Skinner's theory, and he believed the clicker got its power through operant conditioning.   Currently, the prevailing theory is that the clicker is a conditioned reinforcer and the best way to make this association is through pairing. This is why some people "charge" the clicker.  But Jesús is starting to think that it is more than that and is reconsidering Skinner's idea about how to condition the clicker.  In order for clicker training to work well, the animal has to clearly know that the click means "go get your reinforcement" and how to do that.

It is another important piece in keeping your training efficient, meaning there is little time between the click, the reinforcement, and the dog repeating the behavior again. Jesús recommends thinking of this as a chain.  The chain is behavior -> click -> approach -> reinforce. Any break in this chain disrupts the training process.   He is now recommending that people start by teaching the animal how to get the reinforcement before they add the clicker. So you teach the animal how to get the reinforcement, then you teach the animal when to get the reinforcement (when it hears the click), and then you teach the animal how to get you to click.

Later in the session, he showed how using treatless clicks would disrupt the click -> approach -> reinforce chain, because the click no longer always meant approach the handler for reinforcement.  He also had examples of how delaying the treat did the same thing. If the dog was clicked and the handler delayed the treat for 5 seconds, there were a lot of undesirable consequences, including frustration behaviors, and the behavior that actually got reinforced was whatever the dog was doing when the food was delivered.

I think what I got out of this discussion was that food delivery is very important and that it is important to keep the click -> approach -> reinforce chain strong.  There was some discussion about what happens if you are using other reinforcers, or if there are situations where you want the animal to stay in position when you click, and Jesús said this was ok as long as you took the time to teach the dog what to do when it got the click. Dogs can learn different ways of getting reinforcers in different situations as long as you are consistent about how you indicate to the dog how to get reinforcement, or you teach the dog what to do in each situation.

Karen Pryor: Neuroscience and Clicker Training

This session was a preview of some of the information in Karen's new book (Reaching the Animal Mind) which is due out in June.   The book is about the "why" of clicker training, not the "how." It is about why clicker training works and she spent the session explaining how she followed different bits of information to find some answers. Karen knew that clicker training worked. But she wanted to know why it worked. Why do animals learn so fast? Why do they remember things for so long?

She started out by describing the Lindsay Wood study, which showed that dogs learned faster when a clicker was used as compared to a verbal "yes."  Lindsay studied 20 shelter dogs and taught them to go out to a target.  The clicker dogs learned the behavior 45% faster than those trained with the verbal marker.  There was less difference when she looked at maintaining the behavior, but the clicker was still better.

Karen started collecting information about how the brain works and this led her to Joseph LeDoux, who was studying fear responses in the amygdala. The amygdala is the primitive part of our brain that processes certain kinds of information very quickly.  When you are scared by something and jump, you often jump before you are conscious of what you jumped at. Your body just responds "without thinking."  The amygdala is what makes this possible.   Joseph LeDoux was studying conditioned fear stimuli and seeing responses that had some of the same characteristics as clicker training.  The response was learned in one or two exposures, remembered forever, and it caused emotion (fear in his case).   He has a book on the subject called "The Emotional Brain: The Mysterious Underpinnings of Emotional Life."

Karen tried to see Dr. LeDoux to talk about what is processed by the amygdala, but was unsuccessful. However, she did find someone who pointed her to Peter Holland, who was studying appetence and positive reinforcers.  His group was also seeing rapid learning, long retention, and an emotional response (excitement and elation).  They answered her question: all conditioned reinforcers go through the amygdala.  This means that the animal does not go through a "thinking" process to interpret the conditioned reinforcer.  It just responds automatically.  As a side note, she explained this is why some animals are afraid of the click; they are already programmed to be alert for quick, sharp sounds. As a further side note, it explains why a word or the voice does not work as well as a clicker as a conditioned reinforcer.

The fact that conditioned reinforcers and primary reinforcers are handled differently by the brain has some implications for training.  Primary reinforcers are helpful for general learning.  Her example was that if you are providing primary reinforcers, the animal is learning that being with you is a good place to be.  I think this explains why we can use classical conditioning to change an animal's emotional response. If good things are happening, it changes an animal's general perception of the situation.   In contrast, secondary reinforcers (conditioned reinforcers) are more specific.  They create very specific learning situations for animals.  The point here is not that using primary reinforcers is better than secondary reinforcers or vice versa, but that they use different pathways in the brain and produce different outcomes. 

She also mentioned that a lot of people think of secondary reinforcers as things that animals like but do not need, and therefore of "lesser" value.  B.F. Skinner thought of secondary reinforcers as being valuable because to animals they are about access to primary reinforcers.  So they are not of "lesser" value, they are just one step removed from primary reinforcers. This ties in with the concept of cues as reinforcers, because cues are another step farther out in the sequence of primary, secondary, cue.   Following this logic, cues are tertiary reinforcers.

Karen spent quite a lot of time talking about the importance of cues as reinforcers and why clicker trainers need them to function as conditioned reinforcers. In some of the discussions on poisoned cues, presenters have talked about how poisoning a cue is a problem because we want the animal to be happy to hear the cue.  A good cue indicates the possibility of reinforcement, not a threat of punishment.  This comes down to the difference between commands, which are traditionally trained to mean "if you do this, you can avoid punishment," and cues, which mean "this behavior could lead to reinforcement if you do it now."   She also talked about why cues sometimes don't work and how to identify what has gone wrong.

The last piece of the neuroscience that she discussed was some work being done by Jaak Panksepp on the SEEKING circuit.  The SEEKING circuit explains why animals like clicker training so much.  Jaak Panksepp believes there is a part of the brain that is activated when we are searching our environment for items of interest. We are not searching because we need something at that moment, but because we are enjoying exploring our environment.  For people, the obvious comparison is going shopping just for fun.  Clicker training activates the SEEKING circuit by giving the animal little puzzles to figure out. These puzzles lead to reinforcement, but the animal is not solving them because it needs the reinforcement (the dog might like food, but it is not working because it is starving); it is solving them because it enjoys the mental challenge and stimulation.

Karen pulled all these pieces together to explain why clicker training works so well and why animals like it so much. I am going to end with a quote from her talk.  "All the benefits of the primitive path of the conditioned reinforcers--rapid learning, long retention, elation and joy--can be built in to our cues as well as our clicks. The end result is the happy experience of being in the SEEKING mode together, instead of in the old (equally innate) mode of training based on social dominance and fear."

Her new book on this topic, "Reaching the Animal Mind," is being released on June 16, 2009.

Joan Orr:  Puppy Perfect

We are getting a new puppy soon, so I went to this session.  Joan had great video showing what to do and what not to do.  I am not going to go into any detail here, but if you are getting a puppy, I recommend her puppy videos.

Michele Pouliot:  Dance, Dance Revolution

I picked this session because I have always been curious about freestyle with dogs. I think it would be fun to do the same thing with horses and I did get some ideas on creative ways to teach an animal more body awareness.  With horses, I tend to use a lot of negative reinforcement to teach body awareness, but she uses targeting and takes advantage of changes in the environment or interactions with props to teach the dogs new moves.

Alexandra Kurland:  The Finely Tuned Trainer

Alex gave a session on the importance of good mechanics and food delivery to create clean training loops.  She has already posted some information on this in her post on "loopy" training on the_click_that_teaches list, so I am going to just share some highlights here. It is always good to get a reminder about the importance of good mechanics and food delivery.  I often think of food delivery as being important because it keeps us safe and leads to polite and calm horses, but Alex showed it had even greater implications for good training.   Some of the key components of good food delivery are to feed out away from your horse, feed in a predictable place, do not preload your hand or put your hand in your food pocket before you click, and practice "economy of motion."

She emphasized that because food delivery is part of the animal's initial experience with clicker training, it is important to get it right.  You don't want an animal's first experience with clicker training to be stressful.  The animal needs to know how to get the food promptly, efficiently and safely.  This might mean you have to practice food delivery on your own before even starting with the horse.  She had some fun video of her own experiments with food tossing for practice and in a training session with Panda.  Horse trainers can practice with people or by themselves on the basics of good food delivery.

One of the reasons Alex is looking at food delivery is that she sees a lot of horses who are frustrated in the early stages of clicker training.  With some horses who have not been hand fed, it is important to recognize that time spent on food delivery is worth the effort.  She is even seeing that good hand feeding might be a skill to practice before starting clicker training.   Part of Alex's talk was on creating clean training loops, and she says this can be viewed as another application of backchaining.  In an ideal training loop, there is a steady flow of cue->behavior->click->reinforcement->cue->behavior->click->reinforcement and so on.

Since the last item in the chain is the food delivery, it might make sense to start there with a new horse.  When I first learned about clicker training, some people recommended "charging" the clicker.  This meant clicking and feeding until the animal learned that the click predicted food was coming. Now we often start with targeting because horses seem to get the idea that they can make the trainer click more quickly.  The chain we are creating is: present target->behavior->click->reinforcement.   What Alex is suggesting is that for some horses, this is actually too complicated because in order to chain behaviors together well, an animal has to know all the individual behaviors.  Taking time to work just on food delivery makes the end of the chain very solid and removes potential for frustration over how food is delivered.

I wrote the training sequence as present target->behavior->click->reinforcement, but this is not really what you see if you have a good training loop going. In a good training loop, there are no gaps. The animal goes directly from reinforcement back to touching the target.  Clean training is very efficient and not stressful for the animal. No time is wasted having the animal look for food, or getting distracted by hand movements. By tightening up food delivery, you create these nice clean training loops where there is no break between food delivery and the animal going right back to offering behavior. This reminds me of Kathy Sdao who says that good training is rhythmic. You get into a nice pattern of cue, click, feed, cue, click, feed where the animal is offering the behavior promptly and without confusion.

If you look at the training loop I described above, you can see another place where it could get broken, and that is if the animal does not respond promptly to the cue.   Alex talked a little about poisoned cues and why it is so important that our cues are not poisoned.  A poisoned cue is ambiguous to the animal. It could mean reinforcement is coming or it could mean punishment is coming.  This leads to hesitation in responding to the cue and creates a break in the training sequence of cue->behavior->click->reinforcement. If presenting the target is a poisoned cue, you are not going to get the same kind of happy and enthusiastic response you would get if it was not poisoned, and the training loop can be broken at the "cue" part.

The take-home message I got from the session was that clear and consistent food delivery is important and that disruptions in the flow of training are worth paying attention to. Instead of assuming the animal just "doesn't get it," look at your mechanics. Watch and ask: does my animal know what to do when I click? In most cases with my horses, I want them to stop and I bring the food to them.  So the horse has to learn to wait for the food and also know where to look for it. Am I giving the horse information about where to find the food? Can he clearly tell if I am going to feed out in front of his nose? Or back toward the chest?

This does not mean we have to feed exactly the same way for every exercise, but it means we do have to take the time to show the horse how to find its reinforcement and be consistent about how we do things.   This is more of an issue with animals trained with many different kinds of reinforcers, but even with horses, there might be times when we want the animal to stay out and other times when we do want the animal to return to us. If we don't take the time to teach the animal where to find the reinforcement, training is going to be more stressful.  This should sound familiar if you read the notes about the session on Broken Clicks with Jesús Rosales-Ruiz. He and Alex are working together on this topic and their sessions were different pieces of the same puzzle.

The other interesting point that I will mention is that she said if you have a training problem, especially with animals learning undesirable chains, you need to think of converting open chains to closed loops. Remember that something always happens before the behavior you want. If you want to get rid of a behavior that is early in a chain, you can create a closed loop excluding it.

Morten Egtvedt and Cecilie Koste:  Reliability: Back Chaining in Action

Last year I went to their lecture, but not their lab. This year I went to the ABC's of OTCh lecture and made it to the backchaining lab.  I had read their book and I wanted to see them teach backchaining in real life, so I went to this lab.  If you are not familiar with backchaining, it is teaching a behavior chain by starting with the last element in the chain.  You build the chain by adding each new element in reverse order of the final chain. If I want to chain skip, clap, jump, I start by teaching each behavior separately. Then I chain them together, starting with jump. I cue jump, c/t; jump, c/t; jump, c/t until the animal is fluent in that. Then I cue clap, jump, c/t; clap, jump, c/t; clap, jump, c/t until that part is fluent. Then I cue skip, clap, jump, c/t. When the animal can do the whole chain (skip, clap, jump, c/t) 5 times in a row, then the animal is fluent in the entire chain.

They started by having us work in groups of three and teach each other a behavior chain of 5 behaviors.   This was great practice; when we were the trainer, we all made the most common mistakes, which are:

Clicking before the end of the chain
Not having each behavior well enough trained before inserting it into the chain (for people this translated into not defining the behavior)
Giving the cue at the wrong time.  You give the cue at the moment when you would have clicked. It is easy to be too early or too late
Giving the cue even if the behavior did not meet criteria (the cue is a reinforcer. Giving the cue is as good as clicking so if you continue the chain, you are reinforcing sloppy versions of the behavior)

As the trainee, we found it was easy to anticipate and perform a behavior before the cue was given, and it was also easy to try to skip behaviors and jump to the end of the chain.  These are ways that animals often "test" the chain when they are being backchained.  Morten and Cecilie pointed out that these are really common problems and you want the animal to do them so that it has a chance to figure out that it has to follow the chain correctly in order to reach the reinforcement.

After we practiced with people, the participants with dogs backchained 3 simple behaviors their dogs already knew. This was fascinating to watch. There was a lot of anticipation and mixing of behaviors. The dogs would start a new behavior without ending the previous behavior.  Dogs would skip to the end of the chain or get stuck when they didn't get clicked for each individual piece.  Morten and Cecilie explained that you can allow one mistake and just restart, but if you get two, and certainly if you get three, you need to take the problem behavior out of the chain and fix it before putting it back in.  Once the behavior has been corrected, you have to go back and rebuild the chain from the beginning.   You can sometimes rebuild the chain using "mini-chains" if there are sections that were not affected by the break.

I think backchaining is really fascinating. I asked if the dogs understood the concept of backchaining so that once they had been backchained a few times, it was easier to teach them new chains.  They said that yes, dogs do understand and it gets easier as you train more chains this way.

Ken Ramirez: Using What You've Learned: How to develop your own training plan

This was a great session to attend at the end of Clicker Expo because Ken's goal was to help us figure out how to sort through all the information we had gotten and see what was most useful for our current training. He talked about how to recognize your own training level and what you need to consider when you get training advice from someone else.  Just because someone else can train a behavior in a certain way, it doesn't mean everyone can do it that way. It depends upon the trainer's experience, the animal's experience, their relationship, the trainer's mechanical skills, and the animal's physical ability.

He spent some time talking about why trainers need to have good shaping plans. I have to say that this is something I hear again and again, and I know I don't do enough of it. It is much easier to just go out and try things, especially if I have not trained a behavior before. But I think I would probably be a better trainer if I were more organized about what my plan was for each day.  He had some good suggestions for training and shaping plans.  They were:

Map your behavior
There are multiple good paths
Learn from others, but develop your own plan
Be flexible (it's a guide, not set in stone)
Stay focused on your goal
Keep good records

When writing shaping plans, he suggests writing the plan two different ways. Both start by writing down where you are starting and your final goal.   Then you can write the plan going "forward," starting with your current behavior and putting in steps until you reach the finished behavior. Or you can start with the finished behavior and write the plan going "backward." Often this will create two different shaping plans, and you can then choose which one looks better.  When you write a shaping plan, you also need to take into account whether there are foundation behaviors you need to train before you can shape the new behavior.

He then went on to discuss solving more advanced training problems.  He uses a flow chart when he does problem solving and it looks like this:

Identify the problem
Determine the cause (if possible)
Consider the balance of reinforcers vs. punishers - this is where he often finds the answer
Implement a plan
Monitor constantly

He went into detail on how to do each step, starting with breaking down the behavior into components to figure out what part was broken.  Good note taking can be helpful here, as you can often find the first occurrence of a behavior if you keep a training journal, and note taking can help identify patterns.  Determining the cause is not always possible, but he provided a list of 8 reasons that behavior breaks down (Desmond and Laule, 1985). The causes could be environmental, social, psychological, physical, trainer error, session use, regression, or desensitization.   Once you have considered possible causes, you want to see if you can change the situation, and for this you need to look at the reinforcers and punishers.

He used the example of a dog who is being trained at a park and stops working.  Start by listing all the available reinforcers on one side and the possible punishers on the other side. This should include all reinforcers and punishers, not just those the trainer is intentionally using.  Then view this as a scale with punishers on one side and reinforcers on the other.  See if you can shift the balance by either adding reinforcers or removing punishers.  At some point, the balance should change and the dog will start working again. 

Finally, he talked about understanding motivation.  Understanding motivation is very important for animal trainers, and he had the following list of things to consider when looking at motivation (Sullivan, 2003):

Animals seek to control their environment
Selfishness - "what's in it for me?"
Consequences are reinforcing or punishing (seldom neutral)
Past consequences create motivation for behavior
Reinforcing and aversive stimuli are present in every environment
Animals select stimuli that are important to them (not the trainer)
Mix of these things creates an overall motivational balance
Our ability to control that balance impacts whether or not the animal learns
Dependent on animal's perception and internal state
Behavior is the only indicator of internal state

I thought he did a nice job of explaining how to integrate new information into your training program and how to improve your problem solving skills. He made it seem very doable.

In addition to the sessions, there was a panel discussion and a talk on ethology by Irene Pepperberg.  I also had a lot of good lunchtime and between-session conversations with other clicker trainers. The positive energy at Clicker Expo is wonderful and I enjoyed watching many of the dogs work.

 

 

Katie Bartlett, 2009 - please do not copy or distribute without my permission

   

 

 
