{"id":89,"date":"2019-08-07T16:55:09","date_gmt":"2019-08-07T20:55:09","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/chapter\/operant-conditioning\/"},"modified":"2022-02-21T02:00:28","modified_gmt":"2022-02-21T07:00:28","slug":"operant-conditioning","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/chapter\/operant-conditioning\/","title":{"raw":"Operant Conditioning","rendered":"Operant Conditioning"},"content":{"raw":"<div class=\"textbox textbox--learning-objectives\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\">Learning Objectives<\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n\r\nBy the end of this section, you will be able to:\r\n<ul>\r\n \t<li>Define operant conditioning<\/li>\r\n \t<li>Explain the difference between reinforcement and punishment<\/li>\r\n \t<li>Distinguish between reinforcement schedules<\/li>\r\n<\/ul>\r\n<\/div>\r\n<\/div>\r\n<p id=\"fs-idp78365344\">The previous section of this chapter focused on the type of associative learning known as classical conditioning. Remember that in classical conditioning, something in the environment triggers a reflex automatically, and researchers train the organism to react to a different stimulus. Now we turn to the second type of associative learning, <span data-type=\"term\">operant conditioning<\/span>. In operant conditioning, organisms learn to associate a behavior and its consequence. A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.<\/p>\r\n\r\n<table id=\"fs-idp18763408\" style=\"width: 100%\" summary=\"This table has three columns and three rows. 
The first row is a header row; the first column has no heading; the second column is labeled \u201cClassical Conditioning\u201d and the third column is labeled \u201cOperant Conditioning.\u201d The second row is labeled \u201cConditioning approach\u201d and the third row is labeled \u201cStimulus timing.\u201d Under \u201cClassical Conditioning,\u201d an unconditioned stimulus is paired with a neutral stimulus, which eventually becomes the conditioned stimulus and brings about the conditioned response; the stimulus occurs immediately before the response. Under \u201cOperant Conditioning,\u201d the target behavior is followed by reinforcement or punishment to strengthen or weaken it; the stimulus occurs soon after the response.\"><caption><span data-type=\"title\">Classical and Operant Conditioning Compared<\/span><\/caption><colgroup> <col data-width=\"150\" \/> <col data-width=\"250\" \/> <col data-width=\"250\" \/><\/colgroup>\r\n<thead>\r\n<tr>\r\n<th><\/th>\r\n<th>Classical Conditioning<\/th>\r\n<th>Operant Conditioning<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody>\r\n<tr>\r\n<td>Conditioning approach<\/td>\r\n<td>An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation).<\/td>\r\n<td>The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td>Stimulus timing<\/td>\r\n<td>The stimulus occurs immediately before the response.<\/td>\r\n<td>The stimulus (either reinforcement or punishment) occurs soon after the response.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<p id=\"fs-idm74300512\">Psychologist B. F. 
<span class=\"no-emphasis\" data-type=\"term\">Skinner<\/span> saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn\u2019t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward <span class=\"no-emphasis\" data-type=\"term\">Thorndike<\/span>. According to the <span data-type=\"term\">law of effect<\/span>, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about a desired result, the organism is less likely to do it again. A familiar example of the law of effect is employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up\u2014even if we love our job.<\/p>\r\n<p id=\"fs-idm71227408\">Working with Thorndike\u2019s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a \u201cSkinner box.\u201d A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck to receive a food reward from a dispenser. Speakers and lights can be associated with certain behaviors. 
A recorder counts the number of responses made by the animal.<\/p>\r\n\r\n<div id=\"Figure06_03_Skinnerbox\" class=\"bc-figure figure\">\r\n\r\n[caption id=\"\" align=\"aligncenter\" width=\"649\"]<img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160056\/CNX_Psych_06_03_Skinnerbox_n.jpg\" alt=\"A photograph shows B.F. Skinner. An illustration shows a rat in a Skinner box: a chamber with a speaker, lights, a lever, and a food dispenser.\" width=\"649\" height=\"255\" data-media-type=\"image\/jpeg\" \/> (a) B. F. Skinner developed operant conditioning for systematic study of how behaviors are strengthened or weakened according to their consequences. (b) In a Skinner box, a rat presses a lever in an operant conditioning chamber to receive a food reward. (credit a: modification of work by \"Silly rabbit\"\/Wikimedia Commons)[\/caption]\r\n\r\n<\/div>\r\n\r\n<\/div>\r\n<p id=\"fs-idm87885600\">In discussing operant conditioning, we use several everyday words\u2014positive, negative, reinforcement, and punishment\u2014in a specialized manner. In operant conditioning, positive and negative do not mean good and bad. Instead, <em data-effect=\"italics\">positive<\/em> means you are adding something, and <em data-effect=\"italics\">negative<\/em> means you are taking something away. <em data-effect=\"italics\">Reinforcement<\/em> means you are increasing a behavior, and <em data-effect=\"italics\">punishment<\/em> means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) <em data-effect=\"italics\">increase<\/em> the likelihood of a behavioral response. All punishers (positive or negative) <em data-effect=\"italics\">decrease<\/em> the likelihood of a behavioral response. 
Now let\u2019s combine these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment.<\/p>\r\n\r\n<table id=\"fs-idm23618624\" style=\"width: 100%\" summary=\"This table has three columns and three rows. The first row is a header row; the first column has no heading; the second column is labeled \u201creinforcement\u201d and the third column is labeled \u201cpunishment.\u201d The second row is labeled \u201cpositive\u201d and the third row is labeled \u201cnegative.\u201d The cell under \u201creinforcement\u201d and \u201cpositive\u201d reads, \u201cSomething is added to increase the likelihood of a behavior.\u201d The cell under \u201cpunishment\u201d and \u201cpositive\u201d reads, \u201cSomething is added to decrease the likelihood of a behavior.\u201d The cell under \u201creinforcement\u201d and \u201cnegative\u201d reads, \u201cSomething is removed to increase the likelihood of a behavior.\u201d The cell under \u201cpunishment\u201d and \u201cnegative\u201d reads, \u201cSomething is removed to decrease the likelihood of a behavior.\u201d\"><caption><span data-type=\"title\">Positive and Negative Reinforcement and Punishment<\/span><\/caption><colgroup> <col data-width=\"150\" \/> <col data-width=\"250\" \/> <col data-width=\"250\" \/><\/colgroup>\r\n<thead>\r\n<tr>\r\n<th><\/th>\r\n<th>Reinforcement<\/th>\r\n<th>Punishment<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody>\r\n<tr>\r\n<td>Positive<\/td>\r\n<td>Something is <em data-effect=\"italics\">added<\/em> to <em data-effect=\"italics\">increase<\/em> the likelihood of a behavior.<\/td>\r\n<td>Something is <em data-effect=\"italics\">added<\/em> to <em data-effect=\"italics\">decrease<\/em> the likelihood of a behavior.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td>Negative<\/td>\r\n<td>Something is <em data-effect=\"italics\">removed<\/em> to <em data-effect=\"italics\">increase<\/em> the likelihood of a behavior.<\/td>\r\n<td>Something is <em data-effect=\"italics\">removed<\/em> 
to <em data-effect=\"italics\">decrease<\/em> the likelihood of a behavior.<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<div id=\"fs-idm83383296\" class=\"bc-section section\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Reinforcement<\/h1>\r\n<p id=\"fs-idm38842112\">The most effective way to teach a person or animal a new behavior is with positive reinforcement. In <span data-type=\"term\">positive reinforcement<\/span>, a desirable stimulus is added to increase a behavior.<\/p>\r\n<p id=\"fs-idp79926416\">For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set. Let\u2019s pause for a moment. Some people might say, \u201cWhy should I reward my child for doing what is expected?\u201d But in fact we are constantly and consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into our preferred school. Being praised for doing a good job and for passing a driver\u2019s test is also a reward. Positive reinforcement as a learning tool is extremely effective. One study found that one of the most effective ways to increase achievement in school districts with below-average reading scores was to pay children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book and passed a short quiz about the book. The result was a significant increase in reading comprehension (Fryer, 2010). What do you think about this program? If Skinner were alive today, he would probably think this was a great idea. He was a strong proponent of using operant conditioning principles to influence students\u2019 behavior at school. In fact, in addition to the Skinner box, he also invented what he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)\u2014an early forerunner of computer-assisted learning. 
His teaching machine tested students\u2019 knowledge as they worked through various school subjects. If students answered questions correctly, they received immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive any reinforcement. The idea was that students would spend additional time studying the material to increase their chance of being reinforced the next time (Skinner, 1961).<\/p>\r\n<p id=\"fs-idp10041904\">In <span data-type=\"term\">negative reinforcement<\/span>, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go \u201cbeep, beep, beep\u201d until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure\u2014by pulling the reins or squeezing their legs\u2014and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove.<\/p>\r\n\r\n<\/div>\r\n<div id=\"fs-idp10589648\" class=\"bc-section section\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Punishment<\/h1>\r\n<p id=\"fs-idp69373824\">Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, <span data-type=\"term\">punishment<\/span> always decreases a behavior. In <span data-type=\"term\">positive punishment<\/span>, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). 
In <span data-type=\"term\">negative punishment<\/span>, you remove a pleasant stimulus to decrease a behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior.<\/p>\r\n<p id=\"fs-idp18787968\">Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine your four-year-old son, Brandon, hit his younger brother. You have Brandon write 100 times \u201cI will not hit my brother\u201d (positive punishment). Chances are he won\u2019t repeat this behavior. While strategies like this are common today, in the past children were often subject to physical punishment, such as spanking. It\u2019s important to be aware of some of the drawbacks in using physical punishment on children. First, punishment may teach fear. A child who is spanked may become fearful of the punishment itself, but he also may become fearful of the person who delivered the punishment\u2014you, his parent. Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, most schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). They see their parents resort to spanking when they become angry and frustrated, so, in turn, they may act out this same behavior when they become angry and frustrated. For example, because you spank Brenda when you are angry with her for her misbehavior, she might start hitting her friends when they won\u2019t share their toys.<\/p>\r\n<p id=\"fs-idm71039648\">While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. 
Today\u2019s psychologists and parenting experts favor reinforcement over punishment\u2014they recommend that you catch your child doing something good and reward her for it.<\/p>\r\n<h2 data-type=\"title\">Shaping<\/h2>\r\n<div id=\"fs-idm68615072\" class=\"bc-section section\" data-depth=\"2\">\r\n<p id=\"fs-idm28441328\">In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in <span data-type=\"term\">shaping<\/span>, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following:<\/p>\r\n\r\n<div id=\"fs-idm150457776\" data-type=\"list\" data-list-type=\"enumerated\" data-number-style=\"arabic\">\r\n<div data-type=\"item\">Reinforce any response that resembles the desired behavior.<\/div>\r\n<div data-type=\"item\">Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.<\/div>\r\n<div data-type=\"item\">Next, begin to reinforce the response that even more closely resembles the desired behavior.<\/div>\r\n<div data-type=\"item\">Continue to reinforce closer and closer approximations of the desired behavior.<\/div>\r\n<div data-type=\"item\">Finally, only reinforce the desired behavior.<\/div>\r\n<\/div>\r\n<p id=\"fs-idp67217216\">Shaping is often used in teaching a complex behavior or chain of behaviors. 
Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov\u2019s dogs\u2014he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.<span id=\"fs-idp12279424\" data-type=\"media\" data-alt=\"\"><\/span><\/p>\r\n\r\n<div id=\"fs-idp36100464\" class=\"note psychology link-to-learning\" data-type=\"note\" data-has-label=\"true\" data-label=\"Link to Learning\">\r\n<div class=\"textbox\">\r\n\r\nHere is a brief video of Skinner\u2019s pigeons playing ping pong: <a href=\"https:\/\/www.youtube.com\/watch?v=vGazyH6fQQ4\">BF Skinner Foundation - Pigeon Ping Pong Clip<\/a>.\r\n\r\n[embed]https:\/\/www.youtube.com\/embed\/vGazyH6fQQ4[\/embed]\r\n\r\n<\/div>\r\n<\/div>\r\n<p id=\"fs-idm52775424\">It\u2019s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let\u2019s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. 
Finally, he cleans his entire room.<\/p>\r\n\r\n<h2 data-type=\"title\">Test Your Understanding<\/h2>\r\n<div class=\"textbox shaded\">[h5p id=\"158\"]<\/div>\r\n&nbsp;\r\n\r\n<\/div>\r\n<\/div>\r\n<div id=\"fs-idp92695056\" class=\"bc-section section\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Primary and Secondary Reinforcers<\/h1>\r\n<p id=\"fs-idp91000112\">Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let\u2019s go back to Skinner\u2019s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer.<\/p>\r\n<p id=\"fs-idp17704528\">What would be a good reinforcer for humans? For your son Jerome, it was the promise of a toy if he cleaned his room. How about Joaquin, a young soccer player? If you gave Joaquin a piece of candy every time he made a goal, you would be using a <span data-type=\"term\">primary reinforcer<\/span>. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing and the cool lake would be innately reinforcing\u2014the water would cool the person off (a physical need), as well as provide pleasure.<\/p>\r\n<p id=\"fs-idm40453856\">A <span data-type=\"term\">secondary reinforcer<\/span> has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out \u201cGreat shot!\u201d every time Joaquin made a goal. 
Another example, money, is only worth something when you can use it to buy other things\u2014either things that satisfy basic needs (food, water, shelter\u2014all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers.<\/p>\r\n<p id=\"fs-idm76039456\">Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a \u201cquiet hands\u201d token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime.<\/p>\r\n\r\n<div id=\"fs-idp834352\" class=\"note psychology everyday-connection\" data-type=\"note\" data-has-label=\"true\" data-label=\"Everyday Connection\">\r\n<div class=\"title\" data-type=\"title\">Behavior Modification in Children<\/div>\r\n<p id=\"fs-idp61089648\">Parents and teachers often use behavior modification to change a child\u2019s behavior. 
Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are replaced with more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed. Sticker charts are a form of token economies, as described in the text. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize, or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently.<\/p>\r\n\r\n<div id=\"Figure06_03_Stickers\" class=\"bc-figure figure\">\r\n\r\n[caption id=\"\" align=\"aligncenter\" width=\"488\"]<img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160059\/CNX_Psych_06_03_Stickers.jpg\" alt=\"A photograph shows a child placing stickers on a chart hanging on the wall.\" width=\"488\" height=\"325\" data-media-type=\"image\/jpeg\" \/> Sticker charts are a form of positive reinforcement and a tool for behavior modification. Once this little girl earns a certain number of stickers for demonstrating a desired behavior, she will be rewarded with a trip to the ice cream parlor. (credit: Abigail Batchelder)[\/caption]\r\n\r\n<\/div>\r\n<p id=\"fs-idm65104\">Time-out is another popular technique used in behavior modification with children. 
It operates on the principle of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand. For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn\u2019t throw blocks.<\/p>\r\n<p id=\"fs-idp70817968\">There are several important points that you should know if you plan to implement time-out as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child\u2019s age. Sophia is five; therefore, she sits in a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over.<\/p>\r\n\r\n<div id=\"Figure06_03_Timeout\" class=\"bc-figure figure\">\r\n\r\n[caption id=\"\" align=\"aligncenter\" width=\"649\"]<img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160103\/CNX_Psych_06_03_Timeout.jpg\" alt=\"Photograph A shows several children climbing on playground equipment. 
Photograph B shows a child sitting alone at a table looking at the playground.\" width=\"649\" height=\"231\" data-media-type=\"image\/jpeg\" \/> Time-out is a popular form of negative punishment used by caregivers. When a child misbehaves, he or she is removed from a desirable activity in an effort to decrease the unwanted behavior. For example, (a) a child might be playing on the playground with friends and push another child; (b) the child who misbehaved would then be removed from the activity for a short period of time. (credit a: modification of work by Simone Ramella; credit b: modification of work by \u201cJefferyTurner\u201d\/Flickr)[\/caption]\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div id=\"fs-idm69586640\" class=\"bc-section section\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Reinforcement Schedules<\/h1>\r\nRemember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called <span data-type=\"term\">continuous reinforcement<\/span>. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let\u2019s look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him a treat. 
Timing is important here: you will be most successful if you present the reinforcer immediately after he sits, so that he can make an association between the target behavior (sitting) and the consequence (getting a treat).\r\n<div class=\"textbox\">\r\n\r\nWatch this video clip where veterinarian Dr. Sophia Yin shapes a dog\u2019s behavior using the steps outlined above: <a href=\"https:\/\/www.youtube.com\/watch?v=L0XuafyPwkg&amp;feature=emb_rel_pause\">Free Shaping with an Australian CattleDog | drsophiayin.com<\/a>.\r\n\r\n[embed]https:\/\/www.youtube.com\/embed\/L0XuafyPwkg[\/embed]\r\n\r\n<\/div>\r\n<p id=\"fs-idm40360032\">Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule\u2014partial reinforcement. In <span data-type=\"term\">partial reinforcement<\/span>, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules. These schedules are described as either fixed or variable, and as either interval or ratio. <em data-effect=\"italics\">Fixed<\/em> refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. <em data-effect=\"italics\">Variable<\/em> refers to the number of responses or amount of time between reinforcements, which varies or changes. <em data-effect=\"italics\">Interval<\/em> means the schedule is based on the time between reinforcements, and <em data-effect=\"italics\">ratio<\/em> means the schedule is based on the number of responses between reinforcements.<\/p>\r\n\r\n<table id=\"fs-idp66772976\" style=\"width: 100%\" summary=\"This table has four columns and five rows. 
The first row is a header row with these headings: \u201creinforcement schedule,\u201d \u201cdescription,\u201d \u201cresult,\u201d and \u201cexample.\u201d Row 1 is labeled \u201cfixed interval\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes)\u201d; the \u201cresult\u201d reads \u201cModerate response rate with significant pauses after reinforcement\u201d; the \u201cexample\u201d reads \u201cHospital patient uses patient-controlled, doctor-timed pain relief.\u201d Row 2 is labeled \u201cvariable interval\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes)\u201d; the \u201cresult\u201d reads \u201cModerate yet steady response rate\u201d; the \u201cexample\u201d reads \u201cChecking Facebook.\u201d Row 3 is labeled \u201cfixed ratio\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses)\u201d; the \u201cresult\u201d reads \u201cHigh response rate with pauses after reinforcement\u201d; the \u201cexample\u201d reads \u201cPiecework\u2014factory worker getting paid for every x number of items manufactured.\u201d Row 4 is labeled \u201cvariable ratio\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses).\u201d; the \u201cresult\u201d reads \u201cHigh and steady response rate\u201d; the \u201cexample\u201d reads \u201cGambling.\u201d\"><caption><span data-type=\"title\">Reinforcement Schedules<\/span><\/caption><colgroup> <col data-width=\"100\" \/> <col data-width=\"200\" \/> <col data-width=\"200\" \/> <col data-width=\"200\" \/><\/colgroup>\r\n<thead>\r\n<tr style=\"height: 34px\">\r\n<th style=\"height: 34px;width: 106.906px\">Reinforcement Schedule<\/th>\r\n<th style=\"height: 34px;width: 
343.906px\" data-valign=\"top\">Description<\/th>\r\n<th style=\"height: 34px;width: 211.906px\" data-valign=\"top\">Result<\/th>\r\n<th style=\"height: 34px;width: 266.906px\" data-valign=\"top\">Example<\/th>\r\n<\/tr>\r\n<\/thead>\r\n<tbody>\r\n<tr style=\"height: 52px\">\r\n<td style=\"height: 52px;width: 106.906px\">Fixed interval<\/td>\r\n<td style=\"height: 52px;width: 344.906px\">Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes).<\/td>\r\n<td style=\"height: 52px;width: 212.906px\">Moderate response rate with significant pauses after reinforcement<\/td>\r\n<td style=\"height: 52px;width: 267.906px\">Hospital patient uses patient-controlled, doctor-timed pain relief<\/td>\r\n<\/tr>\r\n<tr style=\"height: 34px\">\r\n<td style=\"height: 34px;width: 106.906px\">Variable interval<\/td>\r\n<td style=\"height: 34px;width: 344.906px\">Reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes).<\/td>\r\n<td style=\"height: 34px;width: 212.906px\">Moderate yet steady response rate<\/td>\r\n<td style=\"height: 34px;width: 267.906px\">Checking Facebook<\/td>\r\n<\/tr>\r\n<tr style=\"height: 52px\">\r\n<td style=\"height: 52px;width: 106.906px\">Fixed ratio<\/td>\r\n<td style=\"height: 52px;width: 344.906px\">Reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses).<\/td>\r\n<td style=\"height: 52px;width: 212.906px\">High response rate with pauses after reinforcement<\/td>\r\n<td style=\"height: 52px;width: 267.906px\">Piecework\u2014factory worker getting paid for every x number of items manufactured<\/td>\r\n<\/tr>\r\n<tr style=\"height: 52px\">\r\n<td style=\"height: 52px;width: 106.906px\">Variable ratio<\/td>\r\n<td style=\"height: 52px;width: 344.906px\">Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses).<\/td>\r\n<td style=\"height: 52px;width: 212.906px\">High and steady 
response rate<\/td>\r\n<td style=\"height: 52px;width: 267.906px\">Gambling<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<p id=\"fs-idp92292992\">Now let\u2019s combine these four terms. A <span data-type=\"term\">fixed interval reinforcement schedule<\/span> is when behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.<\/p>\r\n<p id=\"fs-idm73740432\">With a <span data-type=\"term\">variable interval reinforcement schedule<\/span>, the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food restaurant. Every once in a while someone from the quality control division comes to Manuel\u2019s restaurant. If the restaurant is clean and the service is fast, everyone on that shift earns a $20 bonus. Manuel never knows when the quality control person will show up, so he always tries to keep the restaurant clean and ensures that his employees provide prompt and courteous service. His productivity in providing prompt service and keeping a clean restaurant is steady because he wants his crew to earn the bonus.<\/p>\r\n<p id=\"fs-idm38553920\">With a <span data-type=\"term\">fixed ratio reinforcement schedule<\/span>, there are a set number of responses that must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time she sells a pair of glasses. 
She always tries to sell people more pairs of glasses, including prescription sunglasses or a backup pair, so she can increase her commission. She does not care whether the person really needs the prescription sunglasses; Carla just wants her commission. The quality of what Carla sells does not matter because her commission is not based on quality; it\u2019s only based on the number of pairs sold. This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.<\/p>\r\n<p id=\"fs-idp60038688\">In a <span data-type=\"term\">variable ratio reinforcement schedule<\/span>, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah\u2014generally a smart, thrifty woman\u2014visits Las Vegas for the first time. She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That\u2019s more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is $10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win $50, or $100, or even more. Because most types of gambling operate on a variable ratio schedule, people keep trying and hoping that the next time they will win big. 
This is one of the reasons that gambling is so addictive\u2014and so resistant to extinction.<\/p>\r\n<p id=\"fs-idm69667632\">In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule, the point of extinction comes very slowly, as described above. But in the other reinforcement schedules, extinction may come quickly. For example, if June presses the button for the pain relief medication before the allotted time her doctor has approved, no medication is administered. She is on a fixed interval reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn\u2019t come at the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish.<\/p>\r\n\r\n<div id=\"Figure06_03_Response\" class=\"bc-figure figure\">\r\n\r\n[caption id=\"\" align=\"aligncenter\" width=\"487\"]<img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160106\/CNX_Psych_06_03_Response.jpg\" alt=\"A graph has an x-axis labeled \u201cTime\u201d and a y-axis labeled \u201cCumulative number of responses.\u201d Two lines labeled \u201cVariable Ratio\u201d and \u201cFixed Ratio\u201d have similar, steep slopes. The variable ratio line remains straight and is marked in random points where reinforcement occurs. The fixed ratio line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a small drop in the line before it resumes its overall slope. Two lines labeled \u201cVariable Interval\u201d and \u201cFixed Interval\u201d have similar slopes at roughly a 45-degree angle. The variable interval line remains straight and is marked in random points where reinforcement occurs. 
The fixed interval line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a drop in the line.\" width=\"487\" height=\"360\" data-media-type=\"image\/jpeg\" \/> The four reinforcement schedules yield different response patterns. The variable ratio schedule is unpredictable and yields high and steady response rates, with little if any pause after reinforcement (e.g., gambler). A fixed ratio schedule is predictable and produces a high response rate, with a short pause after reinforcement (e.g., eyeglass saleswoman). The variable interval schedule is unpredictable and produces a moderate, steady response rate (e.g., restaurant manager). The fixed interval schedule yields a scallop-shaped response pattern, reflecting a significant pause after reinforcement (e.g., surgery patient).[\/caption]\r\n\r\n<\/div>\r\n<div id=\"fs-idp12379456\" class=\"note psychology connect-the-concepts\" data-type=\"note\" data-has-label=\"true\" data-label=\"Connect the Concepts\">\r\n<h2 data-type=\"title\">Test Your Understanding<\/h2>\r\n<div class=\"textbox shaded\">[h5p id=\"160\"]<\/div>\r\n&nbsp;\r\n<h2 class=\"title\" data-type=\"title\">Gambling and the Brain<\/h2>\r\n<p id=\"fs-idm87670320\">Skinner (1953) stated, \u201cIf the gambling establishment cannot persuade a patron to turn over money with no return, it may achieve the same effect by returning part of the patron's money on a variable-ratio schedule\u201d (p. 397).<\/p>\r\n<p id=\"fs-idp3278368\">Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable ratio reinforcement schedule. In fact, Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler (\u201cSkinner\u2019s Utopia,\u201d 1971). Beyond the power of variable ratio reinforcement, gambling seems to work on the brain in the same way as some addictive drugs. 
The Illinois Institute for Addiction Recovery (n.d.) reports evidence suggesting that pathological gambling is an addiction similar to a chemical addiction. Specifically, gambling may activate the reward centers of the brain, much like cocaine does. Research has shown that some pathological gamblers have lower levels of the neurotransmitter (brain chemical) known as norepinephrine than do normal gamblers (Roy et al., 1988). According to a study conducted by Alec Roy and colleagues, norepinephrine is secreted when a person feels stress, arousal, or thrill; pathological gamblers use gambling to increase their levels of this neurotransmitter. Another researcher, neuroscientist Hans Breiter, has done extensive research on gambling and its effects on the brain. Breiter (as cited in Franzen, 2001) reports that \u201cMonetary reward in a gambling-like experiment produces brain activation very similar to that observed in a cocaine addict receiving an infusion of cocaine\u201d (para. 1). Deficiencies in serotonin (another neurotransmitter) might also contribute to compulsive behavior, including a gambling addiction.<\/p>\r\n<p id=\"fs-idm32812400\">It may be that pathological gamblers\u2019 brains are different from those of other people, and perhaps this difference somehow led to their gambling addiction, as these studies seem to suggest. However, it is very difficult to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the opposite direction\u2014perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers\u2019 brains. 
It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling addiction and the differences in brain chemistry.<\/p>\r\n\r\n<div id=\"Figure06_03_Gambling\" class=\"bc-figure figure\">\r\n\r\n[caption id=\"\" align=\"aligncenter\" width=\"488\"]<img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160111\/CNX_Psych_06_03_Gambling.jpg\" alt=\"A photograph shows four digital gaming machines.\" width=\"488\" height=\"325\" data-media-type=\"image\/jpeg\" \/> Some research suggests that pathological gamblers use gambling to compensate for abnormally low levels of the hormone norepinephrine, which is associated with stress and is secreted in moments of arousal and thrill. (credit: Ted Murphy)[\/caption]\r\n\r\n<\/div>\r\n&nbsp;\r\n\r\n<span style=\"font-family: Helvetica, Arial, 'GFS Neohellenic', sans-serif;font-size: 1.2em;font-weight: bold\">Cognition and Latent Learning<\/span>\r\n\r\n<\/div>\r\n<\/div>\r\n<div id=\"fs-idp21904336\" class=\"bc-section section\" data-depth=\"1\">\r\n<p id=\"fs-idp12478528\">Although strict behaviorists such as Skinner and Watson refused to believe that cognition (such as thoughts and expectations) plays a role in learning, another behaviorist, Edward C. <span class=\"no-emphasis\" data-type=\"term\">Tolman<\/span>, had a different opinion. Tolman\u2019s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement (Tolman &amp; Honzik, 1930; Tolman, Ritchie, &amp; Kalish, 1946). This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.<\/p>\r\n<p id=\"fs-idp18878912\">In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. 
As the unreinforced rats explored the maze, they developed a <span data-type=\"term\">cognitive map<\/span>: a mental picture of the layout of the maze. After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they were able to find their way through the maze just as quickly as the comparison group, which had been rewarded with food all along. This is known as <span data-type=\"term\">latent learning<\/span>: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.<\/p>\r\n\r\n<div id=\"Figure06_03_Ratmaze\" class=\"bc-figure figure\">\r\n\r\n[caption id=\"\" align=\"aligncenter\" width=\"975\"]<img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160115\/CNX_Psych_06_03_Ratmaze.jpg\" alt=\"An illustration shows three rats in a maze, with a starting point and food at the end.\" width=\"975\" height=\"700\" data-media-type=\"image\/jpeg\" \/> Psychologist Edward Tolman found that rats use cognitive maps to navigate through a maze. Have you ever worked your way through various levels on a video game? You learned when to turn left or right, move up or down. In that case you were relying on a cognitive map, just like the rats in a maze. (credit: modification of work by \"FutUndBeidl\"\/Flickr)[\/caption]\r\n\r\n<\/div>\r\n<p id=\"fs-idp90491328\">Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi\u2019s dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but he\u2019s never driven there himself, so he has not had a chance to demonstrate that he\u2019s learned the way. One morning Ravi\u2019s dad has to leave early for a meeting, so he can\u2019t drive Ravi to school. 
Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning. Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier.<\/p>\r\n\r\n<div id=\"fs-idm100264144\" class=\"note psychology everyday-connection\" data-type=\"note\" data-has-label=\"true\" data-label=\"Everyday Connection\">\r\n<div class=\"title\" data-type=\"title\">This Place Is Like a Maze<\/div>\r\n<p id=\"fs-idm40396976\">Have you ever gotten lost in a building and couldn\u2019t find your way back out? While that can be frustrating, you\u2019re not alone. At one time or another we\u2019ve all gotten lost in places like a museum, hospital, or university library. Whenever we go someplace new, we build a mental representation\u2014or cognitive map\u2014of the location, as Tolman\u2019s rats built a cognitive map of their maze. However, some buildings are confusing because they include many areas that look alike or have short lines of sight. Because of this, it\u2019s often difficult to predict what\u2019s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map can impact our success in navigating through the environment. 
She suggests that paying attention to specific features upon entering a building, such as a picture on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later to help find our way out of the building.<\/p>\r\n\r\n<\/div>\r\n<div id=\"fs-idm100862352\" class=\"note psychology link-to-learning\" data-type=\"note\" data-has-label=\"true\" data-label=\"Link to Learning\">\r\n<div class=\"textbox\">\r\n\r\nWatch this video to learn more about Carlson\u2019s studies on cognitive maps and navigation in buildings: <a href=\"https:\/\/www.youtube.com\/watch?v=TU6tSkdbPh4\">Getting Lost in Buildings<\/a>.\r\n\r\n[embed]https:\/\/www.youtube.com\/embed\/TU6tSkdbPh4[\/embed]\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div id=\"fs-idm44985792\" class=\"summary\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Summary<\/h1>\r\nOperant conditioning is based on the work of B. F. Skinner. Operant conditioning is a form of learning in which the motivation for a behavior happens <em data-effect=\"italics\">after<\/em> the behavior is demonstrated. An animal or a human receives a consequence after performing a specific behavior. The consequence is either a reinforcer or a punisher. All reinforcement (positive or negative) <em data-effect=\"italics\">increases<\/em> the likelihood of a behavioral response. All punishment (positive or negative) <em data-effect=\"italics\">decreases<\/em> the likelihood of a behavioral response. 
Several types of reinforcement schedules are used to reward behavior, depending on either a set or variable period of time (interval schedules) or a set or variable number of responses (ratio schedules).\r\n\r\n<\/div>\r\n<div id=\"fs-idm72743152\" class=\"review-questions\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Review Questions<\/h1>\r\n<div id=\"fs-idm71327520\" class=\"exercise\" data-type=\"exercise\">\r\n<div id=\"fs-idm99950288\" class=\"solution\" data-type=\"solution\">\r\n<p id=\"fs-idp92528784\">[h5p id=\"161\"]<\/p>\r\n\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div id=\"fs-idm13321104\" class=\"critical-thinking\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Critical Thinking Questions<\/h1>\r\n<div id=\"fs-idp93582624\" class=\"exercise\" data-type=\"exercise\">\r\n<div id=\"fs-idm33747648\" class=\"solution\" data-type=\"solution\">\r\n<div class=\"textbox shaded\"><details><summary><span style=\"font-size: 14pt\">\u00a0 \u00a0 What is a Skinner box and what is its purpose?<\/span><\/summary>A Skinner box is an operant conditioning chamber used to train animals such as rats and pigeons to perform certain behaviors, like pressing a lever. When the animals perform the desired behavior, they receive a reward: food or water.\r\n\r\n<\/details>&nbsp;\r\n\r\n<details><summary><span style=\"font-size: 14pt\">\u00a0 \u00a0 What is the difference between negative reinforcement and punishment?<\/span><\/summary>In negative reinforcement you are taking away an undesirable stimulus in order to increase the frequency of a certain behavior (e.g., buckling your seat belt stops the annoying beeping sound in your car and increases the likelihood that you will wear your seat belt). 
Punishment is designed to reduce a behavior (e.g., you scold your child for running into the street in order to decrease the unsafe behavior).\r\n\r\n<\/details>&nbsp;\r\n\r\n<details><summary><span style=\"font-size: 14pt\">\u00a0 \u00a0 What is shaping and how would you use shaping to teach a dog to roll over?<\/span><\/summary>Shaping is an operant conditioning method in which you reward closer and closer approximations of the desired behavior. If you want to teach your dog to roll over, you might reward him first when he sits, then when he lies down, and then when he lies down and rolls onto his back. Finally, you would reward him only when he completes the entire sequence: lying down, rolling onto his back, and then continuing to roll over to his other side.\r\n\r\n<\/details><\/div>\r\n<\/div>\r\n<\/div>\r\n<\/div>\r\n<div id=\"fs-idm62771840\" class=\"personal-application\" data-depth=\"1\">\r\n<h1 data-type=\"title\">Personal Application Questions<\/h1>\r\n<div id=\"fs-idp12870480\" class=\"exercise\" data-type=\"exercise\">\r\n<div id=\"fs-idp20064432\" class=\"problem\" data-type=\"problem\">Explain the difference between negative reinforcement and punishment, and provide several examples of each based on your own experiences.<\/div>\r\n&nbsp;\r\n\r\n<\/div>\r\n<div data-type=\"problem\"><\/div>\r\n<div id=\"fs-idm62771840\" class=\"personal-application\" data-depth=\"1\">\r\n<div data-type=\"problem\">Think of a behavior you have that you would like to change. How could you use behavior modification, specifically positive reinforcement, to change your behavior? 
What is your positive reinforcer?<\/div>\r\n<div id=\"fs-idm101364864\" class=\"exercise\" data-type=\"exercise\">\r\n<h1><span style=\"font-family: Helvetica, Arial, 'GFS Neohellenic', sans-serif;font-size: 1em\">Glossary<\/span><\/h1>\r\n<\/div>\r\n<\/div>\r\n[h5p id=\"164\"]\r\n<h3>Media Attributions<\/h3>\r\n<ul>\r\n \t<li>\"<a href=\"https:\/\/www.youtube.com\/watch?v=I_ctJqjlrHA\">Operant conditioning<\/a>\" by <a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UC3vmOr8rVhzZKYwhS_Nl6Xg\">jenningh<\/a>. Standard YouTube License.<\/li>\r\n \t<li>\"<a href=\"https:\/\/www.youtube.com\/watch?v=vGazyH6fQQ4\">BF Skinner Foundation - Pigeon Ping Pong Clip<\/a>\" by <a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UC-cO_UIkJYUacckkE7LPwTA\">bfskinnerfoundation<\/a>. Standard YouTube License.<\/li>\r\n \t<li>\"<a style=\"text-align: initial;font-size: 14pt\" href=\"https:\/\/www.youtube.com\/watch?v=L0XuafyPwkg&amp;feature=emb_rel_pause\">Free Shaping with an Australian CattleDog | drsophiayin.com<\/a><span style=\"text-align: initial;font-size: 14pt\"><span style=\"text-align: initial;font-size: 14pt\">\" by <\/span><\/span><a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UC33WtSzCCnRaY8kQqb3hDsQ\">Sophia Yin<\/a>. Standard YouTube License.<\/li>\r\n \t<li>\"<a href=\"https:\/\/www.youtube.com\/watch?v=TU6tSkdbPh4\">Getting Lost in Buildings<\/a>\" by <a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UCFxceKSCxIpEYlBKlryO6uw\">University of Notre Dame<\/a>. 
Standard YouTube License.<\/li>\r\n<\/ul>\r\n<\/div>","rendered":"<div class=\"textbox textbox--learning-objectives\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\">Learning Objectives<\/p>\n<\/header>\n<div class=\"textbox__content\">\n<p>By the end of this section, you will be able to:<\/p>\n<ul>\n<li>Define operant conditioning<\/li>\n<li>Explain the difference between reinforcement and punishment<\/li>\n<li>Distinguish between reinforcement schedules<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<p id=\"fs-idp78365344\">The previous section of this chapter focused on the type of associative learning known as classical conditioning. Remember that in classical conditioning, something in the environment triggers a reflex automatically, and researchers train the organism to react to a different stimulus. Now we turn to the second type of associative learning, <span data-type=\"term\">operant conditioning<\/span>. In operant conditioning, organisms learn to associate a behavior and its consequence. A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.<\/p>\n<table id=\"fs-idp18763408\" style=\"width: 100%\" summary=\"This table has three columns and three rows. 
The first row is a header row; the first column has no heading; the second column is labeled \u201creinforcement\u201d and the third column is labeled \u201cpunishment.\u201d The second row is labeled \u201cpositive\u201d and the third row is labeled \u201cnegative.\u201d The cell under \u201creinforcement\u201d and \u201cpositive\u201d reads, \u201cSomething is added to increase the likelihood of a behavior.\u201d The cell under \u201cpunishment\u201d and \u201cpositive\u201d reads, \u201cSomething is added to decrease the likelihood of a behavior.\u201d The cell under \u201creinforcement\u201d and \u201cnegative\u201d reads, \u201cSomething is removed to increase the likelihood of a behavior.\u201d The cell under \u201cpunishment\u201d and \u201cnegative\u201d reads, \u201cSomething is removed to decrease the likelihood of a behavior.\u201d\">\n<caption><span data-type=\"title\">Classical and Operant Conditioning Compared<\/span><\/caption>\n<colgroup>\n<col data-width=\"150\" \/>\n<col data-width=\"250\" \/>\n<col data-width=\"250\" \/><\/colgroup>\n<thead>\n<tr>\n<th><\/th>\n<th>Classical Conditioning<\/th>\n<th>Operant Conditioning<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Conditioning approach<\/td>\n<td>An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation).<\/td>\n<td>The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future.<\/td>\n<\/tr>\n<tr>\n<td>Stimulus timing<\/td>\n<td>The stimulus occurs immediately before the response.<\/td>\n<td>The stimulus (either reinforcement or punishment) occurs soon after the response.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p id=\"fs-idm74300512\">Psychologist B. F. 
<span class=\"no-emphasis\" data-type=\"term\">Skinner<\/span> saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn\u2019t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward <span class=\"no-emphasis\" data-type=\"term\">Thorndike<\/span>. According to the <span data-type=\"term\">law of effect<\/span>, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about a desired result, the organism is less likely to do it again. An example of the law of effect is in employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up\u2014even if we love our job.<\/p>\n<p id=\"fs-idm71227408\">Working with Thorndike\u2019s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a \u201cSkinner box\u201d. A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors. 
A recorder counts the number of responses made by the animal.<\/p>\n<div id=\"Figure06_03_Skinnerbox\" class=\"bc-figure figure\">\n<figure style=\"width: 649px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160056\/CNX_Psych_06_03_Skinnerbox_n.jpg\" alt=\"A photograph shows B.F. Skinner. An illustration shows a rat in a Skinner box: a chamber with a speaker, lights, a lever, and a food dispenser.\" width=\"649\" height=\"255\" data-media-type=\"image\/jpeg\" \/><figcaption class=\"wp-caption-text\">(a) B. F. Skinner developed operant conditioning for systematic study of how behaviors are strengthened or weakened according to their consequences. (b) In a Skinner box, a rat presses a lever in an operant conditioning chamber to receive a food reward. (credit a: modification of work by &#8220;Silly rabbit&#8221;\/Wikimedia Commons)<\/figcaption><\/figure>\n<\/div>\n<p id=\"fs-idm87885600\">In discussing operant conditioning, we use several everyday words\u2014positive, negative, reinforcement, and punishment\u2014in a specialized manner. In operant conditioning, positive and negative do not mean good and bad. Instead, <em data-effect=\"italics\">positive<\/em> means you are adding something, and <em data-effect=\"italics\">negative<\/em> means you are taking something away. <em data-effect=\"italics\">Reinforcement<\/em> means you are increasing a behavior, and <em data-effect=\"italics\">punishment<\/em> means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) <em data-effect=\"italics\">increase<\/em> the likelihood of a behavioral response. All punishers (positive or negative) <em data-effect=\"italics\">decrease<\/em> the likelihood of a behavioral response. 
Now let\u2019s combine these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment.<\/p>\n<table id=\"fs-idm23618624\" style=\"width: 100%\" summary=\"This table has three columns and three rows. The first row is a header row; the first column has no heading; the second column is labeled \u201creinforcement\u201d and the third column is labeled \u201cpunishment.\u201d The second row is labeled \u201cpositive\u201d and the third row is labeled \u201cnegative.\u201d The cell under \u201creinforcement\u201d and \u201cpositive\u201d reads, \u201cSomething is added to increase the likelihood of a behavior.\u201d The cell under \u201cpunishment\u201d and \u201cpositive\u201d reads, \u201cSomething is added to decrease the likelihood of a behavior.\u201d The cell under \u201creinforcement\u201d and \u201cnegative\u201d reads, \u201cSomething is removed to increase the likelihood of a behavior.\u201d The cell under \u201cpunishment\u201d and \u201cnegative\u201d reads, \u201cSomething is removed to decrease the likelihood of a behavior.\u201d\">\n<caption><span data-type=\"title\">Positive and Negative Reinforcement and Punishment<\/span><\/caption>\n<colgroup>\n<col data-width=\"150\" \/>\n<col data-width=\"250\" \/>\n<col data-width=\"250\" \/><\/colgroup>\n<thead>\n<tr>\n<th><\/th>\n<th>Reinforcement<\/th>\n<th>Punishment<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Positive<\/td>\n<td>Something is <em data-effect=\"italics\">added<\/em> to <em data-effect=\"italics\">increase<\/em> the likelihood of a behavior.<\/td>\n<td>Something is <em data-effect=\"italics\">added<\/em> to <em data-effect=\"italics\">decrease<\/em> the likelihood of a behavior.<\/td>\n<\/tr>\n<tr>\n<td>Negative<\/td>\n<td>Something is <em data-effect=\"italics\">removed<\/em> to <em data-effect=\"italics\">increase<\/em> the likelihood of a behavior.<\/td>\n<td>Something is <em data-effect=\"italics\">removed<\/em> to <em 
data-effect=\"italics\">decrease<\/em> the likelihood of a behavior.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div id=\"fs-idm83383296\" class=\"bc-section section\" data-depth=\"1\">\n<h1 data-type=\"title\">Reinforcement<\/h1>\n<p id=\"fs-idm38842112\">The most effective way to teach a person or animal a new behavior is with positive reinforcement. In <span data-type=\"term\">positive reinforcement<\/span>, a desirable stimulus is added to increase a behavior.<\/p>\n<p id=\"fs-idp79926416\">For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set. Let\u2019s pause for a moment. Some people might say, \u201cWhy should I reward my child for doing what is expected?\u201d But in fact we are constantly and consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into our preferred school. Being praised for doing a good job and for passing a driver\u2019s test is also a reward. Positive reinforcement as a learning tool is extremely effective. It has been found that one of the most effective ways to increase achievement in school districts with below-average reading scores was to pay the children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book and passed a short quiz about the book. The result was a significant increase in reading comprehension (Fryer, 2010). What do you think about this program? If Skinner were alive today, he would probably think this was a great idea. He was a strong proponent of using operant conditioning principles to influence students\u2019 behavior at school. In fact, in addition to the Skinner box, he also invented what he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)\u2014an early forerunner of computer-assisted learning. 
His teaching machine tested students\u2019 knowledge as they worked through various school subjects. If students answered questions correctly, they received immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive any reinforcement. The idea was that students would spend additional time studying the material to increase their chance of being reinforced the next time (Skinner, 1961).<\/p>\n<p id=\"fs-idp10041904\">In <span data-type=\"term\">negative reinforcement<\/span>, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go \u201cbeep, beep, beep\u201d until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure\u2014by pulling the reins or squeezing their legs\u2014and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the aversive stimulus that the horse wants to remove.<\/p>\n<\/div>\n<div id=\"fs-idp10589648\" class=\"bc-section section\" data-depth=\"1\">\n<h1 data-type=\"title\">Punishment<\/h1>\n<p id=\"fs-idp69373824\">Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, <span data-type=\"term\">punishment<\/span> always decreases a behavior. In <span data-type=\"term\">positive punishment<\/span>, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class).
In <span data-type=\"term\">negative punishment<\/span>, you remove an aversive stimulus to decrease behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior.<\/p>\n<p id=\"fs-idp18787968\">Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine your four-year-old son, Brandon, hit his younger brother. You have Brandon write 100 times \u201cI will not hit my brother&#8221; (positive punishment). Chances are he won\u2019t repeat this behavior. While strategies like this are common today, in the past children were often subject to physical punishment, such as spanking. It\u2019s important to be aware of some of the drawbacks in using physical punishment on children. First, punishment may teach fear. Brandon may become fearful of the street, but he also may become fearful of the person who delivered the punishment\u2014you, his parent. Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, most schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). They see their parents resort to spanking when they become angry and frustrated, so, in turn, they may act out this same behavior when they become angry and frustrated. For example, because you spank Brenda when you are angry with her for her misbehavior, she might start hitting her friends when they won\u2019t share their toys.<\/p>\n<p id=\"fs-idm71039648\">While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. 
Today\u2019s psychologists and parenting experts favor reinforcement over punishment\u2014they recommend that you catch your child doing something good and reward her for it.<\/p>\n<p><span style=\"font-family: Helvetica, Arial, 'GFS Neohellenic', sans-serif;font-size: 1em;font-weight: bold\">Shaping<\/span><\/p>\n<div id=\"fs-idm68615072\" class=\"bc-section section\" data-depth=\"2\">\n<p id=\"fs-idm28441328\">In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in <span data-type=\"term\">shaping<\/span>, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following:<\/p>\n<div id=\"fs-idm150457776\" data-type=\"list\" data-list-type=\"enumerated\" data-number-style=\"arabic\">\n<div data-type=\"item\">Reinforce any response that resembles the desired behavior.<\/div>\n<div data-type=\"item\">Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.<\/div>\n<div data-type=\"item\">Next, begin to reinforce the response that even more closely resembles the desired behavior.<\/div>\n<div data-type=\"item\">Continue to reinforce closer and closer approximations of the desired behavior.<\/div>\n<div data-type=\"item\">Finally, only reinforce the desired behavior.<\/div>\n<\/div>\n<p id=\"fs-idp67217216\">Shaping is often used in teaching a complex behavior or chain of behaviors. 
Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov\u2019s dogs\u2014he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.<span id=\"fs-idp12279424\" data-type=\"media\" data-alt=\"\"><\/span><\/p>\n<div id=\"fs-idp36100464\" class=\"note psychology link-to-learning\" data-type=\"note\" data-has-label=\"true\" data-label=\"Link to Learning\">\n<div class=\"textbox\">\n<p>Here is a brief video of Skinner\u2019s pigeons playing ping pong: <a href=\"https:\/\/www.youtube.com\/watch?v=vGazyH6fQQ4\">BF Skinner Foundation &#8211; Pigeon Ping Pong Clip<\/a>.<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-1\" title=\"BF Skinner Foundation - Pigeon Ping Pong Clip\" width=\"500\" height=\"375\" src=\"https:\/\/www.youtube.com\/embed\/vGazyH6fQQ4?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<\/div>\n<\/div>\n<p id=\"fs-idm52775424\">It\u2019s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let\u2019s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. 
Finally, he cleans his entire room.<\/p>\n<h2 data-type=\"title\">Test Your Understanding<\/h2>\n<div class=\"textbox shaded\">\n<div id=\"h5p-158\">\n<div class=\"h5p-iframe-wrapper\"><iframe id=\"h5p-iframe-158\" class=\"h5p-iframe\" data-content-id=\"158\" style=\"height:1px\" src=\"about:blank\" frameBorder=\"0\" scrolling=\"no\" title=\"Operant Conditioning -- Reinforcement vs. Punishment\"><\/iframe><\/div>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div id=\"fs-idp92695056\" class=\"bc-section section\" data-depth=\"1\">\n<h1 data-type=\"title\">Primary and Secondary Reinforcers<\/h1>\n<p id=\"fs-idp91000112\">Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let\u2019s go back to Skinner\u2019s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer.<\/p>\n<p id=\"fs-idp17704528\">What would be a good reinforcer for humans? For your son Jerome, it was the promise of a toy if he cleaned his room. How about Joaquin, the soccer player? If you gave Joaquin a piece of candy every time he made a goal, you would be using a <span data-type=\"term\">primary reinforcer<\/span>. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing and the cool lake would be innately reinforcing\u2014the water would cool the person off (a physical need), as well as provide pleasure.<\/p>\n<p id=\"fs-idm40453856\">A <span data-type=\"term\">secondary reinforcer<\/span> has no inherent value and only has reinforcing qualities when linked with a primary reinforcer.
Praise, linked to affection, is one example of a secondary reinforcer, as when you called out \u201cGreat shot!\u201d every time Joaquin made a goal. Another example, money, is only worth something when you can use it to buy other things\u2014either things that satisfy basic needs (food, water, shelter\u2014all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers.<\/p>\n<p id=\"fs-idm76039456\">Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a \u201cquiet hands\u201d token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime.<\/p>\n<div id=\"fs-idp834352\" class=\"note psychology everyday-connection\" data-type=\"note\" data-has-label=\"true\" data-label=\"Everyday Connection\">\n<div class=\"title\" data-type=\"title\">Behavior Modification in Children<\/div>\n<p id=\"fs-idp61089648\">Parents and teachers often use behavior modification to change a child\u2019s behavior. 
Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed. Sticker charts are a form of token economies, as described in the text. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize, or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently.<\/p>\n<div id=\"Figure06_03_Stickers\" class=\"bc-figure figure\">\n<figure style=\"width: 488px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160059\/CNX_Psych_06_03_Stickers.jpg\" alt=\"A photograph shows a child placing stickers on a chart hanging on the wall.\" width=\"488\" height=\"325\" data-media-type=\"image\/jpeg\" \/><figcaption class=\"wp-caption-text\">Sticker charts are a form of positive reinforcement and a tool for behavior modification. Once this little girl earns a certain number of stickers for demonstrating a desired behavior, she will be rewarded with a trip to the ice cream parlor. 
(credit: Abigail Batchelder)<\/figcaption><\/figure>\n<\/div>\n<p id=\"fs-idm65104\">Time-out is another popular technique used in behavior modification with children. It operates on the principle of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand. For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn\u2019t throw blocks.<\/p>\n<p id=\"fs-idp70817968\">There are several important points that you should know if you plan to implement time-out as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child\u2019s age. Sophia is five; therefore, she sits in a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. 
Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over.<\/p>\n<div id=\"Figure06_03_Timeout\" class=\"bc-figure figure\">\n<figure style=\"width: 649px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160103\/CNX_Psych_06_03_Timeout.jpg\" alt=\"Photograph A shows several children climbing on playground equipment. Photograph B shows a child sitting alone at a table looking at the playground.\" width=\"649\" height=\"231\" data-media-type=\"image\/jpeg\" \/><figcaption class=\"wp-caption-text\">Time-out is a popular form of negative punishment used by caregivers. When a child misbehaves, he or she is removed from a desirable activity in an effort to decrease the unwanted behavior. For example, (a) a child might be playing on the playground with friends and push another child; (b) the child who misbehaved would then be removed from the activity for a short period of time. (credit a: modification of work by Simone Ramella; credit b: modification of work by \u201cJefferyTurner\u201d\/Flickr)<\/figcaption><\/figure>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"fs-idm69586640\" class=\"bc-section section\" data-depth=\"1\">\n<h1 data-type=\"title\">Reinforcement Schedules<\/h1>\n<p>Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet of food. 
Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called <span data-type=\"term\">continuous reinforcement<\/span>. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let\u2019s look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him a treat. Timing is important here: you will be most successful if you present the reinforcer immediately after he sits, so that he can make an association between the target behavior (sitting) and the consequence (getting a treat).<\/p>\n<div class=\"textbox\">\n<p>Watch this video clip where veterinarian Dr. Sophia Yin shapes a dog\u2019s behavior using the steps outlined above: <a href=\"https:\/\/www.youtube.com\/watch?v=L0XuafyPwkg&amp;feature=emb_rel_pause\">Free Shaping with an Australian CattleDog | drsophiayin.com<\/a>.<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-2\" title=\"Free Shaping with an Australian CattleDog | drsophiayin.com\" width=\"500\" height=\"375\" src=\"https:\/\/www.youtube.com\/embed\/L0XuafyPwkg?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<\/div>\n<p id=\"fs-idm40360032\">Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule\u2014partial reinforcement. In <span data-type=\"term\">partial reinforcement<\/span>, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules. These schedules are described as either fixed or variable, and as either interval or ratio. <em data-effect=\"italics\">Fixed<\/em> refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. 
<em data-effect=\"italics\">Variable<\/em> refers to the number of responses or amount of time between reinforcements, which varies or changes. <em data-effect=\"italics\">Interval<\/em> means the schedule is based on the time between reinforcements, and <em data-effect=\"italics\">ratio<\/em> means the schedule is based on the number of responses between reinforcements.<\/p>\n<table id=\"fs-idp66772976\" style=\"width: 100%\" summary=\"This table has four columns and five rows. The first row is a header row with these headings: \u201creinforcement schedule,\u201d \u201cdescription,\u201d \u201cresult,\u201d and \u201cexample.\u201d Row 1 is labeled \u201cfixed interval\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes)\u201d; the \u201cresult\u201d reads \u201cModerate response rate with significant pauses after reinforcement\u201d; the \u201cexample\u201d reads \u201cHospital patient uses patient-controlled, doctor-timed pain relief.\u201d Row 2 is labeled \u201cfixed interval\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes)\u201d; the \u201cresult\u201d reads \u201cModerate yet steady response rate\u201d; the \u201cexample\u201d reads \u201cChecking Facebook.\u201d Row 3 is labeled \u201cfixed ratio\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses)\u201d; the \u201cresult\u201d reads \u201cHigh response rate with pauses after reinforcement\u201d; the \u201cexample\u201d reads \u201cPiecework\u2014factory worker getting paid for every x number of items manufactured.\u201d Row 4 is labeled \u201cvariable ratio\u201d; the \u201cdescription\u201d reads \u201cReinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses).\u201d; the 
\u201cresult\u201d reads \u201cHigh and steady response rate\u201d; the \u201cexample\u201d reads \u201cGambling.\u201d\">\n<caption><span data-type=\"title\">Reinforcement Schedules<\/span><\/caption>\n<colgroup>\n<col data-width=\"100\" \/>\n<col data-width=\"200\" \/>\n<col data-width=\"200\" \/>\n<col data-width=\"200\" \/><\/colgroup>\n<thead>\n<tr style=\"height: 34px\">\n<th style=\"height: 34px;width: 106.906px\">Reinforcement Schedule<\/th>\n<th style=\"height: 34px;width: 343.906px\" data-valign=\"top\">Description<\/th>\n<th style=\"height: 34px;width: 211.906px\" data-valign=\"top\">Result<\/th>\n<th style=\"height: 34px;width: 266.906px\" data-valign=\"top\">Example<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"height: 52px\">\n<td style=\"height: 52px;width: 106.906px\">Fixed interval<\/td>\n<td style=\"height: 52px;width: 344.906px\">Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes).<\/td>\n<td style=\"height: 52px;width: 212.906px\">Moderate response rate with significant pauses after reinforcement<\/td>\n<td style=\"height: 52px;width: 267.906px\">Hospital patient uses patient-controlled, doctor-timed pain relief<\/td>\n<\/tr>\n<tr style=\"height: 34px\">\n<td style=\"height: 34px;width: 106.906px\">Variable interval<\/td>\n<td style=\"height: 34px;width: 344.906px\">Reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes).<\/td>\n<td style=\"height: 34px;width: 212.906px\">Moderate yet steady response rate<\/td>\n<td style=\"height: 34px;width: 267.906px\">Checking Facebook<\/td>\n<\/tr>\n<tr style=\"height: 52px\">\n<td style=\"height: 52px;width: 106.906px\">Fixed ratio<\/td>\n<td style=\"height: 52px;width: 344.906px\">Reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses).<\/td>\n<td style=\"height: 52px;width: 212.906px\">High response rate with pauses after reinforcement<\/td>\n<td style=\"height: 
52px;width: 267.906px\">Piecework\u2014factory worker getting paid for every x number of items manufactured<\/td>\n<\/tr>\n<tr style=\"height: 52px\">\n<td style=\"height: 52px;width: 106.906px\">Variable ratio<\/td>\n<td style=\"height: 52px;width: 344.906px\">Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses).<\/td>\n<td style=\"height: 52px;width: 212.906px\">High and steady response rate<\/td>\n<td style=\"height: 52px;width: 267.906px\">Gambling<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p id=\"fs-idp92292992\">Now let\u2019s combine these four terms. A <span data-type=\"term\">fixed interval reinforcement schedule<\/span> is when behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.<\/p>\n<p id=\"fs-idm73740432\">With a <span data-type=\"term\">variable interval reinforcement schedule<\/span>, the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food restaurant. Every once in a while someone from the quality control division comes to Manuel\u2019s restaurant. If the restaurant is clean and the service is fast, everyone on that shift earns a $20 bonus. Manuel never knows when the quality control person will show up, so he always tries to keep the restaurant clean and ensures that his employees provide prompt and courteous service. 
His productivity in providing prompt service and keeping a clean restaurant is steady because he wants his crew to earn the bonus.<\/p>\n<p id=\"fs-idm38553920\">With a <span data-type=\"term\">fixed ratio reinforcement schedule<\/span>, there are a set number of responses that must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time she sells a pair of glasses. She always tries to sell people more pairs of glasses, including prescription sunglasses or a backup pair, so she can increase her commission. She does not care whether the person really needs the prescription sunglasses; she just wants her commission. The quality of what Carla sells does not matter because her commission is not based on quality; it\u2019s only based on the number of pairs sold. This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.<\/p>\n<p id=\"fs-idp60038688\">In a <span data-type=\"term\">variable ratio reinforcement schedule<\/span>, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah\u2014generally a smart, thrifty woman\u2014visits Las Vegas for the first time. She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That\u2019s more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is $10 in the hole.
Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win $50, or $100, or even more. Because the reinforcement schedule in most types of gambling has a variable ratio schedule, people keep trying and hoping that the next time they will win big. This is one of the reasons that gambling is so addictive\u2014and so resistant to extinction.<\/p>\n<p id=\"fs-idm69667632\">In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule, the point of extinction comes very slowly, as described above. But in the other reinforcement schedules, extinction may come quickly. For example, if June presses the button for the pain relief medication before the allotted time her doctor has approved, no medication is administered. She is on a fixed interval reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn\u2019t come at the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish.<\/p>\n<div id=\"Figure06_03_Response\" class=\"bc-figure figure\">\n<figure style=\"width: 487px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160106\/CNX_Psych_06_03_Response.jpg\" alt=\"A graph has an x-axis labeled \u201cTime\u201d and a y-axis labeled \u201cCumulative number of responses.\u201d Two lines labeled \u201cVariable Ratio\u201d and \u201cFixed Ratio\u201d have similar, steep slopes. 
The variable ratio line remains straight and is marked in random points where reinforcement occurs. The fixed ratio line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a small drop in the line before it resumes its overall slope. Two lines labeled \u201cVariable Interval\u201d and \u201cFixed Interval\u201d have similar slopes at roughly a 45-degree angle. The variable interval line remains straight and is marked in random points where reinforcement occurs. The fixed interval line has consistently spaced marks indicating where reinforcement has occurred, but after each reinforcement, there is a drop in the line.\" width=\"487\" height=\"360\" data-media-type=\"image\/jpeg\" \/><figcaption class=\"wp-caption-text\">The four reinforcement schedules yield different response patterns. The variable ratio schedule is unpredictable and yields high and steady response rates, with little if any pause after reinforcement (e.g., gambler). A fixed ratio schedule is predictable and produces a high response rate, with a short pause after reinforcement (e.g., eyeglass saleswoman). The variable interval schedule is unpredictable and produces a moderate, steady response rate (e.g., restaurant manager). 
The fixed interval schedule yields a scallop-shaped response pattern, reflecting a significant pause after reinforcement (e.g., surgery patient).<\/figcaption><\/figure>\n<\/div>\n<div id=\"fs-idp12379456\" class=\"note psychology connect-the-concepts\" data-type=\"note\" data-has-label=\"true\" data-label=\"Connect the Concepts\">\n<h2 data-type=\"title\">Test Your Understanding<\/h2>\n<div class=\"textbox shaded\">\n<div id=\"h5p-160\">\n<div class=\"h5p-iframe-wrapper\"><iframe id=\"h5p-iframe-160\" class=\"h5p-iframe\" data-content-id=\"160\" style=\"height:1px\" src=\"about:blank\" frameBorder=\"0\" scrolling=\"no\" title=\"Operant Conditioning Reinforcement Schedules\"><\/iframe><\/div>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<h2 class=\"title\" data-type=\"title\">Gambling and the Brain<\/h2>\n<p id=\"fs-idm87670320\">Skinner (1953) stated, \u201cIf the gambling establishment cannot persuade a patron to turn over money with no return, it may achieve the same effect by returning part of the patron&#8217;s money on a variable-ratio schedule\u201d (p. 397).<\/p>\n<p id=\"fs-idp3278368\">Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable ratio reinforcement schedule. In fact, Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler (\u201cSkinner\u2019s Utopia,\u201d 1971). Beyond the power of variable ratio reinforcement, gambling seems to work on the brain in the same way as some addictive drugs. The Illinois Institute for Addiction Recovery (n.d.) reports evidence suggesting that pathological gambling is an addiction similar to a chemical addiction. Specifically, gambling may activate the reward centers of the brain, much like cocaine does. Research has shown that some pathological gamblers have lower levels of the neurotransmitter (brain chemical) known as norepinephrine than do normal gamblers (Roy, et al., 1988). 
According to a study conducted by Alec Roy and colleagues, norepinephrine is secreted when a person feels stress, arousal, or thrill; pathological gamblers use gambling to increase their levels of this neurotransmitter. Another researcher, neuroscientist Hans Breiter, has done extensive research on gambling and its effects on the brain. Breiter (as cited in Franzen, 2001) reports that \u201cMonetary reward in a gambling-like experiment produces brain activation very similar to that observed in a cocaine addict receiving an infusion of cocaine\u201d (para. 1). Deficiencies in serotonin (another neurotransmitter) might also contribute to compulsive behavior, including a gambling addiction.<\/p>\n<p id=\"fs-idm32812400\">It may be that pathological gamblers\u2019 brains are different from those of other people, and perhaps this difference somehow led to their gambling addiction, as these studies seem to suggest. However, it is very difficult to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the opposite direction\u2014perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers\u2019 brains. 
It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling addiction and the differences in brain chemistry.<\/p>\n<div id=\"Figure06_03_Gambling\" class=\"bc-figure figure\">\n<figure style=\"width: 488px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160111\/CNX_Psych_06_03_Gambling.jpg\" alt=\"A photograph shows four digital gaming machines.\" width=\"488\" height=\"325\" data-media-type=\"image\/jpeg\" \/><figcaption class=\"wp-caption-text\">Some research suggests that pathological gamblers use gambling to compensate for abnormally low levels of the hormone norepinephrine, which is associated with stress and is secreted in moments of arousal and thrill. (credit: Ted Murphy)<\/figcaption><\/figure>\n<\/div>\n<p>&nbsp;<\/p>\n<p><span style=\"font-family: Helvetica, Arial, 'GFS Neohellenic', sans-serif;font-size: 1.2em;font-weight: bold\">Cognition and Latent Learning<\/span><\/p>\n<\/div>\n<\/div>\n<div id=\"fs-idp21904336\" class=\"bc-section section\" data-depth=\"1\">\n<p id=\"fs-idp12478528\">Although strict behaviorists such as Skinner and Watson refused to believe that cognition (such as thoughts and expectations) plays a role in learning, another behaviorist, Edward C. <span class=\"no-emphasis\" data-type=\"term\">Tolman<\/span>, had a different opinion. Tolman\u2019s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement (Tolman &amp; Honzik, 1930; Tolman, Ritchie, &amp; Kalish, 1946). This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.<\/p>\n<p id=\"fs-idp18878912\">In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. 
He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a <span data-type=\"term\">cognitive map<\/span>: a mental picture of the layout of the maze. After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they found their way through the maze just as quickly as the comparison group, which had been rewarded with food all along. This is known as <span data-type=\"term\">latent learning<\/span>: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.<\/p>\n<div id=\"Figure06_03_Ratmaze\" class=\"bc-figure figure\">\n<figure style=\"width: 975px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2293\/2017\/08\/01160115\/CNX_Psych_06_03_Ratmaze.jpg\" alt=\"An illustration shows three rats in a maze, with a starting point and food at the end.\" width=\"975\" height=\"700\" data-media-type=\"image\/jpeg\" \/><figcaption class=\"wp-caption-text\">Psychologist Edward Tolman found that rats use cognitive maps to navigate through a maze. Have you ever worked your way through various levels on a video game? You learned when to turn left or right, move up or down. In that case you were relying on a cognitive map, just like the rats in a maze. (credit: modification of work by &#8220;FutUndBeidl&#8221;\/Flickr)<\/figcaption><\/figure>\n<\/div>\n<p id=\"fs-idp90491328\">Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi\u2019s dad drives him to school every day. 
In this way, Ravi learns the route from his house to his school, but he\u2019s never driven there himself, so he has not had a chance to demonstrate that he\u2019s learned the way. One morning Ravi\u2019s dad has to leave early for a meeting, so he can\u2019t drive Ravi to school. Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning. Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier.<\/p>\n<div id=\"fs-idm100264144\" class=\"note psychology everyday-connection\" data-type=\"note\" data-has-label=\"true\" data-label=\"Everyday Connection\">\n<div class=\"title\" data-type=\"title\">This Place Is Like a Maze<\/div>\n<p id=\"fs-idm40396976\">Have you ever gotten lost in a building and couldn\u2019t find your way back out? While that can be frustrating, you\u2019re not alone. At one time or another we\u2019ve all gotten lost in places like a museum, hospital, or university library. Whenever we go someplace new, we build a mental representation\u2014or cognitive map\u2014of the location, as Tolman\u2019s rats built a cognitive map of their maze. However, some buildings are confusing because they include many areas that look alike or have short lines of sight. Because of this, it\u2019s often difficult to predict what\u2019s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map can impact our success in navigating through the environment. 
She suggests that paying attention to specific features upon entering a building, such as a picture on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later to help find our way out of the building.<\/p>\n<\/div>\n<div id=\"fs-idm100862352\" class=\"note psychology link-to-learning\" data-type=\"note\" data-has-label=\"true\" data-label=\"Link to Learning\">\n<div class=\"textbox\">\n<p>Watch this video to learn more about Carlson\u2019s studies on cognitive maps and navigation in buildings: <a href=\"https:\/\/www.youtube.com\/watch?v=TU6tSkdbPh4\">Getting Lost in Buildings<\/a>.<\/p>\n<p><iframe loading=\"lazy\" id=\"oembed-3\" title=\"Getting Lost in Buildings\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/TU6tSkdbPh4?feature=oembed&#38;rel=0\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"fs-idm44985792\" class=\"summary\" data-depth=\"1\">\n<h1 data-type=\"title\">Summary<\/h1>\n<p>Operant conditioning is based on the work of B. F. Skinner. Operant conditioning is a form of learning in which the motivation for a behavior happens <em data-effect=\"italics\">after<\/em> the behavior is demonstrated. An animal or a human receives a consequence after performing a specific behavior. The consequence is either a reinforcer or a punisher. All reinforcement (positive or negative) <em data-effect=\"italics\">increases<\/em> the likelihood of a behavioral response. All punishment (positive or negative) <em data-effect=\"italics\">decreases<\/em> the likelihood of a behavioral response. 
Several types of reinforcement schedules are used to reward behavior, delivering reinforcement after either a set or variable period of time (interval schedules) or a set or variable number of responses (ratio schedules).<\/p>\n<\/div>\n<div id=\"fs-idm72743152\" class=\"review-questions\" data-depth=\"1\">\n<h1 data-type=\"title\">Review Questions<\/h1>\n<div id=\"fs-idm71327520\" class=\"exercise\" data-type=\"exercise\">\n<div id=\"fs-idm99950288\" class=\"solution\" data-type=\"solution\">\n<p id=\"fs-idp92528784\">\n<div id=\"h5p-161\">\n<div class=\"h5p-iframe-wrapper\"><iframe id=\"h5p-iframe-161\" class=\"h5p-iframe\" data-content-id=\"161\" style=\"height:1px\" src=\"about:blank\" frameBorder=\"0\" scrolling=\"no\" title=\"Review Questions Learning Chapter\"><\/iframe><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"fs-idm13321104\" class=\"critical-thinking\" data-depth=\"1\">\n<h1 data-type=\"title\">Critical Thinking Questions<\/h1>\n<div id=\"fs-idp93582624\" class=\"exercise\" data-type=\"exercise\">\n<div id=\"fs-idm33747648\" class=\"solution\" data-type=\"solution\">\n<div class=\"textbox shaded\">\n<details>\n<summary><span style=\"font-size: 14pt\">\u00a0 \u00a0 What is a Skinner box and what is its purpose?<\/span><\/summary>\n<p>A Skinner box is an operant conditioning chamber used to train animals such as rats and pigeons to perform certain behaviors, like pressing a lever. When the animals perform the desired behavior, they receive a reward: food or water.<\/p>\n<\/details>\n<p>&nbsp;<\/p>\n<details>\n<summary><span style=\"font-size: 14pt\">\u00a0 \u00a0 What is the difference between negative reinforcement and punishment?<\/span><\/summary>\n<p>In negative reinforcement you are taking away an undesirable stimulus in order to increase the frequency of a certain behavior (e.g., buckling your seat belt stops the annoying beeping sound in your car and increases the likelihood that you will wear your seat belt). 
Punishment is designed to reduce a behavior (e.g., you scold your child for running into the street in order to decrease the unsafe behavior).<\/p>\n<\/details>\n<p>&nbsp;<\/p>\n<details>\n<summary><span style=\"font-size: 14pt\">\u00a0 \u00a0 What is shaping and how would you use shaping to teach a dog to roll over?<\/span><\/summary>\n<p>Shaping is an operant conditioning method in which you reward closer and closer approximations of the desired behavior. If you want to teach your dog to roll over, you might reward him first when he sits, then when he lies down, and then when he lies down and rolls onto his back. Finally, you would reward him only when he completes the entire sequence: lying down, rolling onto his back, and then continuing to roll over to his other side.<\/p>\n<\/details>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"fs-idm62771840\" class=\"personal-application\" data-depth=\"1\">\n<h1 data-type=\"title\">Personal Application Questions<\/h1>\n<div id=\"fs-idp12870480\" class=\"exercise\" data-type=\"exercise\">\n<div id=\"fs-idp20064432\" class=\"problem\" data-type=\"problem\">Explain the difference between negative reinforcement and punishment, and provide several examples of each based on your own experiences.<\/div>\n<p>&nbsp;<\/p>\n<\/div>\n<div data-type=\"problem\"><\/div>\n<div id=\"fs-idm62771840\" class=\"personal-application\" data-depth=\"1\">\n<div data-type=\"problem\">Think of a behavior of yours that you would like to change. How could you use behavior modification, specifically positive reinforcement, to change your behavior? 
What is your positive reinforcer?<\/div>\n<div id=\"fs-idm101364864\" class=\"exercise\" data-type=\"exercise\">\n<h1><span style=\"font-family: Helvetica, Arial, 'GFS Neohellenic', sans-serif;font-size: 1em\">Glossary<\/span><\/h1>\n<\/div>\n<\/div>\n<div id=\"h5p-164\">\n<div class=\"h5p-iframe-wrapper\"><iframe id=\"h5p-iframe-164\" class=\"h5p-iframe\" data-content-id=\"164\" style=\"height:1px\" src=\"about:blank\" frameBorder=\"0\" scrolling=\"no\" title=\"Learning Chapter Glossary\"><\/iframe><\/div>\n<\/div>\n<h3>Media Attributions<\/h3>\n<ul>\n<li>&#8220;<a href=\"https:\/\/www.youtube.com\/watch?v=I_ctJqjlrHA\">Operant conditioning<\/a>&#8221; by <a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UC3vmOr8rVhzZKYwhS_Nl6Xg\">jenningh<\/a>. Standard YouTube License.<\/li>\n<li>&#8220;<a href=\"https:\/\/www.youtube.com\/watch?v=vGazyH6fQQ4\">BF Skinner Foundation &#8211; Pigeon Ping Pong Clip<\/a>&#8221; by <a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UC-cO_UIkJYUacckkE7LPwTA\">bfskinnerfoundation<\/a>. Standard YouTube License.<\/li>\n<li>&#8220;<a style=\"text-align: initial;font-size: 14pt\" href=\"https:\/\/www.youtube.com\/watch?v=L0XuafyPwkg&amp;feature=emb_rel_pause\">Free Shaping with an Australian CattleDog | drsophiayin.com<\/a><span style=\"text-align: initial;font-size: 14pt\"><span style=\"text-align: initial;font-size: 14pt\">&#8221; by <\/span><\/span><a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UC33WtSzCCnRaY8kQqb3hDsQ\">Sophia Yin<\/a>. 
Standard YouTube License.<\/li>\n<li>&#8220;<a href=\"https:\/\/www.youtube.com\/watch?v=TU6tSkdbPh4\">Getting Lost in Buildings<\/a>&#8221; by <a class=\"yt-simple-endpoint style-scope yt-formatted-string\" style=\"font-size: 14pt\" href=\"https:\/\/www.youtube.com\/channel\/UCFxceKSCxIpEYlBKlryO6uw\">University of Notre Dame<\/a>. Standard YouTube License.<\/li>\n<\/ul>\n<\/div>\n","protected":false},"author":103,"menu_order":3,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-89","chapter","type-chapter","status-publish","hentry"],"part":82,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/pressbooks\/v2\/chapters\/89","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/wp\/v2\/users\/103"}],"version-history":[{"count":18,"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/pressbooks\/v2\/chapters\/89\/revisions"}],"predecessor-version":[{"id":1238,"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/pressbooks\/v2\/chapters\/89\/revisions\/1238"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/pressbooks\/v2\/parts\/82"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/pressbooks\/v2\/chapters\/89\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/wp\/v2\/media?parent=89"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/pressbooks\/v2\/chapter-type?post=89"},{"taxonomy":"contributor","embeddable":true,"h
ref":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/wp\/v2\/contributor?post=89"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/psychologyh5p\/wp-json\/wp\/v2\/license?post=89"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}