Part 4 - APPLIED BEHAVIOR ANALYSIS
Behavioral Analysis
The first step in applied behavior analysis is to analyze the problem. The analysis must be behavioral; that
is, one must state the problem in terms of behaviors, controlling stimuli, reinforcers, punishers, or
observational learning...the concepts we have covered in this chapter. Antecedent and consequent stimuli
must be identified. After this analysis, one can make an educated guess about which intervention strategy
or "treatment" might be best.
Baselining
The next step, after specifying a behavior to be changed, is to measure its frequency of occurrence before
attempting to change it. Baselining is keeping a careful record of how often a behavior occurs, without
trying to alter it. The purpose of baselining is to establish a point of reference, so one knows later if the
attempt to change behavior had any effect. A genuine behavior change (as opposed to a random variation
in the frequency of a behavior) should stand out sharply from the baseline rate of the behavior.
What is baselining? What is its purpose? How long should baselining continue, as a rule?
As a general rule, baselining should continue until there is a definite pattern. If the frequency of the
behavior varies a lot, baseline observations should continue for a long time. If the behavior is produced at
a steady rate, the baseline observation period can be short.
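The rule above can be illustrated with a small sketch, not from the text: record the behavior's daily frequency, and keep baselining until the recent observations settle into a steady rate. The function name, the five-day window, and the 10% variability cutoff are all assumptions chosen for the example.

```python
# Illustrative baseline-stability check. The window size and cutoff are
# arbitrary example values, not standards from behavior analysis.
from statistics import mean, stdev

def baseline_is_stable(daily_counts, window=5, max_cv=0.10):
    """Return True if the last `window` observations form a steady rate.

    Uses the coefficient of variation (stdev / mean) of the most recent
    observations as a rough measure of variability.
    """
    if len(daily_counts) < window:
        return False  # not enough data yet; keep baselining
    recent = daily_counts[-window:]
    m = mean(recent)
    if m == 0:
        return all(c == 0 for c in recent)
    return stdev(recent) / m <= max_cv

# A steady rate lets the baseline period end quickly...
print(baseline_is_stable([12, 11, 12, 13, 12]))   # True
# ...while a highly variable behavior calls for more observation.
print(baseline_is_stable([2, 15, 4, 20, 1]))      # False
```

The design choice mirrors the text: high variability extends the observation period, while a steady rate permits a short one.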
While taking baseline measurements of an operant rate (the frequency of some behavior), an applied
behavior analyst should pay careful attention to antecedents, stimuli that come before the behavior. As
we saw earlier, discriminative stimuli (both S+s and S-s) act as if they control behavior, turning it on or
off.
In what important respect was Lindsley's model incomplete?
During the baseline period, one should keep a record of potentially important antecedent stimuli, as well
as the record of the frequency of the behavior itself. One weakness of Lindsley's Simplified Precision
Model (previous page) was that it did not mention antecedents. It only mentioned changing consequences
of a behavior. Often the relevance of antecedents will be obvious, once they are noticed. For example, if a
child's misbehavior occurs every day when the child is dropped off at a nursery school, a good behavior
analyst will target this period of the day and try to arrange for a reinforcing event to occur if the child
remains calm following the departure of the parent at such a time.
Using Reinforcement
In some cases, mere measurement and informative feedback are not enough. A direct form of intervention
is required. Therefore the third part of Lindsley's method consists of arranging a contingency involving
reinforcement or punishment.
Green and Morrow (1974) offer the following example of Lindsley's Simplified Precision Model in
action.
Jay, a twenty-year-old spastic, retarded man urinated in his clothes. A physician had ruled out any organic
basis for this behavior. Recently Jay was transferred from a state hospital to a nursing home.... The
nursing home director said that Jay must leave if he continued to be enuretic. He agreed, with reservation,
to let [a student] try a program to eliminate the wetting behavior.
Conveniently, the nurses had been routinely recording the number of times Jay wet his clothes each
day....Jay's daily rate of wets was...plotted on a standardized graph form...
Questionable punishment procedures, insisted upon by the nursing home director, were used in the first
two modification phases. First, Jay was left wet for thirty minutes following each wet. Second, Jay was
left in his room for the remainder of the day after he wet once. Throughout both punishment phases the
median rate remained unchanged.
What to do? Lindsley's step #4 is "if at first you don't succeed, try and try again with revised procedures."
That is a rather strange sounding rule, but it proves important. Behavior analysts are not automatically
correct in their first assessment of a situation. The first attempt at behavior change may not work. Like
other scientists, they must guess and test and try again. In the case of Jay, behavior analysts decided to
stop the "questionable punishment procedures" and try positive reinforcement instead.
How does the story of "Jay" illustrate step #4 of Lindsley's procedure?
In a fourth phase, following consultation by the student with the junior author, and with the nursing home
director's reluctant consent, Jay was given verbal praise and a piece of candy each time he urinated in the
toilet. No punishment was used. Candy and praise were chosen as consequences after discussion with the
nursing home personnel disclosed what Jay seemed to "go for." The procedure essentially eliminated
wetting.
In an "after" phase (after reinforcements were discontinued), the rate remained at zero except for one
lapse. Presumably, approved toilet behavior and nonwetting were now maintained by natural
consequences, such as social approval and the comfort of staying dry. (Green & Morrow, 1974, p. 47)
Note the "after" phase. Proper intervention procedures also include a follow-up to make sure the change is
permanent.
Later, he was returned to a special education classroom. Low probability behavior was classwork, and
high probability was escaping alone to a corner of the schoolyard. A contingency was established in
which Burton was allowed to leave the class after completion of his assignment. Later, school attendance
became a high probability behavior. At that point, he was allowed to attend school only contingent upon
more acceptable behavior at home. (Tharp and Wetzel, 1969, p.47)
became very meaningful to her. When Rena's behavior was not satisfactory at school, this reinforcement
did not occur.
The plan took effect rather rapidly, and before long Rena was no problem at school. And, as hoped, her
behavior at home also improved.
reinforcer on hand; you can give reinforcements immediately (in tokens) and the patient can "spend" the
tokens later at a store in the hospital.
the experimental situation with this behavior already in its repertoire. That is why it is called an entering
behavior.
Rule #3 says to devise a series of small steps leading from the entering behavior (holding the Frisbee in
his mouth) to the target behavior (snatching the Frisbee from the air). Finding such a sequence of steps is
the trickiest part of shaping. How can you get from "here" to "there"? One approach is to toss the Frisbee
about a foot in the air toward the dog, hoping it will perform the skill so you can reinforce it.
Unfortunately, this probably will not work. The dog does not know what to do when it sees the Frisbee
coming, even if the dog has chewed on it in the past. It hits the dog on the nose and falls to the ground.
This brings us to rule #4. If a step is too large (such as going directly from chewing the Frisbee to
snatching it out of the air) you must break it into smaller steps. In the Frisbee-catching project, a good
way to start is to hold the Frisbee in the air. The dog will probably rise up on his hind legs to bite it. You
let the dog grab it in his mouth, then you release it. That is a first, simple step. Next, you release the
Frisbee a split second before the dog grabs it. If you are holding the Frisbee above the dog, you might
drop it about an inch through the air, right into the dog's mouth.
Now the most critical part of the shaping procedure takes place. You gradually allow the Frisbee to fall a
greater and greater distance before the dog bites it. You might start one inch above the dog's mouth, work
up to two inches, then three, and so on, until finally the dog can grab the Frisbee when it falls a whole
foot from your hand to the dog's mouth. (For literate dogs outside the U.S. and Britain, use centimeters
and meters.) Keep rule #4 in mind; if the dog cannot grab the Frisbee when it falls 8 inches, you go back
to 6 inches for a while, then work back to 8, then 10, then a foot.
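The rule-#4 logic just described, advance the criterion after a success and back off to a smaller step after a failure, can be sketched in a few lines. The step sizes, starting distance, and function name are illustrative assumptions, not values from the text.

```python
# Minimal sketch of an adaptive shaping schedule: raise the criterion
# (here, the Frisbee's drop distance) after success, and shrink the
# step after failure. Units and step size are invented for the example.

def next_distance(current, succeeded, step=2, minimum=1):
    """Return the next drop distance (in inches) for a shaping trial."""
    if succeeded:
        return current + step            # advance toward the target behavior
    return max(minimum, current - step)  # step too large: break it down

distance = 1
for outcome in [True, True, True, False, True, True]:
    distance = next_distance(distance, outcome)
print(distance)  # 1 -> 3 -> 5 -> 7 -> back to 5 -> 7 -> 9
```

Note how the single failure in the trial sequence temporarily lowers the criterion, just as the text recommends dropping back from 8 inches to 6 before working up again.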
Eventually, if the dog gets into the spirit of the game, you should be able to work up to longer distances.
Once the dog is lunging for Frisbees that you flip toward it from a distance of a few feet, you are in
business. From there to a full-fledged Frisbee retrieval is only a matter of degree.
Rule #5 says to have reinforcers available in small quantities to avoid satiation. Satiation (pronounced
SAY-see-AY-shun) is "getting full" of a reinforcer—getting so much of it that the animal (or person) no
longer wants it. If satiation occurs, you lose your reinforcer, and your behavior modification project
grinds to a halt. Suppose you are using Dog Yummies to reinforce your Frisbee-catching dog. If you use
50 yummies getting it up to the point where it is catching a Frisbee that falls eight inches, you will
probably not get much further that day. The dog is likely to decide it has enough Dog Yummies and crawl
off to digest the food.
Why is satiation unlikely to be a problem in this situation?
Actually, dogs respond well to social reinforcement (praise and pats), and that never gets old to a loving
dog, so dog trainers usually reserve their most powerful reinforcers for occasional use. Retrieval games
are themselves reinforcing to many dogs. When I took a dog obedience course, the trainer used retrieval of
a tennis ball to reinforce her dog at the end of a training session. That was a fine example of the Premack
Principle in action because a preferred behavior (retrieving a tennis ball) was used to reinforce non-
preferred behaviors (demonstrating obedience techniques).
Differential Reinforcement
Differential reinforcement is selective reinforcement of one behavior from among others. Unlike shaping,
differential reinforcement is used when a behavior already occurs and has good form (does not need
shaping) but tends to get lost among other behaviors. The solution is to single out the desired behavior
and reinforce it.
What is differential reinforcement? How is it distinguished from shaping? What is a "response class"?
Differential reinforcement is commonly applied to a group of behaviors. For example, if one was working
in a day care center for children, one might reinforce any sort of cooperative play, while discouraging any
fighting. The "cooperative play" behaviors would form a group singled out for reinforcement. Such a
group is labeled a response class. A response class is a set of behaviors—a category of operants—singled
out for reinforcement while other behaviors are ignored or (if necessary) punished. The only limitation on
the response class is that the organism being reinforced must be able to discriminate it. In the case of
preschoolers at a day care center, the concept of cooperative play could be explained to them in simple
terms. Children observed to engage in cooperative play would then be reinforced in some way that
worked, for example, given praise or a star on a chart or more time to do what they wanted.
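The day-care example can be reduced to a toy sketch: a response class is just a set of behaviors the reinforcing agent (and the learner) can discriminate; members get a reinforcing consequence, everything else is ignored. All the behavior names below are invented for illustration.

```python
# Toy model of differential reinforcement over a response class.
# Behaviors in the reinforced class receive a consequence; other
# behaviors are simply ignored, as the text describes.

COOPERATIVE_PLAY = {"sharing toys", "taking turns", "building together"}

def respond(behavior, response_class=COOPERATIVE_PLAY):
    if behavior in response_class:
        return "praise"  # reinforce any member of the class
    return None          # ignore behaviors outside the class

print(respond("taking turns"))    # praise
print(respond("grabbing a toy"))  # None
```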
How did Pryor reinforce creative behavior?
Karen Pryor is a porpoise trainer who became famous when she discovered that porpoises could
discriminate the response class of novel behavior. Pryor reinforced two porpoises at the Sea Life Park in
Hawaii any time the animals did something new. The response class, in this case, was any behavior the
animal had never before performed. Pryor set up a contingency whereby the porpoise got fish only for
performing novel (new) behaviors. At first this amounted to an extinction period. The animals were
getting no fish.
How did the porpoises' natural reaction to extinction help Pryor?
As usual when an extinction period begins, the porpoises showed extinction-induced resurgence. In other
words, the variety of behavior increased, and the porpoises showed a higher level of activity than normal.
They tried their old tricks but got no fish. Then they tried variations of old tricks. These were reinforced
if the porpoise had never done them before. The porpoises caught on to the fact that they were being
encouraged to do new and different things. One porpoise "jumped from the water, skidded across 6 ft of
wet pavement, and tapped the trainer on the ankle with its rostrum or snout, a truly bizarre act for an
entirely aquatic animal" (Pryor, Haag, & O'Reilly, 1969). The animals also emitted four new behaviors—
the corkscrew, back flip, tailwave, and inverted leap—never before performed spontaneously by
porpoises.
Eventually a simplified form of this incentive did find a home in the auto industry, and it even became
part of union contracts. In an echo of the above story (which I heard as an undergraduate in the early
1970s) news articles in 1997 reported that workers in a stamping plant at Flint, Michigan were going
home after only 4 hours instead of the usual 8. Apparently the union had negotiated a production quota
based on the assumption that certain metal-stamping machines were capable of stamping out 5 parts per
hour, but actually the machines were capable of 10 per hour. The workers speeded them up to 10 parts per
hour and met the quota specified in their union contract within 4 hours. GM decided to eliminate the "go
home early" policy, and this was one issue in a 1998 strike against General Motors. In the end, a modified
version of the policy with a higher rate of production was re-affirmed in a new contract.
What is reinforced by an hourly wage? By piecework? What is the disadvantage of piecework?
If you think about it, an hourly wage reinforces slow behavior. The less energy a worker puts into the job,
the more money the worker receives per unit of energy expended. By contrast, when people are paid
according to how much they produce (so-called "piecework" systems) they work very quickly to
maximize their gain. The obvious disadvantage is that workers who are rushing to complete their quota
might produce a poor quality product, or endanger themselves, unless steps are taken to maintain quality
control and safety on the job.
Using Punishment
Punishment is the application of a stimulus after a behavior, with the consequence that the behavior
becomes less frequent or less likely. Most people assume the stimulus has to be unpleasant (aversive), but
that is not always the case. Any stimulus that has the effect of lowering the frequency of a behavior it
follows is a punisher, by definition, even if it does not seem like one.
Electric shock is often the most effective punishing stimulus. Perhaps because electricity is an unnatural
stimulus, or because it disrupts the activity of nerve cells, organisms never become accustomed to it, and
they will do almost anything to avoid it. Whatever the reason, electric shock "penetrates" when other
punishers fail to work.
What is punishment?
Treatment of head-banging
Whaley and Mallott (1971) tell of a nine-year-old, mentally retarded boy who caused himself serious
injury by head-banging. The boy had to be kept in a straitjacket or padded room to keep him from hurting
himself. This prevented normal development; he acted more like a three-year-old than a nine-year-old.
Left unrestrained in a padded room, the boy banged his head up to a thousand times in an hour.
Something had to be done.
How did punishment help the child who banged his head?
The researchers decided to try a punishment procedure. They placed shock leads (electrodes) on the boy's
leg, strapping them on so he could not remove them. Each time he banged his head, they delivered a mild
shock to his leg.
The first time he banged his head and was given a shock, Dickie stopped abruptly and looked about the
room in a puzzled manner. He did not bang his head for a full three minutes, and then made three contacts
with the floor in quick succession, receiving a mild shock after each one. He again stopped his head-
banging activity for three minutes. At the end of that time he made one more contact, received a shock,
and did not bang his head for the remainder of the one-hour session. On subsequent sessions, after a
shock was given the first time Dickie banged his head, he abandoned this behavior. Soon the head
banging had stopped completely and the mat was removed from the room. Later, a successful attempt was
made to prevent Dickie from banging his head in other areas of the ward.
Currently Dickie no longer needs to be restrained or confined and has not been observed to bang his head
since the treatment was terminated; therefore, in his case, punishment was a very effective technique for
eliminating undesirable behavior. The psychologist working with Dickie stressed that the shock used was
mild and, compared to the harm and possible danger involved in Dickie's head banging, was certainly
justified (Whaley & Mallott, 1971).
Twenty years after this technique was developed, it was still being debated. It worked and spared the
child further self-injury, plus it stopped a destructive habit that might have persisted for years if left
unchecked. However, people who regarded any use of electric shock with humans as unacceptable
attacked this technique as cruel.
What are some negative side effects of punishment? What typically happens when a human tries to
punish a cat?
Dunbar (1988) noted that if a cat owner sees a cat performing a forbidden act such as scratching the
furniture, punishment is not usually effective. If the human punishes the cat, the cat merely learns to
avoid the human (so the human becomes an S-). Typically the cat will continue to perform the same
forbidden act when the human is not present. If the human discovers evidence of a cat's forbidden
behavior upon coming home, and punishes the cat, the cat learns to hide when the human comes home.
This does not mean the cat feels "guilt." It means the cat has learned that the human does unpleasant
things when first arriving home. The cat does not associate punishment with the forbidden behavior,
which typically occurred much earlier.
What is "punishment from the environment" and how can it be used to keep cats off the kitchen counter?
If punishment from a human does not work very well, a good alternative is punishment from the
environment. It works with all animals, even cats. Dunbar points out, "A cat will only poke its nose into a
candle flame once." For similar reasons, "a well-designed booby trap usually results in one-trial learning."
For example, a cat can be discouraged from jumping on a kitchen counter by arranging cardboard strips
that stick out about 6 inches from the counter, weighted down on the counter with empty soda cans. When
the cat jumps to the counter it lands on the cardboard. The cans go flying up in the air, and the whole kit
and caboodle crashes to the floor. The cat quickly learns to stay off the counter. Meanwhile the cat does
not blame this event on humans, so the cat does not avoid humans, just the kitchen counter.
What conditional response bedevils cat owners? How do automatic gadgets help?
Sometimes cats get into the nasty habit of defecating or urinating on a carpet. Once the problem starts, it
is likely to continue, because even enzyme treatments designed to eliminate the odor do not eliminate all
traces of it, and the odor "sets off" the cat in the manner of a conditional response. The behavior occurs
when no human is present, and punishment by a human does not deter it, for reasons discussed above
(punishment comes too late and the animal fails to connect the punishment with the behavior).
What to do? The problem is urgent and motivates online buying, so entrepreneurs have responded.
Gadgets designed to deter this behavior typically combine a motion sensor with a can of pressurized air or
a high-frequency audio alarm. The blast of air (or alarm) is triggered by the presence of the cat in the
forbidden area. According to reviews by troubled cat owners at places like amazon.com, these devices
sometimes work when all else has failed. They are also a good example of punishment from the
environment.
What are several reasons dog trainers recommend against harsh punishment?
Dog trainers also recommend not using harsh punishment. Some dogs will "take it," but some will
respond with an active defense reflex that could involve biting, even if the dog is normally friendly.
(Terrier breeds are particularly prone to this problem, and a usually-friendly dog can surprise a child with
a vicious response to being harassed.) Moreover, punishment is unnecessary with dogs. Dogs have been
bred to desire the approval of humans. They respond very well to positive reinforcement as simple as a
word of praise.
When punishment is used with any pet or domesticated animal, it should be as mild as possible. For
example, if cat owners have a kitty that likes to wake them up too early in the morning, the simplest and
gentlest approach is negative punishment or response cost. Simply put the kitty out of the room. If that
fails, a squirt gun works. Gentle methods are to be preferred with all animals. Trainers who handle wild
horses no longer "break" them, the way they did a century ago. Modern horse trainers win horses over
with gentle and consistent positive reinforcement. It works just as well and results in a horse that enjoys
human company.
How should cat owners respond to unwanted morning awakenings?
Is electric shock punishment ever justified? Some people argue against all use of electric shock, in
principle, as if shock is always inhumane. But electric shocks come in all sizes. Small shocks do not cause
physical injury, and they are very effective punishers that discourage a repetition of harmful behavior.
Sometimes this is necessary and desirable.
What is an example of "effective and humane" use of electric shock?
In the case of electric fences used by ranchers, for example, shock is effective and humane. You can
touch an electric fence yourself, and although you will get a jolt, you will not be harmed. But even large
animals like horses will not test an electric fence more than a few times. Then they avoid it. Avoidance
behaviors are self-reinforcing, so large animals will continue to avoid a thin electric fence wire, even if
the electricity is turned off. They do not "test" it the way they test non-electric fences (often bending or
breaking them in the process). Electric fences also allow farmers and ranchers to avoid using barbed wire,
which can injure animals severely.
How could such a pattern occur? Consider these facts. The average parent is very busy, due partly to
having children. The parent enjoys peace and quiet when children are being good or playing peacefully.
Therefore, when children are well behaved, parents tend to ignore them. By contrast, when children
misbehave, parents must give attention. Parents must break up fights, prevent damage to furniture or walls
or pets, and respond to screams or crying. Most children are reinforced by attention. So there you have all
the necessary ingredients for the punishment trap. Children learn to misbehave in order to get attention.
One student noticed the misbehavior-for-attention pattern while visiting a friend:
I was at my friend's trailer one weekend visiting with her and her small daughter. I played with Jessie, the
little girl, for a while. Then Dee-Ann and I sat down to talk, leaving Jessie to play with her toys. She
played quietly for a while, but all of a sudden she got up, stuck her hand in the potted plant, and threw dirt
on the floor. Dee-Ann quickly scolded her and got her to play with her toys again. Dee-Ann then sat back
down and continued with our conversation. In a few minutes, Jessie was throwing dirt again. Again, Dee-
Ann got her to play with her toys, and then sat back down. But in a few minutes Jessie was throwing dirt.
Why did little Jessie throw dirt?
Dee-Ann could not understand why Jessie was acting like that. I then remembered the story about the
parents hitting the kids for messing with things, but the kids wanting attention and doing it more often. So
I thought maybe Jessie was being reinforced for throwing dirt because each time she threw dirt, Dee-
Ann's attention reverted to her. I explained this to Dee-Ann, and the next time Jessie messed with the
plant, Dee-Ann simply ignored her, picked up the plant and sat it out of Jessie's reach. That ended the
dirt-throwing problems. [Author's files]
Little Jessie probably got lots of loving attention when her mother was not engrossed in conversation with
a friend. But some children receive almost no attention unless they are "bad." In such cases, serious
behavior problems may be established. One student remembers this from her childhood:
When I was a little girl, I always told lies, even if I did not do anything wrong... I think the only reason I
lied was to get attention, which my parents never gave me. But one thing puzzles me. Why would I lie
when I knew my dad was going to spank me with a belt? It really hurt. [Author's files]
How can a stimulus intended as punishment actually function as a reinforcer, in this type of situation?
The answer to this student's question, probably, is that she wanted attention more than she feared pain.
Any attention, even getting hit with a belt, is better than being totally ignored. Similar dynamics can occur
in a school classroom. One of my students told about a teacher in elementary school who wrote the names
of "bad" children on the board. Some children deliberately misbehaved in order to see their names on the
board.
How can a parent avoid the punishment trap?
The solution? It is contained in the title of a book (and video series) called Catch Them Being Good.
Parents should go out of their way to give sincere social reinforcement (love, attention, and appreciation)
when it is deserved. When children are playing quietly or working on some worthy project, a parent
should admire what they are doing. When they are creative, a parent should praise their products. When
they invent a game, a parent should let them demonstrate it. If you are a parent with a child in a grocery
store, and you observe other children misbehaving, point this out to your own children and tell them how
grateful you are that they know how to behave in public. Don't wait for them to misbehave. Point out how
"mature" they are, compared to those kids in the next aisle who are yelling and screaming.
Sincere social reinforcement of desirable behavior is a very positive form of differential reinforcement. It
encourages a set of behaviors that might be called sweetness. With such a child, the occasional reprimand
or angry word is genuinely punishing. This reduces the overall level of punishment considerably. A child
who loves you and trusts you and looks forward to your support may be genuinely stricken by harsh
words. A loving parent realizes this and adopts a gentler approach, which is usually adequate when a
child cares about pleasing the parent.
What did Tanzer write about in the book titled Your Pet Isn't Sick?
Pets are also capable of something like the punishment trap. They, too, can learn to misbehave or pretend
to be ill, in order to get attention from humans.
One veterinarian saw so many malingering animals trying to get attention by acting ill, coughing, or
limping, that he wrote a book called Your Pet Isn't Sick [He Just Wants You to Think So] (Tanzer, 1977).
It explained how owners who accidentally reinforced symptoms of illness caused pet problems. Owners
will run over to a pet and comfort it, if it makes a funny noise like a cough. Soon the pet would be
coughing all the time. Tanzer found that if the animals were not reinforced for the symptoms (after a
thorough check to rule out genuine problems) the symptom would go away.
What did one vet call the "single most common problem" he encountered?
Another vet specialized in house calls so he could see a pet misbehave in context. He said unwitting
reinforcement of undesired behavior was the single most common problem he encountered. The solution
was the same as with many child behavior problems: "catch them being good." Praise the pet and give it
lots of love when it acts healthy; ignore it when it starts coughing or limping. Usually the problem goes
away. Of course, first you have to rule out genuine medical problems.
Response cost consists of removing a reinforcing stimulus. This has the effect of punishing a behavior, making the
behavior less frequent.
On the internet, several discussion groups cater to rabbit owners. Here is an example from one of them in
which the solution to a problem involved response cost.
[An American list member writes:]
I have a 1 1/2 year old French lop and for his entire 1 1/2 years he has been obsessed with peeing on the
bed. We discovered that if we kept the bed covered with a tarp, it would usually deter him from the bed,
though not always. Up until about a month ago, we thought we had him pretty well trained, with only a
few infrequent accidents. But then, my husband and I got married and Jordy (we call him Monster) and I
moved in with my husband. He seems to have adjusted quite well, with the exception of his bed habit...
How was response cost used with a rabbit?
...Please help us, we want to keep Jordy as happy as possible, but we can't keep washing bedding every
day.
[A British list member responded:]
We have two "outdoor" rabbits that come inside for about an hour a day. The older (male) rabbit used to
pee on the bed. Whenever he did this he would immediately be put back in the hutch outside. After about
10 times of peeing on our bed, he learnt that if he peed, quite simply, he wouldn't be able to play with
"Mum" and "Dad." We haven't had a problem since then. I imagine that if Jordy is put outside of the
bedroom and denied affection for the rest of the evening he'll learn pretty quickly. Good luck!
This is a fine example of response cost, because the rabbit's behavior was punished by removing a
reinforcing stimulus. In this case the reinforcing stimulus was being allowed in the house. This stimulus
was removed. Eventually, after about 10 repetitions, he learned the consequence of his behavior, and the
problem behavior was eliminated.
B.F. Skinner
The powers of daily habit can jumpstart important life activities. Books about studying in college
typically advise that students set aside a particular time and place for study. The familiar time and
location trigger the studying behavior. That is important with studying because getting started is half the
battle. Usually studying is not too painful once one gets started. Problems occur when a person never gets
started or procrastinates until there is too much work for the remaining time.
How did B.F. Skinner apply this principle to increase his writing productivity?
B.F. Skinner, whose research on operant conditioning underlies virtually all of the second half of this
chapter, used stimulus control to encourage his scholarly work. He followed a rigid daily schedule. At 4
a.m. he got up and ate breakfast, then he wrote for about five hours. Time and the environment of his
home office served as discriminative stimuli to get him started on his writing. Around 10 a.m. Skinner
took a walk down to campus (Harvard) to meet his morning classes. Walking probably became a stimulus
for mulling over his lectures for the day. In the afternoon he attended meetings and scheduled
appointments. With this routine he was always able to put in a few good hours of writing every day
during his prime time, early morning, while also scheduling adequate time for his other activities.
antecedent stimuli should also be observed; they are often important in behavior analysis. Baseline
measurement may itself produce behavior change. When this is done deliberately (for example, to help
people stop smoking) it is called self-monitoring.
The Premack principle suggests that a preferred behavior can be used to reinforce less likely behaviors.
Shaping is a technique that employs positive reinforcement to encourage small changes toward a target
behavior. Prompting and fading is a technique in which a behavior is helped to occur, then help is
gradually withdrawn or faded out until the organism is performing the desired behavior on its own.
Differential reinforcement is the technique of singling out some behaviors for reinforcement, while
ignoring other behaviors.
Negative reinforcement works wonders when employees are given "time off" as a reinforcer for good
work. Babies are master behavior modifiers who use negative reinforcement to encourage nurturing
behavior in parents.
Punishment is effective in certain situations. Electric fences are arguably more humane than alternatives
such as barbed wire for horses and other grazing animals. In human child-rearing, parents must beware of
the "punishment trap," which occurs when children are ignored until they misbehave. The solution is to
"catch them being good." Animals can also learn to misbehave or act ill, if it gets them attention. They,
too, respond better to kindness than punishment.
Analysis of antecedents can prove helpful in changing behavior. Dieters are often advised to avoid eating
in front of the TV, so television does not become an S+ for eating. Time of day can be used as a
discriminative stimulus for desirable behaviors such as studying. B.F. Skinner used this technique when
he set aside a certain time every morning for writing.