
Behavior Analysis: Research and Practice, 2019, Vol. 19, No. 2, 202–212
© 2018 American Psychological Association
2372-9414/19/$12.00  http://dx.doi.org/10.1037/bar0000108

Prediction and Control of Operant Behavior: What You See Is Not All There Is

Mark E. Bouton, University of Vermont
Bernard W. Balleine, University of New South Wales

Prediction and control of operant behavior are major goals of behavior analysis. We suggest that achieving these goals can benefit from doing more than identifying the three-term contingency among the behavior, its setting stimulus, and its consequences. Basic research now underscores the idea that prediction and control require consideration of the behavior's history. As one example, if an operant is a goal-directed action, it is controlled by the current value of the reinforcer, as illustrated by the so-called reinforcer devaluation effect. In contrast, if the behavior is a habit, it occurs automatically, without regard to the reinforcer's value, as illustrated by its insensitivity to the reinforcer devaluation effect. History variables that distinguish actions and habits include the extent of their prior practice and their schedule of reinforcement. Other operants can appear to have very low or zero strength. However, if the behavior has reached that level through extinction or punishment, it may precipitously increase in strength by changing the context, allowing time to pass, presenting the reinforcer contingently or noncontingently, or extinguishing an alternative behavior. Behaviors that are not suppressed by extinction or punishment are not affected the same way. When predicting the strength of an operant behavior, what you see is not all there is. The behavior's history counts.

Keywords: action, habit, extinction, punishment, behavioral history

This article was published Online First June 18, 2018. Mark E. Bouton, Department of Psychological Science, University of Vermont; Bernard W. Balleine, Decision Neuroscience Laboratory, School of Psychology, University of New South Wales. Preparation of this article was supported by National Institutes of Health Grant RO1 DA 033123 to Mark E. Bouton, and by a grant from the Australian Research Council (DP150104878) and a Senior Principal Research Fellowship from the National Health and Medical Research Council of Australia (GNT1079561) to Bernard W. Balleine. We thank Eric Thrailkill for comments. Correspondence concerning this article should be addressed to Mark E. Bouton, Department of Psychological Science, University of Vermont, 2 Colchester Avenue, Burlington, VT 05405-0134. E-mail: mark.bouton@uvm.edu

A major goal of behavior analysis is the prediction and control of operant behavior. Accordingly, since at least Skinner (1938, 1969), and extending through today's use of functional analysis (e.g., Hanley, Iwata, & McCord, 2003), it has been common to emphasize the three-term contingency in the analysis of an operant response: that is, (1) the occasion within which the response occurs, (2) the response itself, and (3) the reinforcing consequences of the response. Various terms for the elements of the contingency have been used over the years: for example, the occasion has been variously called a discriminative stimulus, state, establishing operation, or context; the response an operant or action; and the reinforcer a reward, consequence, or outcome. But, whatever terms are used, the approach has implied that to predict and control a response it is sufficient to identify its specific setting conditions—its specific occasion—and to develop the means to selectively apply reinforcement. Furthermore, although it has been conceded that identifying and applying these key events might prove challenging in practice, from this perspective it is not a problem in principle: An important assumption of the behavior analytic tradition is that each term of the contingency is open to direct observation. Therefore, accurately describing the situation, the response, and its consequences will be sufficient to specify the factors that control any operant.
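
Stated in programmatic terms, the assumption amounts to something like the following sketch (the class and variable names are illustrative only and are not drawn from the behavior-analytic literature): an operant is treated as a fully observable triple, and the response is predicted from the currently observable occasion alone.

from dataclasses import dataclass

# Illustrative sketch only: the three observable terms of the contingency.
@dataclass
class ThreeTermContingency:
    occasion: str     # discriminative stimulus, state, establishing operation, or context
    response: str     # the operant response itself
    consequence: str  # the reinforcer (reward, consequence, or outcome)

def predict_response(known: ThreeTermContingency, current_occasion: str) -> bool:
    """A 'what you see is all there is' prediction: expect the response whenever
    the occasion that has set the stage for reinforcement is present."""
    return current_occasion == known.occasion

lever_pressing = ThreeTermContingency("houselight on", "lever press", "food pellet")
print(predict_response(lever_pressing, "houselight on"))   # True
print(predict_response(lever_pressing, "houselight off"))  # False

The sections that follow describe cases in which a prediction of this kind, based only on the currently observable terms, fails because it ignores the behavior's history.
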
Fundamentally, this means that if a behavior analyst wants to increase or decrease the strength of some specific operant, she or he needs to find its antecedents or consequences and manipulate them accordingly. But basic research from the animal learning laboratory now provides additional information that might help predict what will be effective. Two operants that look very much alike may actually have very different properties; depending on their learning histories, they may be controlled by different kinds of event, or influenced quite differently by the same event. In this article, we consider two sets of examples drawn from the current literature. In the first, we show that two seemingly indistinguishable operants that occur at a stable rate might have a different status as either goal-directed actions or habits. This influences the variables or events that control them. In the second, we consider what we call silent operants, which have very low or zero strength. Depending on their actual reinforcement histories, they can respond quite differently to the same event. Both examples violate what Daniel Kahneman (2011) has called the "what you see is all there is" heuristic used in human decision making. When it comes to operant behavior, what you see is not all there is. Knowledge of a behavior's history is useful to predict it and control it accurately.

Actions Versus Habits

Historically, the three-term contingency was developed to describe responses that, at that time, were commonly called habits (e.g., Hull, 1943). In accord with the then-dominant neobehaviorist position, the performance of any operant was thought to reflect a stimulus–response association that was strengthened by the selective application of a reinforcer. In the paradigm case, a hungry rat trained to press a lever for food was argued to do so in the presence of various situational, contextual, or discriminative cues (serving as the S) with which the lever-press response (the R) became associated due to the delivery of food (the reinforcer) after the response. Such a description accords with the argument that merely observing the rat lever pressing provides the basis for an adequate description of the response and of the contingency controlling that response.

Almost immediately, a number of findings began to question the sufficiency of this account. Perhaps the earliest indication that something more than the simple situation–response association was controlling performance came from incentive contrast studies. Generally, these studies demonstrated that rats were more concerned with the relative value of the consequences of their actions than they should be if the consequence merely served a reinforcing function. Early studies reported that suddenly changing the value of an earned outcome, for example, changing the food reward from banana to lettuce in monkeys (Tinklepaugh, 1928), caused an immediate and powerful change in behavior from the pursuit of food to frustration and aggression. In one of the more famous studies involving rats running in a straight runway, a sudden reduction in the amount of food earned by running to the goal box caused an immediate reduction in performance of the response on subsequent trials to a level that was below that of animals whose responding was maintained by that reduced amount of food from the outset, the so-called negative contrast effect (Crespi, 1942). Although reinforcement theories could address the overall reduction in performance, the rapidity with which the reduction occurred and the actual depression in responding relative to controls—the contrast effect—was difficult to explain in reinforcement terms. What these results suggested was that rather than merely being controlled by the situational cues—or the occasion—within which the operant was performed, it was also influenced by incentive motivation, which required mediation by some sort of representation or expectation of the consequence that the operant produced (e.g., compare Spence, 1956, and Tolman, 1932).

There have been numerous similar findings in mazes and runways supporting this general claim (see Flaherty, 1996, for a review), as well as studies showing that even lever pressing in rats can be controlled by a representation of the reward value of the food. Adams and Dickinson (1981), for example, reported that hungry rats trained to lever-press for sugar solution were strongly sensitive to changes in the value of the sugar when it was altered offline by conditioned taste aversion. In this study, acquisition of lever pressing for sugar and then taste aversion conditioning to sugar each occurred in separate phases. In a final test, the rats were returned to the lever-press apparatus and lever-pressing performance was assessed in extinction. (Note that conducting the test in extinction prevented the opportunity to pair the lever-press response
directly with the now-devalued sugar in experience.) Importantly, the rats with the new taste aversion to sugar nevertheless reduced their lever pressing immediately relative to a group that had received the sugar and the illness unpaired. This is the so-called reinforcer devaluation effect. As a consequence of this finding, the authors concluded that lever pressing in rats, as implied by the performance observed in mazes and runways, is goal-directed and influenced by the organism's knowledge of the current value of the reinforcer. The organism learns about the behavior and its reinforcer, and when the reinforcer's value changes, the strength of the operant adjusts accordingly.

The benefit of encoding the actual consequence of an action is the flexibility it provides when the value of that consequence changes even without any obvious shift in the context or occasion in which the response is produced. Rather than requiring an animal to perform a response to learn that its consequences are no longer valuable (and potentially noxious), integrating the relationship between the action and its consequences with its experience of the altered value of those consequences is sufficient to alter the performance of the response. This is not always true, however. Although the above studies suggest that operants can be goal directed, their performance can also appear to be consistent with simpler reinforcement theory. For example, Adams (1982) trained two groups of rats to lever press on a continuous reinforcement schedule for sugar solution. One group was given the opportunity to make 100 reinforced lever presses, whereas the other was given the opportunity to make 500 reinforced lever presses. After this training, half of each group was given the taste aversion (reinforcer devaluation) treatment in which the sugar was paired with illness, whereas the other half were given the control treatment with the sugar unpaired with illness. Lever pressing was then assessed in all animals in extinction. Despite the fact that the rats were all pressing the lever in a seemingly identical manner before devaluation—suggesting that, from a purely observational perspective, similar contingencies should be controlling their performance—the different amounts of training produced very different forms of behavioral control. The rats given only a moderate amount of training were sensitive to the outcome devaluation treatment and reduced responding compared to the nondevalued control group. In contrast, rats given extended training did not show this effect; although the rats given the devaluation treatment had a strong taste aversion to the sucrose, they lever-pressed on test in a similar manner to the unpaired control group. As Adams observed, it appeared that with moderate training the rats' actions were goal-directed, whereas when given extended training, they had become habits.

These early studies suggested that, depending on the amount of training, it is possible for lever pressing to become goal directed or habitual. That is, the contingency controlling an operant can either be driven by a situation-response-reinforcement contingency, in the case of habits, or by the association between the action and outcome, in the case of goal-directed actions. Subsequent studies have established a number of other important differences between goal-directed actions and habits. For example, (1) goal-directed actions develop and change very quickly and so can govern choice performance: Colwill and Rescorla (1985) were the first to establish that the devaluation of an outcome alters the rats' willingness to choose its associated action without affecting the performance of other, simultaneously available actions. Furthermore, (2) rather than being controlled by the context or situation, goal-directed actions are controlled by their association with a specific outcome (Colwill & Rescorla, 1986). As a consequence, goal-directed actions are sensitive to degradation of the action-outcome contingency and selectively so; in a situation where two actions are trained with different reinforcers, adding deliveries of one or other reinforcer that are not contiguous with its associated action reduces the performance of that action while leaving other actions unaffected (Balleine & Dickinson, 1998; Dickinson & Mulatero, 1989). Indeed, (3) when lengths are taken to make the performance of a goal-directed action conditional on a discriminative stimulus (SD), control by the SD is not over the response itself but rather over the response–outcome association (Bradfield & Balleine, 2013; Rescorla, 1991; see also Trask & Bouton, 2014). Other studies have investigated the influence of different forms of devaluation and revaluation of the instrumental reinforcer using shifts in primary motivation (Balleine, 1992; Dickinson & Dawson, 1987) and incentive learning manipulations (Balleine, 2001; Dickinson &
Balleine, 1994), including sensory-specific satiety (Balleine & Dickinson, 1998).

In contrast, habits have been found to be difficult to develop in choice situations; despite extensive overtraining, when an explicit choice between two actions is always made available, rats continue to show sensitivity to outcome devaluation (Colwill & Rescorla, 1988; Kosaki & Dickinson, 2010). Furthermore, in contrast to goal-directed actions, habits are strongly and immediately affected by shifts in motivational state (Dickinson, Balleine, Watt, Gonzalez, & Boakes, 1995) and by shifts in context (Thrailkill & Bouton, 2015). And as their strong relationship to the context suggests, habits have been found to be insensitive to treatments that would otherwise degrade the response-outcome contingency (Dezfouli & Balleine, 2012; Dickinson et al., 1998).

It is important to note that the distinction between goal-directed actions and habits is not unique to rats but is also found in humans. Relatively moderately trained actions in children above the age of 3 years (but not below) show sensitivity to reinforcer devaluation (Klossek, Russell, & Dickinson, 2008), whereas overtraining can render actions insensitive to outcome devaluation and make them habitual (Tricomi, Balleine, & O'Doherty, 2009). It should also be noted that actions and habits are not only mediated by distinct contingencies and learning rules but also by distinct neural systems: Balleine and colleagues have established that a cortical basal ganglia circuit involving the medial prefrontal cortex and caudate/dorsomedial striatum in humans and in rodents mediates goal-directed actions, whereas a parallel circuit involving the sensorimotor cortices and the putamen/dorsolateral striatum mediates habits (see Balleine & O'Doherty, 2010, for a review). The dissociation of these circuits provides even more evidence for a division between goal-directed actions and habits as fundamentally distinct forms of behavioral control.

Finally, the division between goal-directed actions and habits extends into abnormal behavior (Griffiths, Morris, & Balleine, 2014). For example, addiction is often characterized both as a loss of behavioral control and the development of a drug-taking habit (e.g., Everitt & Robbins, 2005). Consistent with this characterization, although operant responding for everyday rewards (like chocolate or sugar) in animals and humans can appear relatively normal compared to healthy controls, tests have found that drug exposure attenuates goal-directed action control and causes downregulation of the neural circuitry associated with goal-directed action while causing increases in habitual control and in habit-related circuitry (Furlong et al., 2017; Hogarth, Balleine, Corbit, & Killcross, 2013). Similarly, deficits in goal-directed action control have been found in adolescents with depression, social anxiety, or autism spectrum disorder and in adults with chronic schizophrenia, based on devaluation tests (Alvares et al., 2014, 2016; Morris, Quail, Griffiths, Green, & Balleine, 2015). Conversely, there is some evidence to suggest that disorders such as attention deficit/hyperactivity disorder and degenerative conditions, such as Parkinson's and Huntington's diseases, produce deficits in the habitual control of operant responding, causing deficits in operant performance due to the demands of multitasking (Griffiths et al., 2014; Redgrave et al., 2010).

In summary, many studies have pointed to the fact that, depending on the training conditions, the performance of an operant behavior can be controlled by very different contingencies. Goal-directed actions are malleable, flexible, rapidly acquired, and relatively readily suppressed or even eliminated from the response repertoire. In contrast, habits are inflexible, automatic, and difficult to change or to suppress. Goal-directed actions are less dependent on the situation or occasion for support than habits, which are highly situationally dependent, and although goal-directed actions depend on the value of their specific consequences, habits are not concerned with the specific properties or values of their consequences at all. Overall, therefore, in operant or instrumental conditioning, what you see can be ambiguous; an action can appear to be controlled by situational cues, but on closer inspection it may be found that shifts in context have little effect on performance; it may appear to be controlled by its consequences but subsequent changes in the value of those consequences may demonstrate that it is not. To specify the contingency controlling a response requires, therefore, knowledge of the operant's history and/or additional testing and assessment.
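
In sketch form (the function names and numbers below are hypothetical and are not taken from any of the experiments reviewed above), the difference is that a goal-directed controller consults the current value of the outcome at test, whereas a habitual controller emits a cached stimulus–response strength, so only the former shows an immediate reinforcer devaluation effect when tested in extinction.

def goal_directed_strength(response_outcome_assoc: float, outcome_value: float) -> float:
    # Goal-directed action: response strength tracks the CURRENT value of its outcome.
    return response_outcome_assoc * outcome_value

def habitual_strength(stimulus_response_strength: float) -> float:
    # Habit: response strength is read out from the cached S-R association;
    # the outcome's current value is never consulted.
    return stimulus_response_strength

# Hypothetical numbers: before devaluation the two operants look identical.
r_o_assoc, s_r_strength, sucrose_value = 1.0, 1.0, 1.0
print(goal_directed_strength(r_o_assoc, sucrose_value), habitual_strength(s_r_strength))  # 1.0 1.0

# Offline reinforcer devaluation (e.g., pairing the sucrose with illness) lowers
# the outcome's value without any further response-outcome pairings.
sucrose_value = 0.1
print(goal_directed_strength(r_o_assoc, sucrose_value))  # 0.1 -> devaluation effect
print(habitual_strength(s_r_strength))                   # 1.0 -> insensitive to devaluation

Nothing in the pre-devaluation output distinguishes the two controllers; only the devaluation test, or knowledge of the training history, tells them apart.
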
Operants With Low or Zero Strength

A related case may be made concerning operant behaviors that have very low or even zero rates. How they react to treatment in the future depends crucially on how they achieved those rates. As we describe next, if the response was once reinforced, but was then reduced or eliminated through extinction or punishment, it may return or relapse after some new, precipitating event occurs. Importantly, the return of the response cannot be predicted from simple observations taken before the new event. Again, what you see in behavior is not all there is.

Contemporary research has identified a number of triggering or precipitating events. One of the best studied is a change in the background context, typically the Skinner box or operant chamber in which the animal performs, which can be individuated by different floor composition, spatial locations, and scents. If the context is changed after extinction, an extinguished operant can return or "renew." The renewal effect can take different forms. In the most-studied version, a behavior is reinforced in one context (Context A) and then extinguished in another (Context B). If the behavior is then tested in Context A, the response can return—a phenomenon known as ABA renewal. Similarly, if the response is tested in a third context (Context C) after reinforcement and extinction in A then B, responding also recovers (ABC renewal). And if responding is reinforced in A and then extinguished in the same context (Context A), the response also renews when tested in a second context (AAB renewal). All three forms of renewal have been tested and confirmed with operant behavior (e.g., Bouton, Todd, Vurbic, & Winterbauer, 2011; Todd, 2013). ABA renewal also occurs after extinction in children with developmental disabilities (Kelley, Liddon, Ribeiro, Greif, & Podlesnik, 2015; see Podlesnik, Kelley, Jimenez-Gomez, & Bouton, 2017, for a recent review). And it is also the case that, in animals at least, renewal also occurs if the operant has been suppressed by punishment instead of extinction. ABA and ABC renewal have both been observed after positive punishment, where the response is suppressed by contingent footshock (Bouton & Schepers, 2015); and ABA renewal occurs if the response has been suppressed by an omission (differential reinforcement of other behavior [DRO]) contingency, that is, negative punishment (Nakajima, Urushihara, & Masaki, 2002). The renewal effect suggests that neither extinction nor punishment destroys the original operant learning. Instead, the animal learns to refrain from making the response (e.g., Bouton, Trask, & Carranza-Jasso, 2016), and this inhibition is relatively specific to the context in which it is learned (Todd, Vurbic, & Bouton, 2014). Thus, a behavior that has very low or apparently zero strength can come alive again if the context is changed. A suppressed or inhibited operant can be distinguished from a behavior with true zero strength by testing the effect of changing the context.

There are other events that can cause the return or relapse of extinguished or punished operants. In spontaneous recovery, the mere passage of time after extinction (e.g., Rescorla, 2004) or punishment (Estes, 1944) can cause the response to return again. In our view (e.g., Bouton, 1988), spontaneous recovery is another renewal effect in which the context change is provided by the passage of time. Just as extinguished responding renews when it is tested in a new physical context, it recovers when it is tested in a new temporal context. In fact, many different types of stimuli can play the role of context (e.g., Bouton, 2002). In state-dependent learning, drug states provide the context (e.g., Overton, 1985); for example, when extinction occurs when the organism is under the influence of a drug like a benzodiazepine or alcohol, the response renews when the animal is tested without the drug, that is, sober again (e.g., Bouton, Kenney, & Rosengard, 1990; see also Cunningham, 1979; Lattal, 2007). And recent experiments have suggested that hunger state can be a context: When rats lever pressed for sucrose or sweet-fatty pellets (rodent junk food) while they were satiated and then received extinction while they were hungry, the response recovered when it was tested while the rats were satiated again (Schepers & Bouton, 2017). Such renewal may explain the difficulties faced by dieters who might eat when they do not need food and then starve themselves on a diet—the inhibition of eating when dieting may not transfer too well to the satiated state. Other recent research has shown that when the organism must make a sequence of two separate responses to earn a reinforcer (specifically, in a discriminated heterogeneous chain), the first response provides a
kind of context for the second response (Thrailkill & Bouton, 2016). For example, if the second response is extinguished alone, apart from the chain, it is renewed when it is returned to the chain and tested after the rat has made the first response (Thrailkill et al., 2016). The first response is more affected by changing the physical (Skinner box) context than is the second response. The point is that many kinds of stimuli or events can play the role of context.

Another precipitating event that causes the return of extinguished or punished behavior is free presentations of the reinforcer. In the rat laboratory, when a few reinforcers are presented freely after an operant response has been extinguished, the behavior will return (e.g., Ostlund & Balleine, 2007; Reid, 1958; Rescorla & Skucy, 1969). We recently found that even reinforcers presented contingent on a second response (instead of freely) were also effective at reinstating an extinguished target response (Winterbauer & Bouton, 2011). One explanation is that reinforcer presentations are merely part of the background (or context) that sets the occasion for more responding (e.g., Ostlund & Balleine, 2007). Another is that the reinforcer presentations may condition the background context (e.g., the Skinner box), which might itself reinvigorate the extinguished behavior (Baker, Steinwald, & Bouton, 1991). Presentations of the reinforcer after extinction are also known to reinstate operant behaviors that are reinforced by drugs (e.g., de Wit & Stewart, 1981, 1983). And reinstatement effects have been reported with operants that have been suppressed by punishment (Panlilio, Thorndike, & Schindler, 2003). Once again, low- or zero-rate operants that have achieved that level through extinction or punishment can return through the occurrence of a precipitating event.

Recent research on extinction has focused on another "relapse" effect that may be especially relevant to behavior analysis. In differential reinforcement of alternative behavior (DRA), a new operant (R2) is reinforced at the same time a previously reinforced target operant (R1) is extinguished. The DRA procedure is widely used by behavior analysts in treating problem behavior in children with developmental disabilities or autism spectrum disorder, and it seems reasonable to think that in nature extinguished responses are also often replaced by reinforced alternative responses. However, if the alternative behavior is now put on extinction, the first behavior can return (e.g., Leitenberg, Rawson, & Bath, 1970). This resurgence effect is now being studied in several laboratories. Although several explanations have been suggested (Leitenberg et al., 1970; Shahan & Craig, 2017; Shahan & Sweeney, 2011), most evidence favors a view that is based on the principles described above: Removal of reinforcement for R2 during testing changes the reinforcer context, so the inhibited R1 renews (e.g., Winterbauer & Bouton, 2010). A number of findings support this hypothesis. Perhaps the most straightforward is that when R1 is reinforced with one reinforcer and R2 is reinforced with a different one while R1 is extinguished, free (noncontingent) presentations of the second reinforcer during testing eliminate resurgence, whereas presentation of the first reinforcer does not (Bouton & Trask, 2016). Thus, the second reinforcer is demonstrably a cue or context that in this case controls R1's extinction performance (see also Trask & Bouton, 2016). When the reinforcer is removed, the response returns; when it is maintained, the response remains inhibited and suppressed. The key to preventing resurgence is thus to encourage generalization between treatment and testing. For a more complete discussion of resurgence and the evidence supporting the context explanation, see the review by Trask, Schepers, and Bouton (2015).

It is worth mentioning that much of the early research on renewal, reinstatement, and spontaneous recovery was actually done in Pavlovian (respondent) conditioning (e.g., Bouton, 1988, 2004, 2017). (Resurgence has been studied exclusively in the operant domain, although we expect that an analogous effect would occur in respondent conditioning; e.g., see Lindblom & Jenkins, 1981.) The characteristics of extinguished respondents and operants are thus highly similar. We know that a conditional stimulus (CS) that elicits a very weak response or even no response at all can elicit the response again with a change of context (renewal), presentation of the Pavlovian unconditional stimulus or reinforcer (reinstatement), or the passage of time (spontaneous recovery; see Bouton, 2017, for one recent review). Renewal, reinstatement, and spontaneous recovery have also been shown after counterconditioning, in which, for example, CS–shock pairings are followed by CS–food pairings or CS–food is followed by CS–shock (e.g., Bouton & Peck, 1992; Brooks, Hale, Nelson, & Bouton, 1995; Holmes,
Leung, & Westbrook, 2016; Peck & Bouton, 1990). The effects of the precipitating events in turn depend on the CS's conditioning history. In a clear illustration of this, Bouton (1984, Experiment 5) paired a CS with a strong footshock in an initial phase. He then extinguished the response ("fear") that the CS elicited by presenting the CS several times without shock. Importantly, extinction was stopped before the response completely disappeared. A second set of rats received only a small number of pairings of the CS with a weak shock—without extinction. They received just enough conditioning trials so that the CS elicited a low level of responding that was indistinguishable from that in the conditioned-then-partially-extinguished group. Then both groups received footshock presentations (a reinstatement treatment). When the CS was then tested, there was a robust increase in responding to the conditioned-then-extinguished CS, but no increase to the conditioned-only CS. Thus, the effects of the reinstating shocks (and the conditioning of the context they demonstrably produced) depended crucially on the conditioning history of the CS. One could not have predicted the effect of the reinstatement shocks from prior responding to the CS alone. In respondent conditioning, like operant conditioning, behavioral silence can be misleading. What you see is not all there is.

Conclusions

To summarize the argument, it may be easiest to predict what events will control a particular behavior by first considering the behavior's history. Silent operants (those that have been inhibited, e.g., by extinction or punishment) may "inexplicably" seem to increase in strength after context change, the passage of time, reinforcer presentation, or the extinction of an alternative behavior that replaced it. Nonextinguished or nonpunished behaviors would not change in response to these events in the same way. To predict or control future behavior, as well as predict the effects of context change, the passage of time, reinforcer presentation, or the extinction of an alternative, one would need to know if the behavior has been inhibited or suppressed below some earlier rate. Goal-directed actions, on the other hand, can be strengthened or weakened by degradation of the action–outcome contingency or by offline changes in the value of the reinforcing outcome—as shown in the reinforcer devaluation effect. In contrast, habits are operants that are relatively immune to the immediate effects of reinforcer devaluation and to reductions in the action–outcome contingency. Actions and habits (like silent and nonsilent operants) can be distinguished by the kinds of events that control them. But knowing ahead of time whether a behavior is an action or habit will again depend on understanding the behavior's history: Actions have usually had little practice, have developed in choice situations, or have been reinforced on ratio schedules. Habits have contrastingly had larger amounts of practice, and have often been reinforced on interval schedules.

We already know that a behavior's history is important. For example, it is widely understood that a behavior that has been intermittently reinforced can be more resistant to extinction than one that has been consistently reinforced (e.g., Capaldi, 1967). However, the concepts of actions, habits, and inhibited or silent operants may have unique implications for the control and treatment of problem behavior. For one, if a problem behavior is a habit (rather than an action), current thinking suggests that it might be difficult to change. And of course, extinction (and punishment) treatments, including DRA or DRO, can be vulnerable to lapse and relapse effects (e.g., Bouton, 2014; Podlesnik et al., 2017). On the more positive side, extinction of an undesirable behavior might in principle allow a more desirable one (that was previously learned and replaced by an undesirable one) to resurge. And reinforcement of a positive new operant to the point of becoming a habit might make it especially resistant to change. These ideas and applications beg for more research to give them more richness and detail.

More generally, it is proposed that the accuracy and precision of prediction and control, the behavior analyst's two main desiderata, will be enhanced by considering the basic research and principles just reviewed. Moreover, prediction and control are also crucially enhanced by understanding a behavior's history. What you see is not necessarily all there is.

References

Adams, C. D. (1982). Variations in the sensitivity of instrumental responding to reinforcer devaluation. Quarterly Journal of Experimental Psychology, 33B, 109–122.
Adams, C. D., & Dickinson, A. (1981). Instrumental responding following reinforcer devaluation. Quarterly Journal of Experimental Psychology B: Comparative and Physiological Psychology, 33, 109–121. http://dx.doi.org/10.1080/14640748108400816
Alvares, G. A., Balleine, B. W., & Guastella, A. J. (2014). Impairments in goal-directed actions predict treatment response to cognitive-behavioral therapy in social anxiety disorder. PLoS ONE, 9, e94778. http://dx.doi.org/10.1371/journal.pone.0094778
Alvares, G. A., Balleine, B. W., Whittle, L., & Guastella, A. J. (2016). Reduced goal-directed action control in autism spectrum disorder. Autism Research, 9, 1285–1293. http://dx.doi.org/10.1002/aur.1613
Baker, A. G., Steinwald, H., & Bouton, M. E. (1991). Contextual conditioning and reinstatement of extinguished instrumental responding. Quarterly Journal of Experimental Psychology B: Comparative and Physiological Psychology, 43, 199–218.
Balleine, B. W. (1992). Instrumental performance following a shift in primary motivation depends on incentive learning. Journal of Experimental Psychology: Animal Behavior Processes, 18, 236–250.
Balleine, B. W. (2001). Incentive processes in instrumental conditioning. In R. Mowrer & S. Klein (Eds.), Handbook of contemporary learning theories (pp. 307–366). Hillsdale, NJ: LEA.
Balleine, B. W., & Dickinson, A. (1998). Goal-directed instrumental action: Contingency and incentive learning and their cortical substrates. Neuropharmacology, 37, 407–419. http://dx.doi.org/10.1016/S0028-3908(98)00033-1
Balleine, B. W., & O'Doherty, J. P. (2010). Human and rodent homologies in action control: Corticostriatal determinants of goal-directed and habitual action. Neuropsychopharmacology, 35, 48–69. http://dx.doi.org/10.1038/npp.2009.131
Bouton, M. E. (1984). Differential control by context in the inflation and reinstatement paradigms. Journal of Experimental Psychology: Animal Behavior Processes, 10, 56–74. http://dx.doi.org/10.1037/0097-7403.10.1.56
Bouton, M. E. (1988). Context and ambiguity in the extinction of emotional learning: Implications for exposure therapy. Behaviour Research and Therapy, 26, 137–149. http://dx.doi.org/10.1016/0005-7967(88)90113-1
Bouton, M. E. (2002). Context, ambiguity, and unlearning: Sources of relapse after behavioral extinction. Biological Psychiatry, 52, 976–986. http://dx.doi.org/10.1016/S0006-3223(02)01546-9
Bouton, M. E. (2004). Context and behavioral processes in extinction. Learning & Memory, 11, 485–494. http://dx.doi.org/10.1101/lm.78804
Bouton, M. E. (2014). Why behavior change is difficult to sustain. Preventive Medicine, 68, 29–36. http://dx.doi.org/10.1016/j.ypmed.2014.06.010
Bouton, M. E. (2017). Extinction: Behavioral mechanisms and their implications. In J. H. Byrne (Series Ed.) & R. Menzel (Vol. Ed.), Learning and memory: A comprehensive reference: Learning theory and behavior (Vol. 1, 2nd ed., pp. 61–83). Oxford, UK: Academic Press.
Bouton, M. E., Kenney, F. A., & Rosengard, C. (1990). State-dependent fear extinction with two benzodiazepine tranquilizers. Behavioral Neuroscience, 104, 44–55. http://dx.doi.org/10.1037/0735-7044.104.1.44
Bouton, M. E., & Peck, C. A. (1992). Spontaneous recovery in cross-motivational transfer (counterconditioning). Animal Learning & Behavior, 20, 313–321. http://dx.doi.org/10.3758/BF03197954
Bouton, M. E., & Schepers, S. T. (2015). Renewal after the punishment of free operant behavior. Journal of Experimental Psychology: Animal Learning and Cognition, 41, 81–90. http://dx.doi.org/10.1037/xan0000051
Bouton, M. E., Todd, T. P., Vurbic, D., & Winterbauer, N. E. (2011). Renewal after the extinction of free operant behavior. Learning & Behavior, 39, 57–67. http://dx.doi.org/10.3758/s13420-011-0018-6
Bouton, M. E., & Trask, S. (2016). Role of the discriminative properties of the reinforcer in resurgence. Learning & Behavior, 44, 137–150. http://dx.doi.org/10.3758/s13420-015-0197-7
Bouton, M. E., Trask, S., & Carranza-Jasso, R. (2016). Learning to inhibit the response during instrumental (operant) extinction. Journal of Experimental Psychology: Animal Learning and Cognition, 42, 246–258. http://dx.doi.org/10.1037/xan0000102
Bradfield, L. A., & Balleine, B. W. (2013). Hierarchical and binary associations compete for behavioral control during instrumental biconditional discrimination. Journal of Experimental Psychology: Animal Behavior Processes, 39, 2–13. http://dx.doi.org/10.1037/a0030941
Brooks, D. C., Hale, B., Nelson, J. B., & Bouton, M. E. (1995). Reinstatement after counterconditioning. Animal Learning & Behavior, 23, 383–390. http://dx.doi.org/10.3758/BF03198938
Capaldi, E. J. (1967). A sequential hypothesis of instrumental learning. In K. W. Spence & J. T. Spence (Eds.), Psychology of learning and motivation (Vol. 1, pp. 67–156). New York, NY: Academic Press.
Colwill, R. M., & Rescorla, R. A. (1985). Postconditioning devaluation of a reinforcer affects instrumental responding. Journal of Experimental Psychology: Animal Behavior Processes, 11, 120–132. http://dx.doi.org/10.1037/0097-7403.11.1.120
Colwill, R. M., & Rescorla, R. A. (1986). Associative structures in instrumental learning. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 20, pp. 55–104). New York, NY: Academic Press.
Colwill, R. M., & Rescorla, R. A. (1988). The role of response-reinforcer associations increases throughout extended instrumental training. Animal Learning & Behavior, 16, 105–111. http://dx.doi.org/10.3758/BF03209051
Crespi, L. P. (1942). Quantitative variation in incentive and performance in the white rat. American Journal of Psychology, 55, 467–517. http://dx.doi.org/10.2307/1417120
Cunningham, C. L. (1979). Alcohol as a cue for extinction: State dependency produced by conditioned inhibition. Animal Learning & Behavior, 7, 45–52. http://dx.doi.org/10.3758/BF03209656
de Wit, H., & Stewart, J. (1981). Reinstatement of cocaine-reinforced responding in the rat. Psychopharmacology, 75, 134–143. http://dx.doi.org/10.1007/BF00432175
de Wit, H., & Stewart, J. (1983). Drug reinstatement of heroin-reinforced responding in the rat. Psychopharmacology, 79, 29–31. http://dx.doi.org/10.1007/BF00433012
Dezfouli, A., & Balleine, B. W. (2012). Habits, action sequences and reinforcement learning. European Journal of Neuroscience, 35, 1036–1051. http://dx.doi.org/10.1111/j.1460-9568.2012.08050.x
Dickinson, A., & Balleine, B. W. (1994). Motivational control of goal-directed action. Animal Learning & Behavior, 22, 1–18. http://dx.doi.org/10.3758/BF03199951
Dickinson, A., Balleine, B. W., Watt, A., Gonzalez, F., & Boakes, R. A. (1995). Motivational control after extended instrumental training. Animal Learning & Behavior, 23, 197–206. http://dx.doi.org/10.3758/BF03199935
Dickinson, A., & Dawson, G. R. (1987). The role of the instrumental contingency in the motivational control of performance. Quarterly Journal of Experimental Psychology, 39B, 77–93.
Dickinson, A., & Mulatero, C. W. (1989). Reinforcer specificity of the suppression of instrumental performance on a non-contingent schedule. Behavioural Processes, 19, 167–180. http://dx.doi.org/10.1016/0376-6357(89)90039-9
Dickinson, A., Squire, S., Varga, Z., & Smith, J. W. (1998). Omission learning after instrumental pretraining. Quarterly Journal of Experimental Psychology B: Comparative and Physiological Psychology, 51, 271–286.
Estes, W. K. (1944). An experimental study of punishment. Psychological Monographs, 57, i-40. http://dx.doi.org/10.1037/h0093550
Everitt, B. J., & Robbins, T. W. (2005). Neural systems of reinforcement for drug addiction: From actions to habits to compulsion. Nature Neuroscience, 8, 1481–1489. http://dx.doi.org/10.1038/nn1579
Flaherty, C. F. (1996). Incentive relativity. Cambridge, UK: Cambridge University Press.
Furlong, T. M., Supit, A. S. A., Corbit, L. H., Killcross, S., & Balleine, B. W. (2017). Pulling habits out of rats: Adenosine 2A receptor antagonism in dorsomedial striatum rescues methamphetamine-induced deficits in goal-directed action. Addiction Biology, 22, 172–183. http://dx.doi.org/10.1111/adb.12316
Griffiths, K. R., Morris, R. W., & Balleine, B. W. (2014). Translational studies of goal-directed action as a framework for classifying deficits across psychiatric disorders. Frontiers in Systems Neuroscience, 8, 101.
Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36, 147–185. http://dx.doi.org/10.1901/jaba.2003.36-147
Hogarth, L., Balleine, B. W., Corbit, L. H., & Killcross, S. (2013). Associative learning mechanisms underpinning the transition from recreational drug use to addiction. Annals of the New York Academy of Sciences, 1282, 12–24. http://dx.doi.org/10.1111/j.1749-6632.2012.06768.x
Holmes, N. M., Leung, H. T., & Westbrook, R. F. (2016). Counterconditioned fear responses exhibit greater renewal than extinguished fear responses. Learning & Memory, 23, 141–150. http://dx.doi.org/10.1101/lm.040659.115
Hull, C. L. (1943). Principles of behavior: An introduction to behavior theory. New York, NY: Appleton-Century-Crofts.
Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus & Giroux.
Kelley, M. E., Liddon, C. J., Ribeiro, A., Greif, A. E., & Podlesnik, C. A. (2015). Basic and translational evaluation of renewal of operant responding. Journal of Applied Behavior Analysis, 48, 390–401. http://dx.doi.org/10.1002/jaba.209
Klossek, U. M., Russell, J., & Dickinson, A. (2008). The control of instrumental action following outcome devaluation in young children aged between 1 and 4 years. Journal of Experimental Psychology: General, 137, 39–51. http://dx.doi.org/10.1037/0096-3445.137.1.39
Kosaki, Y., & Dickinson, A. (2010). Choice and contingency in the development of behavioral autonomy during instrumental conditioning. Journal of Experimental Psychology: Animal Behavior Processes, 36, 334–342. http://dx.doi.org/10.1037/a0016887
Lattal, K. M. (2007). Effects of ethanol on encoding, consolidation, and expression of extinction following contextual fear conditioning. Behavioral Neuroscience, 121, 1280–1292. http://dx.doi.org/10.1037/0735-7044.121.6.1280
Leitenberg, H., Rawson, R. A., & Bath, K. (1970). Reinforcement of competing behavior during extinction. Science, 169, 301–303. http://dx.doi.org/10.1126/science.169.3942.301
Lindblom, L. L., & Jenkins, H. M. (1981). Responses eliminated by noncontingent or negatively contingent reinforcement recover in extinction. Journal of Experimental Psychology: Animal Behavior Processes, 7, 175–190. http://dx.doi.org/10.1037/0097-7403.7.2.175
Morris, R. W., Quail, S., Griffiths, K. R., Green, M. J., & Balleine, B. W. (2015). Corticostriatal control of goal-directed action is impaired in schizophrenia. Biological Psychiatry, 77, 187–195. http://dx.doi.org/10.1016/j.biopsych.2014.06.005
Nakajima, S., Urushihara, K., & Masaki, T. (2002). Renewal of operant performance formerly eliminated by omission or non-contingency training upon return to the acquisition context. Learning and Motivation, 33, 510–525. http://dx.doi.org/10.1016/S0023-9690(02)00009-7
Ostlund, S. B., & Balleine, B. W. (2007). Selective reinstatement of instrumental performance depends on the discriminative stimulus properties of the mediating outcome. Learning & Behavior, 35, 43–52. http://dx.doi.org/10.3758/BF03196073
Overton, D. A. (1985). Contextual stimulus effects of drugs and internal states. In P. D. Balsam & A. Tomie (Eds.), Context and learning (pp. 357–384). Hillsdale, NJ: Erlbaum.
Panlilio, L. V., Thorndike, E. B., & Schindler, C. W. (2003). Reinstatement of punishment-suppressed opioid self-administration in rats: An alternative model of relapse to drug abuse. Psychopharmacology, 168, 229–235. http://dx.doi.org/10.1007/s00213-002-1193-0
Peck, C. A., & Bouton, M. E. (1990). Context and performance in aversive-to-appetitive and appetitive-to-aversive transfer. Learning and Motivation, 21, 1–31. http://dx.doi.org/10.1016/0023-9690(90)90002-6
Podlesnik, C. A., Kelley, M. E., Jimenez-Gomez, C., & Bouton, M. E. (2017). Renewed behavior produced by context change and its implications for treatment maintenance: A review. Journal of Applied Behavior Analysis, 50, 675–697. http://dx.doi.org/10.1002/jaba.400
Redgrave, P., Rodriguez, M., Smith, Y., Rodriguez-Oroz, M. C., Lehericy, S., Bergman, H., . . . Obeso, J. A. (2010). Goal-directed and habitual control in the basal ganglia: Implications for Parkinson's disease. Nature Reviews Neuroscience, 11, 760–772. http://dx.doi.org/10.1038/nrn2915
Reid, R. L. (1958). The role of the reinforcer as a stimulus. British Journal of Psychology, 49, 202–209. http://dx.doi.org/10.1111/j.2044-8295.1958.tb00658.x
Rescorla, R. A. (1991). Associative relations in instrumental learning: The Eighteenth Bartlett Memorial Lecture. Quarterly Journal of Experimental Psychology B: Comparative and Physiological Psychology, 43, 1–23.
Rescorla, R. A. (2004). Spontaneous recovery. Learning & Memory, 11, 501–509. http://dx.doi.org/10.1101/lm.77504
Rescorla, R. A., & Skucy, J. C. (1969). Effect of response-independent reinforcers during extinction. Journal of Comparative and Physiological Psychology, 67, 381–389. http://dx.doi.org/10.1037/h0026793
Schepers, S. T., & Bouton, M. E. (2017). Hunger as a context: Food seeking that is inhibited during hunger can renew in the context of satiety. Psychological Science, 28, 1640–1648. http://dx.doi.org/10.1177/0956797617719084
Shahan, T. A., & Craig, A. R. (2017). Resurgence as choice. Behavioural Processes, 141, 100–127. http://dx.doi.org/10.1016/j.beproc.2016.10.006
Shahan, T. A., & Sweeney, M. M. (2011). A model of resurgence based on behavioral momentum theory. Journal of the Experimental Analysis of Behavior, 95, 91–108. http://dx.doi.org/10.1901/jeab.2011.95-91
Skinner, B. F. (1938). The behavior of organisms. New York, NY: Appleton.
Skinner, B. F. (1969). Contingencies of reinforcement. New York, NY: Appleton.
Spence, K. W. (1956). Behavior theory and conditioning. New Haven, CT: Yale University Press. http://dx.doi.org/10.1037/10029-000
Thrailkill, E. A., & Bouton, M. E. (2015). Contextual control of instrumental actions and habits. Journal of Experimental Psychology: Animal Learning and Cognition, 41, 69–80. http://dx.doi.org/10.1037/xan0000045
Thrailkill, E. A., & Bouton, M. E. (2016). Extinction and the associative structure of heterogeneous instrumental chains. Neurobiology of Learning and Memory, 133, 61–68. http://dx.doi.org/10.1016/j.nlm.2016.06.005
Thrailkill, E. A., Trott, J. M., Zerr, C. L., & Bouton, M. E. (2016). Contextual control of chained instrumental behaviors. Journal of Experimental Psychology: Animal Learning and Cognition, 42, 401–414. http://dx.doi.org/10.1037/xan0000112
Tinklepaugh, O. L. (1928). An experimental study of representative factors in monkeys. Journal of Comparative Psychology, 8, 197–236. http://dx.doi.org/10.1037/h0075798
Todd, T. P. (2013). Mechanisms of renewal after the extinction of instrumental behavior. Journal of Experimental Psychology: Animal Behavior Processes, 39, 193–207. http://dx.doi.org/10.1037/a0032236
Todd, T. P., Vurbic, D., & Bouton, M. E. (2014). Mechanisms of renewal after the extinction of discriminated operant behavior. Journal of Experimental Psychology: Animal Learning and Cognition, 40, 355–368. http://dx.doi.org/10.1037/xan0000021
Tolman, E. C. (1932). Purposive behavior in animals and men. New York, NY: Century.
Trask, S., & Bouton, M. E. (2014). Contextual control of operant behavior: Evidence for hierarchical associations in instrumental learning. Learning & Behavior, 42, 281–288. http://dx.doi.org/10.3758/s13420-014-0145-y
Trask, S., & Bouton, M. E. (2016). Discriminative properties of the reinforcer can be used to attenuate the renewal of extinguished operant behavior. Learning & Behavior, 44, 151–161. http://dx.doi.org/10.3758/s13420-015-0195-9
Trask, S., Schepers, S. T., & Bouton, M. E. (2015). Context change explains resurgence after the extinction of operant behavior. Mexican Journal of Behavior Analysis, 41, 187–210.
Tricomi, E., Balleine, B. W., & O'Doherty, J. P. (2009). A specific role for posterior dorsolateral striatum in human habit learning. European Journal of Neuroscience, 29, 2225–2232. http://dx.doi.org/10.1111/j.1460-9568.2009.06796.x
Winterbauer, N. E., & Bouton, M. E. (2010). Mechanisms of resurgence of an extinguished instrumental behavior. Journal of Experimental Psychology: Animal Behavior Processes, 36, 343–353. http://dx.doi.org/10.1037/a0017365
Winterbauer, N. E., & Bouton, M. E. (2011). Mechanisms of resurgence: II. Response-contingent reinforcers can reinstate a second extinguished behavior. Learning and Motivation, 42, 154–164. http://dx.doi.org/10.1016/j.lmot.2011.01.002

Received August 10, 2017
Accepted October 27, 2017
