
LEARNING

3. LEARNING
a. Definition of Learning
b. Classical Conditioning: Unconditional Stimulus (US), Conditioned Stimulus (CS),
Unconditional Response (UR) and Conditioned response (CR)
i. Extinction
ii. Stimulus Discrimination
iii. Stimulus Generalization
c. Operant Conditioning
i. Reinforcement: Positive and Negative
ii. Punishment
iii. Schedules of reinforcement: Only definitions
d. Observational Learning
i. Definition and Basic Principle

DEFINITION:

Learning is a permanent or relatively permanent change in behaviour (or behaviour potential) resulting from experience.

Several aspects of this definition are noteworthy.

1. The term learning does not apply to temporary changes in behavior such as those stemming
from fatigue, drugs or illness.

2. It does not refer to changes resulting from maturation, that is, the fact that you change in many ways as you grow and develop.

3. Learning can result from vicarious as well as from direct experiences; in other words, you can
be affected by observing events and behavior in your environment as well as by participating in
them.

4. Finally, the changes produced by learning are not always positive in nature.

___________________________________________________________

CLASSICAL CONDITIONING:

Introduction

Behaviorism as a movement in psychology appeared in 1913 when John Broadus Watson published the classic article 'Psychology as the Behaviorist Views It'.

John Watson proposed that the process of classical conditioning (based on Pavlov’s
observations) was able to explain all aspects of human psychology.
Everything from speech to emotional responses was simply patterns of stimulus and response.
Watson denied completely the existence of the mind or consciousness.

Watson believed that all individual differences in behavior were due to different experiences of
learning. He famously said:

"Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in
and I'll guarantee to take any one at random and train him to become any type of specialist I
might select - doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief,
regardless of his talents, penchants, tendencies, abilities, vocations and the race of his
ancestors."

WHAT IS CLASSICAL CONDITIONING?

Classical conditioning is a type of learning that had a major influence on the school of thought in
psychology known as behaviorism. Discovered by Russian physiologist Ivan Pavlov, classical
conditioning is a learning process that occurs through associations between an environmental
stimulus and a naturally occurring stimulus.

Behaviorism is based on the assumption that learning occurs through interactions with the
environment. Two other assumptions of this theory are that the environment shapes behavior and
that taking internal mental states such as thoughts, feelings, and emotions into consideration is
useless in explaining behavior.

It's important to note that classical conditioning involves placing a neutral signal before a
naturally occurring reflex. In Pavlov's classic experiment with dogs, the neutral signal was the
sound of a tone and the naturally occurring reflex was salivating in response to food. By
associating the neutral stimulus with the environmental stimulus (the presentation of food), the
sound of the tone alone could produce the salivation response.

In order to understand more about how classical conditioning works, it is important to be familiar with the basic principles of the process.

DEFINITION:

Classical conditioning is a technique used in behavioral training. A naturally occurring stimulus is paired with a response. Then, a previously neutral stimulus is paired with the naturally occurring stimulus. Eventually, the previously neutral stimulus comes to evoke the response without the presence of the naturally occurring stimulus. The two elements are then known as the conditioned stimulus and the conditioned response.

The Unconditioned Stimulus


The unconditioned stimulus is one that unconditionally, naturally, and automatically triggers a
response. For example, when you smell one of your favorite foods, you may immediately feel
very hungry. In this example, the smell of the food is the unconditioned stimulus.

The Unconditioned Response

The unconditioned response is the unlearned response that occurs naturally in response to the
unconditioned stimulus. In our example, the feeling of hunger in response to the smell of food is
the unconditioned response.

The Conditioned Stimulus

The conditioned stimulus is a previously neutral stimulus that, after becoming associated with the
unconditioned stimulus, eventually comes to trigger a conditioned response. In our earlier
example, suppose that when you smelled your favorite food, you also heard the sound of a
whistle. While the whistle is unrelated to the smell of the food, if the sound of the whistle was
paired multiple times with the smell, the sound would eventually trigger the conditioned
response. In this case, the sound of the whistle is the conditioned stimulus.

The Conditioned Response

The conditioned response is the learned response to the previously neutral stimulus. In our
example, the conditioned response would be feeling hungry when you heard the sound of the
whistle.

Examples of Classical Conditioning

It can be helpful to look at a few examples of how the classical conditioning process operates
both in experimental and real-world settings:

Classical Conditioning a Fear Response

One of the most famous examples of classical conditioning was John B. Watson's experiment in
which a fear response was conditioned in a young boy known as Little Albert. The child initially
showed no fear of a white rat, but after the presentation of the rat was paired repeatedly with
loud, scary sounds, the child would cry when the rat was present. The child's fear also
generalized to other fuzzy white objects.

Let's examine the elements of this classic experiment. Prior to the conditioning, the white rat was
a neutral stimulus. The unconditioned stimulus was the loud, clanging sounds and the
unconditioned response was the fear response created by the noise. By repeatedly pairing the rat
with the unconditioned stimulus, the white rat (now the conditioned stimulus) came to evoke the
fear response (now the conditioned response).
This experiment illustrates how phobias can form through classical conditioning. In many cases,
a single pairing of a neutral stimulus (a dog, for example) and a frightening experience (being
bitten by the dog) can lead to a lasting phobia (being afraid of dogs).

Classical Conditioning Examples

Classical conditioning theory involves learning a new behavior via the process of association. In simple terms, two stimuli are linked together to produce a new learned response in a person or animal. There are three stages to classical conditioning. In each stage the stimuli and responses are given special scientific terms:

Stage 1: Before Conditioning:

In this stage, the unconditioned stimulus (UCS) produces an unconditioned response (UCR) in an
organism. In basic terms this means that a stimulus in the environment has produced a behavior /
response which is unlearned (i.e. unconditioned) and therefore is a natural response which has
not been taught. In this respect no new behavior has been learned yet.

For example, a stomach virus (UCS) would produce a response of nausea (UCR). In another
example a perfume (UCS) could create a response of happiness or desire (UCR).

This stage also involves another stimulus which has no effect on a person and is called the
neutral stimulus (NS). The NS could be a person, object, place etc. The neutral stimulus in
classical conditioning does not produce a response until it is paired with the unconditioned
stimulus.

Stage 2: During Conditioning:

During this stage a stimulus which produces no response (i.e. neutral) is associated with the unconditioned stimulus, at which point it now becomes known as the conditioned stimulus (CS).

For example a stomach virus (UCS) might be associated with eating a certain food such as
chocolate (CS). Also perfume (UCS) might be associated with a specific person (CS).

Often during this stage the UCS must be associated with the CS on a number of occasions, or trials, for learning to take place. However, one-trial learning can happen on certain occasions when it is not necessary for an association to be strengthened over time (such as being sick after food poisoning or drinking too much alcohol).

Stage 3: After Conditioning:


Now the conditioned stimulus (CS) has been associated with the unconditioned stimulus (UCS)
to create a new conditioned response (CR).

For example, a person (CS) who has been associated with nice perfume (UCS) is now found attractive (CR). Also, chocolate (CS), which was eaten before a person was sick with a virus (UCS), now produces a response of nausea (CR).
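To make the three stages concrete, here is a minimal toy sketch in Python (an illustration added here, not part of the source material). The linear update rule, the learning rate, and the response threshold are assumptions chosen only to capture the idea that repeated pairings of the neutral/conditioned stimulus with the UCS strengthen an association until the CS alone evokes the response.

# Toy sketch of the three stages of classical conditioning.
# Assumption (not from the text): association strength grows by a fixed
# fraction of its remaining distance to 1.0 on each CS-UCS pairing.

class ConditioningModel:
    def __init__(self, learning_rate=0.3, threshold=0.5):
        self.strength = 0.0          # CS-UCS association (0 = still a neutral stimulus)
        self.learning_rate = learning_rate
        self.threshold = threshold   # strength needed for the CS to elicit the CR

    def pair_cs_with_ucs(self):
        # Stage 2: present the (to-be-)conditioned stimulus together with the UCS.
        self.strength += self.learning_rate * (1.0 - self.strength)

    def cs_alone_evokes_cr(self):
        # Stage 1 / Stage 3 test: does the CS by itself now evoke the response?
        return self.strength >= self.threshold


model = ConditioningModel()
print(model.cs_alone_evokes_cr())    # Stage 1 (before conditioning): False

for trial in range(5):               # Stage 2 (during conditioning): repeated pairings
    model.pair_cs_with_ucs()

print(model.cs_alone_evokes_cr())    # Stage 3 (after conditioning): True

Run as a script, the first print shows False and the second True, mirroring the shift from a neutral stimulus to a conditioned stimulus that evokes a conditioned response.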

Little Albert Experiment (Phobias)

Ivan Pavlov showed that classical conditioning applied to animals. Did it also apply to humans?
In a famous (though ethically dubious) experiment, Watson and Rayner (1920) showed that it
did.

Little Albert was a 9-month-old infant who was tested on his reactions to various stimuli. He was shown a white rat, a rabbit, a monkey and various masks. Albert, described as "on the whole stolid and unemotional", showed no fear of any of these stimuli. However, what did startle him and cause him to be afraid was a hammer being struck against a steel bar behind his head. The sudden loud noise would cause "Little Albert" to burst into tears.

When "Little Albert" was just over 11 months old the white rat was presented and seconds later
the hammer was struck against the steel bar. This was done 7 times over the next 7 weeks and
each time "little Albert" burst into tears. By now "little Albert only had to see the rat and he
immediately showed every sign of fear. He would cry (whether or not the hammer was hit
against the steel bar) and he would attempt to crawl away.

Watson and Rayner had shown that classical conditioning could be used to create a phobia. A phobia is an irrational fear, i.e. a fear that is out of proportion to the danger. Over the next few weeks and months "Little Albert" was observed, and 10 days after conditioning his fear of the rat was much less marked. This dying out of a learned response is called extinction. However, even after a full month the fear was still evident.

EXTINCTION:

In psychology, extinction refers to the gradual weakening of a conditioned response that results
in the behavior decreasing or disappearing.

In classical conditioning, this happens when a conditioned stimulus is no longer paired with an
unconditioned stimulus.

In classical conditioning, when a conditioned stimulus is presented alone without an unconditioned stimulus, the conditioned response will eventually cease. For example, in Pavlov's
classic research, a dog was conditioned to salivate to the sound of a bell. When the bell was
presented repeatedly without the presentation of food, the salivation response eventually became
extinct.
Similarly, if the smell of food (the unconditioned stimulus) had been paired with the sound of a whistle (the conditioned stimulus), the whistle would eventually come to evoke the conditioned response of hunger. If the smell of food were then no longer paired with the whistle, the conditioned response (hunger) would eventually disappear.
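As a rough illustration, the hypothetical ConditioningModel sketched earlier can be given an extinction procedure: presenting the CS repeatedly without the UCS weakens the association until the CR no longer appears. The decay rule and rate below are assumptions made only for illustration.

# Extinction sketch, reusing the hypothetical ConditioningModel from the
# classical-conditioning example above. Assumption: each CS-alone presentation
# removes a fixed fraction of the association strength.

def extinction_trial(model, decay_rate=0.3):
    # Present the CS without the UCS: the conditioned response weakens.
    model.strength -= decay_rate * model.strength

model = ConditioningModel()
for _ in range(5):                   # acquisition: whistle paired with the smell of food
    model.pair_cs_with_ucs()
print(model.cs_alone_evokes_cr())    # True - the whistle alone evokes hunger

for _ in range(10):                  # extinction: whistle presented alone, no food
    extinction_trial(model)
print(model.cs_alone_evokes_cr())    # False - the conditioned response has extinguished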

Stimulus Generalization & Discrimination

In classical conditioning, individuals learn an association between the CS and UCS. For
example, a dog that is treated cruelly by its male owner learns to be afraid of that man. The CS,
in this example, is the sight, sound, etc. of the man, the UCS is the cruel treatment, the UCR is
the distress elicited by the cruel treatment, and the CR is fear of the man. Sometimes, the dog not
only becomes afraid of the man who treats it cruelly but it also may become afraid of all men.
This shows that the dog has learned something about the characteristics of men, in general —
such as their smell, body shape, walking gait, height, deepness of voice, etc. — and has learned
to be afraid of any human with these characteristics. The dog has generalized its fear of its male
owner to other men. The tendency for stimuli similar to a CS to also elicit a CR is referred to as
stimulus generalization. It occurs in virtually all cases of classical conditioning since there
always are other stimuli that share similarities with the CS.

Let’s look at an example of stimulus generalization from a classic experiment on classical conditioning. Shenger-Krestovnika (1921; see Windholz, 1989) demonstrated that dogs that experienced the taste of meat (UCS) whenever they saw a circle (CS) learned to salivate to the circle (CR) just as they salivated reflexively (UCR) to the taste of meat (see Figure 1).

Figure 1. Design of the classical-conditioning study of Shenger-Krestovnika (1921)

Shenger-Krestovnika then found that dogs also would salivate to the sight of an ellipse. Thus, for
these dogs, the CR of salivation to the sight of a circle showed stimulus generalization to the
ellipse (see Figure 2).

Figure 2. Stimulus generalization from a circle to an ellipse

Stimulus Generalization and Phobias

In the case of Little Albert, the baby who was classically conditioned to fear the sight of a white
rat after the rat had been paired with an unexpected loud noise, Watson and Rayner (1920)
reported that Albert showed a “transfer” of his learned anxiety to a rabbit, a dog, a seal-fur coat,
a Santa Claus mask, and perhaps even Watson’s hair (although his reactions to these objects
were not always consistent, and the study did not include adequate controls for extraneous
variables). The classical conditioning theory of phobic disorder states that the learned fear to a
CS generalizes (transfers) to other stimuli, with the greatest amount of transfer occurring to
stimuli that are most similar to the CS.

What is Stimulus Discrimination?

As described above, Shenger-Krestovnika (1921) found that dogs showed stimulus generalization to the sight of an ellipse when they had been classically conditioned to salivate to the sight of a circle. In the next part of her study, Shenger-Krestovnika continued to pair the circle with meat but never paired the ellipse with meat. She found that, over time, the dogs stopped salivating to the ellipse but continued to salivate to the circle. That is, the dogs were able to discriminate between the ellipse and the circle, and learned that they received meat only after seeing the circle (see Figure 3).

Figure 3. Stimulus discrimination between a circle and an ellipse

The tendency for stimuli similar to a CS to stop eliciting a CR when they are not followed by a
UCS is referred to as stimulus discrimination. In other words, with the stimulus-discrimination
procedure (illustrated in the work of Shenger-Krestovnika, 1921), the CR extinguishes to the
stimulus that is similar to the CS. You may have learned, for example, to respond with anxiety
(CR) to a particular tone of voice (CS) used by your parent when that tone, in the past, had
repeatedly been followed by an outburst of anger (UCS). On the other hand, your parent may
have used a slightly different tone of voice when expressing mock anger. In this case, you
probably learned to discriminate between the two and to not become anxious when hearing the
tone associated with mock anger.
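A small, purely illustrative sketch of the Shenger-Krestovnika design may help tie the two ideas together: generalization is modeled here as the trained strength scaled by a similarity score, and discrimination as extinction of the response to the unreinforced (ellipse) stimulus. The numbers, threshold, and scaling rule are assumptions, not values from the study.

# Toy sketch of stimulus generalization and discrimination.
THRESHOLD = 0.5                       # strength needed to elicit salivation

strength_circle = 0.9                 # circle repeatedly paired with meat (acquisition)
similarity = 0.8                      # the ellipse resembles the circle

# Stimulus generalization: the ellipse elicits salivation it was never trained for.
strength_ellipse = strength_circle * similarity
print(strength_ellipse >= THRESHOLD)  # True - the dog salivates to the ellipse

# Stimulus discrimination: the circle is still followed by meat, the ellipse never is,
# so the response to the ellipse extinguishes while the response to the circle remains.
for _ in range(10):
    strength_ellipse *= 0.7           # unreinforced ellipse presentations

print(strength_circle >= THRESHOLD)   # True - salivation to the circle is maintained
print(strength_ellipse >= THRESHOLD)  # False - salivation to the ellipse has stopped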

Stimulus Discrimination and Phobias

The classical conditioning theory of phobic disorder states that individuals learn to discriminate
between a CS that is followed reliably by a fear-inducing UCS and stimuli that, although similar,
are rarely or never followed by the UCS. For example, in the case of the dog that is fearful of all
men because it has been treated cruelly by a particular man, it probably will learn to feel fear
only to the man who abused it if most other men the dog meets treat it kindly.

OPERANT CONDITIONING:

What Is Operant Conditioning?

Operant conditioning (sometimes referred to as instrumental conditioning) is a method of learning that occurs through rewards and punishments for behavior. Through operant conditioning, an association is made between a behavior and a consequence for that behavior.

The term operant conditioning was coined by behaviorist B.F. Skinner, which is why you may
occasionally hear it referred to as Skinnerian conditioning. As a behaviorist, Skinner believed
that internal thoughts and motivations could not be used to explain behavior. Instead, he
suggested, we should look only at the external, observable causes of human behavior.

Skinner used the term operant to refer to any "active behavior that operates upon the
environment to generate consequences" (1953). In other words, Skinner's theory explained how
we acquire the range of learned behaviors we exhibit each and every day.

Examples of Operant Conditioning

We can find examples of operant conditioning at work all around us. Consider the case of
children completing homework to earn a reward from a parent or teacher, or employees finishing
projects to receive praise or promotions.

In these examples, the promise or possibility of rewards causes an increase in behavior, but
operant conditioning can also be used to decrease a behavior. The removal of a desirable
outcome or the application of a negative outcome can be used to decrease or prevent undesirable
behaviors. For example, a child may be told they will lose recess privileges if they talk out of
turn in class. This potential for punishment may lead to a decrease in disruptive behaviors.

Components of Operant Conditioning

Some key concepts in operant conditioning:

Reinforcement is any event that strengthens or increases the behavior it follows. There are two
kinds of reinforcers:

1. Positive reinforcers are favorable events or outcomes that are presented after the behavior. In situations that reflect positive reinforcement, a response or behavior is strengthened by the addition of something, such as praise or a direct reward.

2. Negative reinforcers involve the removal of unfavorable events or outcomes after the display of a behavior. In these situations, a response is strengthened by the removal of something considered unpleasant.

In both of these cases of reinforcement, the behavior increases.

Punishment, on the other hand, is the presentation of an adverse event or outcome that causes a
decrease in the behavior it follows.

POSITIVE REINFORCEMENT:

In operant conditioning, positive reinforcement involves the addition of a reinforcing stimulus following a behavior that makes it more likely that the behavior will occur again in the future. When a favorable outcome, event, or reward occurs after an action, that particular response or behavior will be strengthened.
One of the easiest ways to remember positive reinforcement is to think of it as something being added. By thinking of it in these terms, you may find it easier to identify real-world examples of positive reinforcement.

Examples of Positive Reinforcement

Consider the following examples:

After you execute a turn during a skiing lesson, your instructor shouts out, "Great job!"

At work, you exceed this month's sales quota so your boss gives you a bonus.

For your psychology class, you watch a video about the human brain and write a paper about
what you learned. Your instructor gives you 20 extra credit points for your work.

Can you identify the positive reinforcement in each of these examples? The ski instructor
offering praise, the employer giving a bonus, and the teacher providing bonus points are all
examples of positive reinforcers. In each of these situations, the reinforcement is an additional
stimulus occurring after the behavior that increases the likelihood that the behavior will occur
again in the future.

An important thing to note is that positive reinforcement is not always a good thing. For
example, when a child misbehaves in a store, some parents might give them extra attention or
even buy the child a toy. Children quickly learn that by acting out, they can gain attention from
the parent or even acquire objects that they want. Essentially, parents are actually reinforcing the
misbehavior. In this case, the better solution would be to use positive reinforcement when the
child is actually displaying good behavior.

Different Types of Positive Reinforcers

There are many different types of reinforcers that can be used to increase behaviors, but it is
important to note that the type of reinforcer used depends upon the individual and the situation.
While gold stars and tokens might be very effective reinforcement for a second-grader, they are not going to have the same effect on a high school or college student.

Natural reinforcers are those that occur directly as a result of the behavior. For example, a girl
studies hard, pays attention in class, and does her homework. As a result, she gets excellent
grades.

Token reinforcers are points or tokens that are awarded for performing certain actions. These
tokens can then be exchanged for something of value.

Social reinforcers involve expressing approval of a behavior, such as a teacher, parent, or
employer saying or writing "Good job" or "Excellent work."

Tangible reinforcers involve the presentation of an actual, physical reward such as candy,
treats, toys, money, and other desired objects. While these types of rewards can be powerfully
motivating, they should be used sparingly and with caution.

When Is Positive Reinforcement Most Effective?

When used correctly, positive reinforcement can be very effective. According to a behavioral
guidelines checklist published by Utah State University, positive reinforcement is most effective
when it occurs immediately after the behavior. The guidelines also recommend the reinforcement
should be presented enthusiastically and should occur frequently.

The shorter the amount of time between a behavior and the presentation of positive reinforcement, the stronger the connection will be. If a long period of time elapses between the behavior and the reinforcement, the connection will be weaker. It also becomes more likely that an intervening behavior might accidentally be reinforced.

NEGATIVE REINFORCEMENT

Negative reinforcement is a term described by B. F. Skinner in his theory of operant conditioning. In negative reinforcement, a response or behavior is strengthened by stopping, removing or avoiding a negative outcome or aversive stimulus.

Aversive stimuli tend to involve some type of discomfort, either physical or psychological.
Behaviors are negatively reinforced when they allow you to escape from aversive stimuli that are
already present or allow you to completely avoid the aversive stimuli before they happen.

One of the best ways to remember negative reinforcement is to think of it as something being subtracted from the situation. When you look at it in this way, it may be easier to identify examples of negative reinforcement in the real world.

Examples of Negative Reinforcement

Learn more by looking at the following examples:

Before heading out for a day at the beach, you slather on sunscreen in order to avoid getting
sunburned.

You decide to clean up your mess in the kitchen in order to avoid getting in a fight with your
roommate.

On Monday morning, you leave the house early in order to avoid getting stuck in traffic and
being late for class.

Can you identify the negative reinforcer in each of these examples? Sunburn, a fight with your roommate and being late for class are all negative outcomes that were avoided by performing a specific behavior. By eliminating these undesirable outcomes, the preventative behaviors become more likely to occur again in the future.

Negative Reinforcement versus Punishment

One mistake that people often make is confusing negative reinforcement with punishment.
Remember, however, that negative reinforcement involves the removal of a negative condition in
order to strengthen a behavior. Punishment, on the other hand, involves either presenting or
taking away a stimulus in order to weaken a behavior.

Consider the following example and determine whether you think it is an example of negative
reinforcement or punishment:

Timmy is supposed to clean his room every Saturday morning. Last weekend, he went out to
play with his friend without cleaning his room. As a result, his father made him spend the rest of
the weekend doing other chores like cleaning out the garage, mowing the lawn and weeding the
garden, in addition to cleaning his room.

If you said that this was an example of punishment, then you are correct. Because Timmy didn't
clean his room, his father assigned a punishment of having to do extra chores.

When Is Negative Reinforcement Most Effective?

Negative reinforcement can be an effective way to strengthen a desired behavior. However, it is most effective when reinforcers are presented immediately following a behavior. When a long period of time elapses between the behavior and the reinforcer, the response is likely to be weaker. In some cases, behaviors that occur in the intervening time between the initial action and the reinforcer may also be inadvertently strengthened.

According to Wolfgang (2001), negative reinforcement should be used sparingly in classroom settings, while positive reinforcement should be emphasized. While negative reinforcement can produce immediate results, he suggests that it is best suited for short-term use.

PUNISHMENT:

Punishment is used to help decrease the probability that a specific undesired behavior will occur, with the delivery of a consequence immediately after the undesired response/behavior is exhibited. When people hear that punishment procedures are being used, they typically think that something wrong or harmful is being done, but that is not necessarily the case. Punishment procedures have been used with both typically and atypically developing children, teenagers, elderly persons, animals, and people exhibiting different psychological disorders. There are two types of punishment: positive and negative, and it can be difficult to tell the difference between the two. Below are some examples to help clear up the confusion.

What is Positive Punishment:

Positive punishment works by presenting a negative consequence after an undesired behavior is exhibited, making the behavior less likely to happen in the future. The following are some examples of positive punishment:

• A child picks his nose during class and the teacher reprimands him in front of his classmates.
• A child wears his favorite hat to church or at dinner; his parents scold him for wearing it and make him remove the hat.
• During a meeting or while in class, your cell phone starts ringing and you are lectured on why it is not okay to have your phone on.

What is Negative Punishment:

Negative punishment happens when a certain desired stimulus/item is removed after a particular undesired behavior is exhibited, resulting in the behavior happening less often in the future. The
following are some examples of negative punishment:

• For a child who really enjoys a specific class, such as gym or music classes at school, negative punishment can happen if they are removed from that class and sent to the principal’s office because they were acting out/misbehaving.
• If a child does not follow directions or acts inappropriately, he loses a token for good behavior that can later be cashed in for a prize.
• Siblings get in a fight over who gets to go first in a game or who gets to play with a new toy, and the parent takes the game/toy away.

When thinking about punishment, always remember that the end result is to try to decrease the
undesired behavior. For positive punishment, try to think of it as adding a negative consequence
after an undesired behavior is emitted to decrease future responses. As for negative punishment,
try to think of it as taking away a certain desired item after the undesired behavior happens in
order to decrease future responses.
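Because the four consequence types differ only along two dimensions (whether a stimulus is added or removed, and whether the behavior afterwards becomes more or less frequent), they can be summarized in a short sketch. The helper function below is hypothetical and purely illustrative; only the four labels come from the text.

# Illustrative classifier for the four operant consequences described above.
# The function and its boolean inputs are hypothetical, not a standard API.

def classify_consequence(stimulus_added: bool, behavior_increases: bool) -> str:
    if behavior_increases:
        # Reinforcement (positive or negative) strengthens the behavior it follows.
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    # Punishment (positive or negative) weakens the behavior it follows.
    return "positive punishment" if stimulus_added else "negative punishment"

# Examples drawn from the text:
print(classify_consequence(True, True))    # praise after a good ski turn
print(classify_consequence(False, True))   # sunscreen lets you avoid sunburn
print(classify_consequence(True, False))   # scolding a child for picking his nose
print(classify_consequence(False, False))  # taking away a toy after a fight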

SCHEDULES OF REINFORCEMENT:

There are four schedules of reinforcement:

1. Fixed-ratio schedules are those where a response is reinforced only after a specified number
of responses. This schedule produces a high, steady rate of responding with only a brief pause
after the delivery of the reinforcer. An example of a fixed-ratio schedule would be delivering a
food pellet to a rat after it presses a bar five times.
E.g. Most people enjoy getting paid, so the first example will focus on money. People get
paid for work in all types of different ways. You can probably think of several different
ways off the top of your head. Chances are that you had not really thought about them in
terms of schedules of reinforcement. The most obvious example of this is piecework.
Some people get paid on a salary or hourly basis, but for those who get paid based on the
number of finished products they create, they are being paid on a fixed-ratio schedule of
reinforcement.
Let's say that you are a carpenter and you own your own cabinet business. You spend a
lot of time, energy, and money designing, building, and installing the cabinets that you
create. Most people are not going to want to pay you until you deliver and install a
finished product. Every time you deliver and install a finished set of cabinets you get
paid. This is a pretty simple example, but the behavior (making and installing cabinets) is
reinforced ($) each time it is performed.
2. Variable-ratio schedules occur when a response is reinforced after an unpredictable number of responses. This schedule creates a high, steady rate of responding. Gambling and lottery games are good examples of a reward based on a variable-ratio schedule. In a lab setting, this might involve delivering food pellets to a rat after one bar press, again after four bar presses, and a third pellet after two bar presses.
e.g. Let's look at a couple of examples of variable-ratio schedules of reinforcement in everyday life.
Slot Machines
It's pretty safe to say that slot machines can be used to successfully alter human behavior. Go into any casino across the US and you will see people repeatedly pulling the handle or pushing the button over and over again, believing that the next pull or button push could result in a big payout. Thanks to variable-ratio schedules of reinforcement, people will continue to put money in the machine even if they don't initially get rewarded.
Slot machines are pre-set to pay out after an average number of responses (handle pulls or button pushes) has been delivered. For instance, if a machine were set up to pay out after an average of 10 responses, you might win some money on the 5th pull, the 12th pull, and the 13th pull (the average of 5, 12, and 13 is 10). This variable-ratio schedule of reinforcement results in an exciting experience, since you never really know when the reinforcer is coming. For many people, not knowing is what keeps them playing.

3. Fixed-interval schedules are those where the first response is rewarded only after a specified amount of time has elapsed. This schedule causes high amounts of responding near the end of the interval, but much slower responding immediately after the delivery of the reinforcer. An example of this in a lab setting would be reinforcing a rat with a food pellet for the first bar press after a 30-second interval has elapsed.
e.g. In the Real World: A weekly paycheck is a good example of a fixed-interval schedule. The employee receives reinforcement every seven days, which may result in a higher response rate as payday approaches.

4. Variable-interval schedules occur when a response is rewarded after an unpredictable amount of time has passed. This schedule produces a slow, steady rate of response. An example of this would be delivering a food pellet to a rat after the first bar press following a one-minute interval, another pellet for the first response following a five-minute interval, and a third food pellet for the first response following a three-minute interval.

e.g. Your Employer Checking Your Work: Does your boss drop by your office a few times throughout the day to check your progress? This is an example of a variable-interval schedule. These check-ins occur at unpredictable times, so you never know when they might happen.

Another example is a parent attending to the cries of a child. Parents will not typically attend to the child each time it cries, but will leave him or her to fuss for a period before attending.
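The four schedules can also be expressed as a small simulation sketch. The ratio, interval, and randomization choices below are illustrative assumptions rather than values taken from the text, and the function names are hypothetical.

# Toy simulation of the four schedules of reinforcement.
import random

def fixed_ratio(n_responses, ratio=5):
    # Reinforce every `ratio`-th response (e.g. a food pellet per 5 bar presses).
    return [r % ratio == 0 for r in range(1, n_responses + 1)]

def variable_ratio(n_responses, mean_ratio=10):
    # Reinforce after an unpredictable number of responses, averaging `mean_ratio`
    # (the slot-machine case described above).
    events, next_payout = [], random.randint(1, 2 * mean_ratio - 1)
    for r in range(1, n_responses + 1):
        events.append(r == next_payout)
        if r == next_payout:
            next_payout = r + random.randint(1, 2 * mean_ratio - 1)
    return events

def fixed_interval(response_times, interval=30.0):
    # Reinforce the first response after each fixed interval has elapsed
    # (like the 30-second lab example or a weekly paycheck).
    events, next_available = [], interval
    for t in sorted(response_times):
        reinforced = t >= next_available
        if reinforced:
            next_available = t + interval
        events.append((t, reinforced))
    return events

def variable_interval(response_times, mean_interval=60.0):
    # Reinforce the first response after an unpredictable interval
    # (like a boss checking in at random times).
    events, next_available = [], random.uniform(0, 2 * mean_interval)
    for t in sorted(response_times):
        reinforced = t >= next_available
        if reinforced:
            next_available = t + random.uniform(0, 2 * mean_interval)
        events.append((t, reinforced))
    return events

print(fixed_ratio(12))                           # reinforced on the 5th and 10th presses
print(fixed_interval([10, 25, 40, 55, 70, 95]))  # reinforced at t=40 and t=70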

OBSERVATIONAL LEARNING:

Observational learning is learning that occurs by watching the behavior of others and then imitating, or modeling, that behavior. In his famous Bobo doll experiment, Albert Bandura demonstrated that young children would imitate the violent and aggressive actions of an adult model. In the experiment, children observed a film in which an adult repeatedly hit a large, inflatable Bobo doll. After viewing the film clip, children were allowed to play in a room with a real Bobo doll just like the one they saw in the film. What Bandura found was that children were more likely to imitate the adult's violent actions when the adult either received no consequences or was actually rewarded for their violent actions. Children who saw film clips in which the adult was punished for this aggressive behavior were less likely to repeat the behaviors later on.

Examples of Observational Learning in Action

• A child watches his mother folding the laundry. He later picks up some clothing and imitates folding the clothes.

• A young couple goes on a date to a Chinese restaurant. They watch other diners in the restaurant eating with chopsticks and copy their actions in order to learn how to use these utensils.

• A young boy watches another boy on the playground get in trouble for hitting another child. He learns from observing this interaction that he should not hit others.

• A group of children play hide and seek at recess. One child joins the group, but has never played before and is not sure what to do. After observing the other children play, she quickly learns the basic rules of the game and joins in.

Factors That Influence Observational Learning


According to Bandura's research, there are a number of factors that increase the likelihood that a
behavior will be imitated. We are more likely to imitate:

• People we perceive as warm and nurturing

• People who receive rewards for their behavior

• When we have been rewarded for imitating the behavior in the past

• When we lack confidence in our own knowledge or abilities

• People who are in a position of authority over our lives

• People who are similar to us in age, sex, and interests

• People whom we admire or who are of a higher social status

• When the situation is confusing, ambiguous, or unfamiliar
