
Learning

PowerPoint®
Presentation
by Jim Foley

© 2013 Worth Publishers


Overview: Topics in this Chapter

▪ Definitions
▪ Classical conditioning
▪ Operant conditioning
▪ Biological and cognitive components of learning
▪ Observational learning

What do we mean by "learning"?
Learning is the process of acquiring new and relatively enduring information or behaviors.
How does learning happen other
than through language/words?
We learn from experience:
1. when we learn to predict events we already like or don't like by noticing other events or sensations that happen first.
2. when our actions have consequences.
3. when we watch what other people do.

We learn by association:
1. when two stimuli (events or sensations) tend to occur together or in sequence.
2. when actions become associated with pleasant or aversive results.
3. when two pieces of information are linked.
Types of Learning
Classical conditioning: learning to link two stimuli in a way that helps us anticipate an event to which we have a reaction

Operant conditioning: changing behavior choices in response to consequences

Cognitive learning: acquiring new behaviors and information through observation and information, rather than by direct experience
Associative Learning:
Classical Conditioning

How it works: after repeated exposure to two stimuli occurring in sequence, we associate those stimuli with each other.
Result: our natural response to one stimulus now can be triggered by the new, predictive stimulus.

Example:
Stimulus 1: See lightning
Stimulus 2: Hear thunder
Here, our response to thunder becomes associated with lightning.

After repetition:
Stimulus: See lightning
Response: Cover ears to avoid sound
Associative Learning:
Operant Conditioning
▪ Child associates his “response” (behavior) with consequences.
▪ Child learns to repeat behaviors (saying “please”) which were
followed by desirable results (cookie).
▪ Child learns to avoid behaviors (yelling “gimme!”) which were
followed by undesirable results (scolding or loss of dessert).
Cognitive Learning
Cognitive learning refers to acquiring new behaviors
and information mentally, rather than by direct
experience.
Cognitive learning occurs:
1. by observing events and the behavior of others.
2. by using language to acquire information about
events experienced by others.
Behaviorism
▪ The term behaviorism was used by John B. Watson
(1878-1958), a proponent of classical conditioning,
as well as by B.F. Skinner (1904-1990), a leader in
research about operant conditioning.
▪ Both scientists believed that mental life was much less important than behavior as a foundation for psychological science.
▪ Both foresaw applications in controlling human
behavior:
Skinner conceived of
utopian communities.
Watson went into
advertising.
Ivan Pavlov’s Discovery
While studying salivation in
dogs, Ivan Pavlov found that
salivation from eating food
was eventually triggered by
what should have been
neutral stimuli such as:
▪ just seeing the food.
▪ seeing the dish.
▪ seeing the person who
brought the food.
▪ just hearing that person’s
footsteps.
Before Conditioning
Neutral stimulus:
a stimulus which does not trigger a response

Neutral stimulus (NS) → No response
Before Conditioning
Unconditioned stimulus and response:
a stimulus which triggers a response naturally,
before/without any conditioning

Unconditioned stimulus (US): yummy dog food → Unconditioned response (UR): dog salivates
During Conditioning
The bell/tone (N.S.) is repeatedly presented with
the food (U.S.).

Neutral stimulus (NS) + Unconditioned stimulus (US) → Unconditioned response (UR): dog salivates
After Conditioning
The dog begins to salivate upon hearing the tone
(neutral stimulus becomes conditioned stimulus).

Conditioned (formerly neutral) stimulus (CS) → Conditioned response (CR): dog salivates

Did you follow the changes?
The UR and the CR are the same response, triggered by different events. The difference is whether conditioning was necessary for the response to happen.
The NS and the CS are the same stimulus. The difference is whether the stimulus triggers the conditioned response.
Find the US, UR, NS, CS, CR in the following:

▪ Your romantic partner always uses the same shampoo. Soon, the smell of that shampoo makes you feel happy.

▪ The door to your house squeaks loudly when you open it. Soon, your dog begins wagging its tail when the door squeaks.

▪ The nurse says, "This won't hurt a bit," just before stabbing you with a needle. The next time you hear "This won't hurt," you cringe in fear.

▪ You have a meal at a fast food restaurant that causes food poisoning. The next time you see a sign for that restaurant, you feel nauseated.
Higher-Order Conditioning
▪ If the dog becomes conditioned to salivate at
the sound of a bell, can the dog be
conditioned to salivate when a light
flashes…by associating it with the BELL
instead of with food?
▪ Yes! The conditioned response can be
transferred from the US to a CS, then from
there to another CS.
▪ This is higher-order conditioning: turning a
NS into a CS by associating it with another
CS.
→ A man who was conditioned to associate joy with coffee could then learn to associate joy with a restaurant if he was served coffee there every time he walked into the restaurant.
Acquisition

Acquisition refers to the initial stage of learning/conditioning.

What gets "acquired"?
→ The association between a neutral stimulus (NS) and an unconditioned stimulus (US).

How can we tell that acquisition has occurred?
→ The UR now gets triggered by a CS (drooling now gets triggered by a bell).
Timing
For the association to be acquired,
the neutral stimulus (NS) needs to
repeatedly appear before the
unconditioned stimulus (US)…about a
half-second before, in most cases. The
bell must come right before the food.
Acquisition and Extinction
▪ The strength of a CR grows with conditioning.
▪ Extinction refers to the diminishing of a conditioned response. If
the US (food) stops appearing with the CS (bell), the CR decreases.
Spontaneous Recovery [Return of the CR]
After a CR (salivation) has been conditioned and then extinguished:
•following a rest period, presenting the tone alone might lead to a
spontaneous recovery (a return of the conditioned response despite a
lack of further conditioning).
•if the CS (tone) is again presented repeatedly without the US, the CR
becomes extinct again.
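A minimal numeric sketch (mine, not from the slides) of how these ideas are often illustrated: conditioned-response strength climbs across paired CS-US trials and decays across CS-alone trials. The simple learning-rate update and the 0.3 rate are illustrative assumptions, not Pavlov's data.

```python
# Toy model, for illustration only: CR strength moves toward 1.0 on trials
# where the US follows the CS (acquisition) and toward 0.0 on CS-alone
# trials (extinction). The update rule and rate are assumed, not measured.

def run_trials(strength, us_present, n_trials, rate=0.3):
    """Return updated CR strength plus its trial-by-trial history."""
    target = 1.0 if us_present else 0.0
    history = []
    for _ in range(n_trials):
        strength += rate * (target - strength)
        history.append(round(strength, 3))
    return strength, history

strength = 0.0
strength, acquisition = run_trials(strength, us_present=True, n_trials=10)   # bell + food
strength, extinction = run_trials(strength, us_present=False, n_trials=10)   # bell alone

print("Acquisition:", acquisition)  # climbs toward 1.0
print("Extinction: ", extinction)   # decays back toward 0.0
```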
Generalization and Discrimination
Please notice the narrow, psychological definitions.

Generalization: Ivan Pavlov conditioned dogs to drool when rubbed; they then also drooled when scratched. Generalization refers to the tendency to have conditioned responses triggered by related stimuli. MORE stuff makes you drool.

Discrimination: Ivan Pavlov conditioned dogs to drool at bells of a certain pitch; slightly different pitches did not trigger drooling. Discrimination refers to the learned ability to respond only to a specific stimulus, preventing generalization. LESS stuff makes you drool.
Ivan Pavlov’s Legacy
Insights about conditioning in general:
▪ It occurs in all creatures.
▪ It is related to biological drives and responses.

Insights about science:
▪ Learning can be studied objectively, by quantifying actions and isolating elements of behavior.

Insights from specific applications:
▪ Substance abuse involves conditioned triggers, and these triggers (certain places, events) can be avoided or associated with new responses.
John B. Watson and Classical
Conditioning: Playing with Fear
▪ In 1920, 9-month-old Little Albert was not afraid
of rats.
▪ John B. Watson and Rosalie Rayner then clanged
a steel bar every time a rat was presented to
Albert.
▪ Albert acquired a fear of rats, and generalized
this fear to other soft and furry things.
▪ Watson prided himself on his ability to shape people's emotions. He later went into advertising.
Little Albert Experiment
Before Conditioning

NS: rat → No fear
UCS: steel bar hit with hammer → Natural reflex: fear
Little Albert Experiment
During Conditioning

NS: rat + UCS: steel bar hit with hammer → Natural reflex: fear
Little Albert Experiment
After Conditioning

NS (now CS): rat → Conditioned reflex: fear
Operant Conditioning
Operant conditioning involves adjusting to the consequences of our behaviors, so we can easily learn to do more of what works, and less of what doesn't work. Examples:
▪ We may smile more at work after this repeatedly gets us bigger tips.
▪ We learn how to ride a bike using the strategies that don't make us crash.

How it works: an act of chosen behavior (a "response") is followed by a reward or punitive feedback from the environment.
Results:
▪ Reinforced behavior is more likely to be tried again.
▪ Punished behavior is less likely to be chosen in the future.

Example: Response: balancing a ball → Consequence: receiving food → Behavior strengthened
Operant and Classical Conditioning are
Different Forms of Associative Learning
Classical conditioning:
▪ involves respondent behavior: reflexive, automatic reactions such as fear or craving
▪ these reactions to unconditioned stimuli (US) become associated with neutral (then conditioned) stimuli

Operant conditioning:
▪ involves operant behavior: chosen behaviors which "operate" on the environment
▪ these behaviors become associated with consequences which punish (decrease) or reinforce (increase) the operant behavior

There is a contrast in the process of conditioning. In classical conditioning, the experimental (neutral) stimulus repeatedly precedes the respondent behavior, and eventually triggers that behavior. In operant conditioning, the experimental (consequence) stimulus repeatedly follows the operant behavior, and eventually punishes or reinforces that behavior.
Thorndike’s Law of Effect

Edward Thorndike placed cats in a puzzle box;


they were rewarded with food (and freedom)
when they solved the puzzle.
Thorndike noted that the cats took less time
to escape after repeated trials and rewards.
The law of effect states that behaviors
followed by favorable consequences become
more likely, and behaviors followed by
unfavorable consequences become less likely.
B.F. Skinner: Behavioral Control
B. F. Skinner saw potential for
exploring and using Edward
Thorndike’s principles much more
broadly. He wondered:
▪ how can we more carefully measure the effect of consequences on chosen behavior?
▪ what else can creatures be taught to do by controlling consequences?
▪ what happens when we change the timing of reinforcement?

B.F. Skinner trained pigeons to play ping pong, and to guide a video game missile.
B.F. Skinner: The Operant Chamber
▪ B. F. Skinner, like Ivan Pavlov, pioneered more controlled
methods of studying conditioning.
▪ The operant chamber, often called “the Skinner box,”
allowed detailed tracking of rates of behavior change in
response to different rates of reinforcement.
Figure: the operant chamber includes a recording device; a bar or lever that an animal presses, randomly at first, later for reward; and a food/water dispenser to provide the reward.
Reinforcement
▪ Reinforcement refers to any feedback from the environment that makes a behavior more likely to recur.
▪ Positive (adding) reinforcement: adding something desirable (e.g., warmth)
▪ Negative (taking away) reinforcement: ending something unpleasant (e.g., the cold)

Example: this meerkat has just completed a task out in the cold. For the meerkat, the warm light is desirable.
Shaping Behavior, such as teaching a baby to walk

Reinforcing Successive Approximations
When a creature is not likely to randomly perform exactly the behavior you are trying to teach, you can reward any behavior that comes close to the desired behavior.

Example: students could smile and nod more when the instructor moves left, until the instructor stays pinned to the left wall.
A cycle of mutual
reinforcement
Children who have a temper tantrum
when they are frustrated may get
positively reinforced for this behavior
when parents occasionally respond by
giving in to a child’s demands.
Result: stronger, more frequent
tantrums
Parents who occasionally give in to
tantrums may get negatively
reinforced when the child responds by
ending the tantrum.
Result: parents giving-in behavior
is strengthened (giving in sooner
and more often)
Discrimination
▪ Discrimination refers to the ability
to become more and more specific
in what situations trigger a
response.
▪ Shaping can increase
discrimination, if reinforcement
only comes for certain
discriminative stimuli.
▪ For example, dogs, rats, and even spiders can be trained to search for very specific smells, from drugs to explosives (e.g., a bomb-finding rat).
▪ Pigeons, seals, and manatees have been trained to respond to specific shapes, colors, and categories (e.g., a manatee that selects shapes).
Why we might
work for money
▪ If we repeatedly introduce a
neutral stimulus before a
reinforcer, this stimulus acquires
the power to be used as a
reinforcer.
▪ A primary reinforcer is a stimulus
that meets a basic need or
otherwise is intrinsically desirable,
such as food, sex, fun, attention,
or power.
▪ A secondary/conditioned
reinforcer is a stimulus, such as a
rectangle of paper with numbers
on it (money) which has become
associated with a primary
reinforcer (money buys food,
builds power).
A Human Talent:
Responding to Delayed Reinforcers
▪ If you give a dog a treat ten minutes after
they did a trick, you’ll be reinforcing
whatever they did right before the treat
(sniffing?). Dogs respond to immediate
reinforcement.
▪ Humans have the ability to link a
consequence to a behavior even if they
aren’t linked sequentially in time. The
piece of paper (money) can be a delayed
reinforcer, paid a month later, yet still
reinforcing if we link it to our
performance.
▪ Delaying gratification, a skill related to
impulse control, enables longer-term goal
setting.
How often should we reinforce?

▪ Do we need to give a reward every single time? Or is that even best?
▪ B.F. Skinner experimented with the effects of giving
reinforcements in different patterns or “schedules”
to determine what worked best to establish and
maintain a target behavior.
▪ In continuous reinforcement (giving a reward after
the target every single time), the subject acquires the
desired behavior quickly.
▪ In partial/intermittent reinforcement (giving
rewards part of the time), the target behavior takes
longer to be acquired/established but persists longer
without reward.
Different Schedules of
Partial/Intermittent Reinforcement
We may schedule our reinforcements based on an interval of time that has gone by:
▪ Fixed interval schedule: reward every hour
▪ Variable interval schedule: reward after a changing/random amount of time passes

We may plan for a certain ratio of rewards per number of instances of the desired behavior:
▪ Fixed ratio schedule: reward every five targeted behaviors
▪ Variable ratio schedule: reward after a randomly chosen instance of the target behavior
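As a rough sketch (not part of the original slides), the four partial-reinforcement schedules can be written as rules for when a response earns a reward. The specific parameter values (every 5 responses, roughly every 60 time units) are made-up examples.

```python
# Illustrative sketch of the four partial-reinforcement schedules.
# Parameter values are arbitrary examples, not experimental settings.
import random

class FixedRatio:
    """Reward every n-th response (e.g., food after every five lever presses)."""
    def __init__(self, n=5):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        return self.count % self.n == 0

class VariableRatio:
    """Reward an unpredictable response; on average 1 in n pays off (slot machine)."""
    def __init__(self, n=5):
        self.n = n
    def respond(self):
        return random.random() < 1.0 / self.n

class FixedInterval:
    """Reward the first response after a fixed amount of time (e.g., a weekly paycheck)."""
    def __init__(self, interval=60):
        self.interval, self.last_reward_time = interval, 0
    def respond(self, now):
        if now - self.last_reward_time >= self.interval:
            self.last_reward_time = now
            return True
        return False

class VariableInterval:
    """Reward the first response after an unpredictable amount of time (waiting for a text)."""
    def __init__(self, mean_interval=60):
        self.mean = mean_interval
        self.next_reward_time = random.expovariate(1.0 / self.mean)
    def respond(self, now):
        if now >= self.next_reward_time:
            self.next_reward_time = now + random.expovariate(1.0 / self.mean)
            return True
        return False
```

For instance, FixedRatio().respond() returns True on every fifth call, while VariableRatio().respond() returns True unpredictably, about once every five calls on average.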
Which Schedule of Reinforcement is This?
Ratio or Interval?
Fixed or Variable?
1. Rat gets food every third time it presses the lever FR
2. Getting paid weekly no matter how much work is done FI
3. Getting paid for every ten boxes you make FR
4. Hitting a jackpot sometimes on the slot machine VR
5. Winning sometimes on the lottery you play once a day VI/VR
6. Checking cell phone all day; sometimes getting a text VI
7. Buy eight pizzas, get the next one free FR
8. Fundraiser averages one donation for every eight houses VR
visited
9. Kid has tantrum, parents sometimes give in VR
10. Repeatedly checking mail until paycheck arrives FI
Results of the different schedules of reinforcement
Which reinforcements produce more "responding" (more target behavior)?

▪ Fixed interval: slow, unsustained responding, with rapid responding near the time for reinforcement. "If I'm only paid for my Saturday work, I'm not going to work as hard on the other days."
▪ Variable interval: slow, consistent, steady responding. "If I never know which day my lucky lottery number will pay off, I better play it every day."
Effectiveness of the Ratio Schedules of Reinforcement

▪ Fixed ratio: high rate of responding. "Buy two drinks, get one free? I'll buy a lot of them!"
▪ Variable ratio: high, consistent responding, even if reinforcement stops (resists extinction). "If the slot machine sometimes pays, I'll pull the lever as many times as possible because it may pay this time!"
Operant Effect: Punishment
Punishments have the opposite effects of reinforcement.
These consequences make the target behavior less likely
to occur in the future.
▪ Positive punishment (+): you ADD something unpleasant/aversive (ex: spank the child).
▪ Negative punishment (–): you TAKE AWAY something pleasant/desired (ex: no TV time, no attention); MINUS is the "negative" here.

→ Positive does not mean "good" or "desirable," and negative does not mean "bad" or "undesirable."
When is punishment
effective?
▪ Punishment works best in natural
settings when we encounter
punishing consequences from
actions such as reaching into a fire;
in that case, operant conditioning
helps us to avoid dangers.
▪ Punishment is also effective when we try to artificially create punishing consequences for others' choices; these work best when the consequences happen as they do in nature.
→Severity of punishments is not
as helpful as making the
punishments immediate and
certain.
Applying operant conditioning to parenting
Problems with Physical Punishment
▪ Punished behaviors may restart when
the punishment is over; learning is not
lasting.
▪ Instead of learning behaviors, the child
may learn to discriminate among
situations, and avoid those in which
punishment might occur.
▪ Instead of behaviors, the child might
learn an attitude of fear or hatred,
which can interfere with learning. This
can generalize to a fear/hatred of all
adults or many settings.
▪ Physical punishment models aggression
and control as a method of dealing
with problems.
Don’t think about the beach

Don’t think about the waves, the


sand, the towels and sunscreen,
the sailboats and surfboards.
Don’t think about the beach.
Are you obeying the
instruction? Would you obey
this instruction more if you
were punished for thinking
about the beach?
Problem:
Punishing focuses on what NOT to do, which does not
guide people to a desired behavior.
▪ Even if undesirable behaviors do stop, another
problem behavior may emerge that serves the same
purpose, especially if no replacement behaviors are
taught and reinforced.

Lesson:
In order to teach desired
behavior, reinforce what’s
right more often than
punishing what’s wrong.
More effective forms of operant conditioning
The Power of Rephrasing
▪ Positive punishment: “You’re
playing video games instead of
practicing the piano, so I am
justified in YELLING at you.”
▪ Negative punishment: “You’re
avoiding practicing, so I’m turning
off your game.”
▪ Negative reinforcement: “I will
stop staring at you and bugging
you as soon as I see that you are
practicing.”
▪ Positive reinforcement: “After
you practice, we’ll play a game!”
Summary: Types of Consequences
Strengthens target behavior (you do chores):
▪ Positive (+) Reinforcement: a stimulus is added (you get candy)
▪ Negative (–) Reinforcement: a stimulus is subtracted (I stop yelling)

Reduces target behavior (cursing):
▪ Positive (+) Punishment: a stimulus is added (you get spanked)
▪ Negative (–) Punishment: a stimulus is subtracted (no cell phone)

→ Reinforcement uses desirable stimuli (candy) or the removal of unpleasant stimuli (yelling stops); punishment uses unpleasant stimuli (spanking) or the removal of desirable stimuli (cell phone).
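A tiny sketch (mine, not the authors') showing that the four labels in this summary fall out of two yes/no questions: is a stimulus added or taken away, and does the behavior become more or less likely?

```python
# Illustration of the 2x2 summary of consequences. The function and its
# examples are assumptions for teaching purposes, not part of the slides.

def consequence_type(stimulus_added: bool, behavior_strengthened: bool) -> str:
    kind = "reinforcement" if behavior_strengthened else "punishment"
    sign = "positive" if stimulus_added else "negative"
    return f"{sign} {kind}"

print(consequence_type(True, True))    # positive reinforcement  (you get candy)
print(consequence_type(False, True))   # negative reinforcement  (I stop yelling)
print(consequence_type(True, False))   # positive punishment     (you get spanked)
print(consequence_type(False, False))  # negative punishment     (no cell phone)
```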
B.F. Skinner’s
Legacy
B.F. Skinner’s View Critique
▪ The way to modify behavior is ▪ This leaves out the value of
through consequences. instruction and modeling.
▪ Behavior is influenced only by ▪ Adult humans have the ability
external feedback, not by to use thinking to make choices
thoughts and feelings. and plans
▪ We should intentionally create ▪ Natural consequences are more
consequences to shape the justifiable than manipulation of
behavior of others. others.
▪ Humanity improves through ▪ Humanity improves through
conscious reinforcement of free choice guided by wisdom,
positive behavior and the conscience, and responsibility.
punishment of bad behavior.
Applications of Operant Conditioning

School: long before tablet computers, B.F. Skinner proposed machines that would reinforce students for correct responses, allowing students to improve at different rates and work on different learning goals.

Sports: athletes improve most in the shaping approach, in which they are reinforced for performance that comes closer and closer to the target skill (e.g., hitting pitches that are progressively faster).

Work: some companies make pay a function of performance or company profit rather than seniority; they target more specific behaviors to reinforce.
More Operant Conditioning Applications
Parenting
1. Rewarding small improvements toward desired behaviors works
better than expecting complete success, and also works better
than punishing problem behaviors.
2. Giving in to temper tantrums stops them in the short run but
increases them in the long run.
Self-Improvement
Reward yourself for steps you
take toward your goals. As you
establish good habits, then
make your rewards more
infrequent (intermittent).
Contrasting Types of Conditioning
Basic idea: Classical conditioning associates events/stimuli with each other (the organism associates events); operant conditioning associates chosen behaviors with resulting events.
Response: Classical involves involuntary, automatic reactions such as salivating; operant involves voluntary actions "operating" on our environment.
Acquisition: In classical conditioning, the NS is linked to the US by repeatedly presenting the NS before the US; in operant conditioning, behavior is associated with punishment or reinforcement.
Extinction: The CR decreases when the CS is repeatedly presented alone; the target behavior decreases when reinforcement stops.
Spontaneous recovery: The extinguished CR starts again after a rest period (no CS); the extinguished response starts again after a rest (no reward).
Generalization: The CR is triggered by stimuli similar to the CS; responses occur that are similar to the reinforced behavior.
Discrimination: Distinguishing between a CS and an NS not linked to the US; distinguishing what will get reinforced and what will not.
Operant vs. Classical Conditioning

If the organism is learning associations between its behavior and the resulting events, it is...
operant conditioning.

If the organism is learning associations between events that it does not control, it is...
classical conditioning.
Role of Biology in Conditioning
Classical Conditioning
▪ John Garcia and others found it was easier
to learn associations that make sense for
survival.
▪ Food aversions can be acquired even if the
UR (nausea) does NOT immediately follow
the NS. When acquiring food aversions
during pregnancy or illness, the body
associates nausea with whatever food was
eaten.
▪ Males in one study were more likely to see
a pictured woman as attractive if the
picture had a red border.
▪ Quail can have a sexual response linked to a
fake quail more readily and strongly than to
a red light.
Role of Biology in Conditioning

Operant Conditioning
▪ Can a monkey be trained to peck with
its nose? No, but a pigeon can.
▪ Can a pigeon be trained to dive
underwater? No, but a dolphin can.
▪ Operant conditioning encounters
biological tendencies and limits that
are difficult to override.
▪ What can we most easily train a dog to
do based on natural tendencies?
▪ detecting scents?
▪ climbing and balancing?
▪ putting on clothes?
Cognitive Processes
In classical conditioning:
▪ When the dog salivates at the bell, it may be due to cognition (learning to predict, even expect, the food).
▪ Conditioned responses can alter attitudes, even when we know the change is caused by conditioning.
▪ However, knowing that our reactions are caused by conditioning gives us the option of mentally breaking the association, e.g. deciding that nausea associated with a food aversion was actually caused by an illness.
▪ Higher-order conditioning involves some cognition; the name of a food may trigger salivation.

In operant conditioning:
▪ In fixed-interval reinforcement, animals do more target behaviors/responses around the time that the reward is more likely, as if expecting the reward.
▪ Expectation as a cognitive skill is even more evident in the ability of humans to respond to delayed reinforcers such as a paycheck.
▪ Higher-order conditioning can be enabled by cognition; e.g., seeing something such as money as a reward because of its indirect value.
▪ Humans can set behavioral goals for self and others, and plan their own reinforcers.
Latent Learning
▪ Rats appear to form cognitive
maps. They can learn a maze just
by wandering, with no cheese to
reinforce their learning.
▪ Evidence of these maps is revealed
once the cheese is placed
somewhere in the maze. After only
a few trials, these rats quickly catch
up in maze-solving to rats who
were rewarded with cheese all
along.
▪ Latent learning refers to skills or
knowledge gained from experience,
but not apparent in behavior until
rewards are given.
Learning, Rewards, and Motivation
▪ Intrinsic motivation refers to the desire to perform a behavior well for its own sake. The reward is internalized as a feeling of satisfaction.
▪ Extrinsic motivation refers to doing a behavior to receive rewards from others.
▪ Intrinsic motivation can sometimes be reduced by external rewards, and can be prevented by using continuous reinforcement.
▪ One principle for maintaining behavior is to use as few rewards as possible, and fade the rewards over time.

What might happen if we begin to reward a behavior someone was already doing and enjoying?
Summary of factors affecting learning
Learning by Observation
▪ Can we, like the rats exploring the maze with no reward,
learn new behaviors and skills without a direct experience of
conditioning?
▪ Yes, and one of the ways we do so is by observational
learning: watching what happens when other people do a
behavior and learning from their experience.
▪ Skills required: mirroring, being able to picture ourselves
doing the same action, and cognition, noticing consequences
and associations.
Observational Learning Processes
Modeling: the behavior of others serves as a model, an example of how to respond to a situation; we may try this model regardless of reinforcement.

Vicarious Conditioning:
▪ Vicarious: experienced indirectly, through others.
▪ Vicarious reinforcement and punishment mean our choices are affected as we see others get consequences for their behaviors.
Albert Bandura’s Bobo Doll Experiment (1961)
▪ Kids saw adults punching an inflated doll while narrating
their aggressive behaviors such as “kick him.”
▪ These kids were then put in a toy-deprived situation…
and acted out the same behaviors they had seen.
Mirroring in the Brain
▪ When we watch others doing or feeling something,
neurons fire in patterns that would fire if we were
doing the action or having the feeling ourselves.
▪ These neurons are referred to as mirror neurons,
and they fire only to reflect the actions or feelings of
others.
From Mirroring to Imitation
▪ Humans are prone to spontaneous imitation of both
behaviors and emotions (“emotional contagion”).
▪ This includes even overimitating, that is, copying adult
behaviors that have no function and no reward.
▪ Children with autism are less likely to cognitively "mirror," and less likely to follow someone else's gaze, as a neurotypical toddler does.
Mirroring Plus Vicarious Reinforcement
▪ Mirroring enables observational learning; we cognitively
practice a behavior just by watching it.
▪ If you combine this with vicarious reinforcement, we are
even more likely to get imitation.
▪ Monkey A saw Monkey B getting a banana after pressing
four symbols. Monkey A then pressed the same four symbols
(even though the symbols were in different locations).
Prosocial Effects of Observational Learning
▪ Prosocial behavior
refers to actions
which benefit others,
contribute value to
groups, and follow
moral codes and
social norms.
▪ Parents try to teach
this behavior through
lectures, but it may
be taught best
through modeling…
especially if kids can
see the benefits of
the behavior to
oneself or others.
Antisocial Effects of Observational Learning
▪ What happens when we learn
from models who demonstrate
antisocial behavior, actions that
are harmful to individuals and
society?
▪ Children who witness violence in
their homes, but are not physically
harmed themselves, may hate
violence but still may become
violent more often than the
average child.
▪ Perhaps this is a result of “the
Bobo doll effect”? Under stress,
we do what has been modeled for
us.
Media Models of Violence

Do we learn antisocial behavior such as violence from indirect observations of others in the media?

▪ Research shows that viewing media violence leads to increased aggression (fights) and reduced prosocial behavior (such as helping an injured person).
▪ This violence-viewing effect might be explained by imitation, and also by desensitization toward pain in others.
Summary
▪ Classical conditioning: Ivan Pavlov’s salivating dogs
▪ New triggers for automatic responses
▪ Operant conditioning: B.F. Skinner’s boxes and his
pecking pigeons
▪ Consequences influencing chosen behaviors
▪ Biological components: constraints, neurons
▪ Observational learning: Albert Bandura’s Bobo
dolls, mirroring, prosocial and antisocial modeling
