How Digital Technology Shapes Us


Topic: Science | Subtopic: Neuroscience & Psychology

Learn how to understand and better utilize the technologies that are rapidly transforming life in the 21st century.

Course Guidebook

Professor Indre Viskontas
University of San Francisco; San Francisco Conservatory of Music

“Pure intellectual stimulation that can be popped into the [audio or video player] anytime.” —Harvard Magazine

“Passionate, erudite, living legend lecturers. Academia’s best lecturers are being captured on tape.” —The Los Angeles Times

“A serious force in American education.” —The Wall Street Journal

Indre Viskontas is an Assistant Professor of Psychology at the University of San Francisco and a Professor of Humanities and Sciences at the San Francisco Conservatory of Music. She holds a PhD in Cognitive Neuroscience from the University of California, Los Angeles. Professor Viskontas has published more than 50 original papers and book chapters related to the neural basis of memory, reasoning, and creativity in top scientific journals. A sought-after science communicator, she cocreated and hosts the popular science podcast Inquiring Minds.

THE GREAT COURSES ®


Corporate Headquarters
4840 Westfields Boulevard, Suite 500
Chantilly, VA 20151-2299
USA

Phone: 1-800-832-2412 | Fax: 703-378-3819
www.thegreatcourses.com

Cover Image: © peterschreiber.media/iStock/Getty Images Plus.

Course No. 9764 © 2020 The Teaching Company. PB9764A

LEADERSHIP
PAUL SUIJK President & CEO
BRUCE G. WILLIS Chief Financial Officer
JOSEPH PECKL SVP, Marketing
JASON SMIGEL VP, Product Development
CALE PRITCHETT VP, Marketing
MARK LEONARD VP, Technology Services
DEBRA STORMS VP, General Counsel
KEVIN MANZEL Sr. Director, Content Development
ANDREAS BURGSTALLER Sr. Director, Brand Marketing & Innovation
KEVIN BARNHILL Director of Creative
GAIL GLEESON Director, Business Operations & Planning

PRODUCTION TEAM
ALI FELIX Producer
JAMES BLANDFORD Post-Production Producer
GINA DALFONZO Content Developers
BRANDON HOPKINS
SAM BARDLEY Associate Producer
TRISA BARNHILL Graphic Artists
PETER DWYER
OWEN YOUNG Managing Editor
MILES MCNAMEE Sr. Editor
MAHER AHMED Editor
CHARLES GRAHAM Assistant Editor
CHRIS HOOTH Audio Engineer
RICK FLOWE Camera Operators
VALERIE WELCH
JIM M. ALLEN Production Assistants
RICK FLOWE
ROBERTO DE MORAES Director

PUBLICATIONS TEAM
FARHAD HOSSAIN Publications Manager
BLAKELY SWAIN Sr. Copywriter
RHOCHELLE MUNSAYAC Graphic Designer
JESSICA MULLINS Proofreader
ERIKA ROBERTS Publications Assistant
ELIZABETH BURNS Fact-Checker
WILLIAM DOMANSKI Transcript Editor

Copyright © The Teaching Company, 2020


Printed in the United States of America
This book is in copyright. All rights reserved. Without limiting the rights under copyright reserved
above, no part of this publication may be reproduced, stored in or introduced into a retrieval
system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying,
recording, or otherwise), without the prior written permission of The Teaching Company.
Indre Viskontas, PhD
Assistant Professor of Psychology,
University of San Francisco
Professor of Humanities and Sciences,
San Francisco Conservatory of Music

Indre Viskontas is an Assistant Professor of Psychology at the
University of San Francisco (USF)
and a Professor of Humanities
and Sciences at the San Francisco
Conservatory of Music (SFCM).
She received her bachelor of
science degree with a specialty in
Psychology and a minor in French
Literature from Trinity College in
the University of Toronto. Professor
Viskontas also holds a master of
music degree in Vocal Performance
from SFCM. She completed her
doctorate in Cognitive
Neuroscience at the University of
California, Los Angeles (UCLA),
where she studied the neural basis of memory and reasoning. Her postdoctoral
work at the University of California, San Francisco, explored the paradoxical
facilitation of creativity in patients with neurodegenerative diseases.

Professor Viskontas’s research is characterized by innovation and a focus on the big questions in neuroscience: How do brain cells code memory? What
brain changes foster creativity? How can neuroscience help us train musicians
more effectively? Defying traditional career boundaries, Professor Viskontas
spends much of her time performing in and directing operas, with favorite
roles including both Susanna and the Countess in Mozart’s Le Nozze di
Figaro, the title role in Floyd’s Susannah, Micaëla in Bizet’s Carmen, Musetta in La Bohème, and Beth in Adamo’s Little Women. She directed Missy Mazzoli’s opera Proving Up and Michael Nyman’s The Man Who Mistook
His Wife for a Hat, which is based on a case study by her mentor and friend,
Oliver Sacks. She often works with composers and has created roles in 3
contemporary operas. She is also the creative director of Pasadena Opera.

Professor Viskontas’s dissertation was recognized as the best of her class. She has also been the recipient of numerous fellowships, including a
4-year Julie Payette-NSERC Research Scholarship (awarded to the top 10
Canadian graduate students in the life sciences), the Dr. Ursula Mandel
Scholarship, a UCLA dissertation fellowship, the Charles E. and Sue K.
Young Award for the top graduate students at UCLA, a McBean Family
Foundation Fellowship, and the prestigious Laird Cermak Award from the
Memory Disorders Research Society. Professor Viskontas also received the
Distinguished Teaching Assistant Award at UCLA and served as a teaching
assistant consultant in the Department of Psychology. In her first term at
USF, her students chose her to be the professor of the month. She has also
received several grants from the Germanacos Foundation for her work on
music and the brain.

Professor Viskontas has published more than 50 original papers and book
chapters related to the neural basis of memory, reasoning, and creativity in
top scientific journals, such as American Scientist, Proceedings of the National
Academy of Sciences, The Journal of Neuroscience, Neuropsychologia, Current
Opinion in Neurology, and Nature Clinical Practice. Her work was featured
in Oliver Sacks’s book Musicophilia: Tales of Music and the Brain as well
as in other publications, such as Nautilus, Nature, and Discover. She is a
sought-after science communicator who cocreated and hosts the popular
science podcast Inquiring Minds, which has been downloaded more than 8
million times. Professor Viskontas cohosted the 6-episode docuseries Miracle
Detectives on the Oprah Winfrey Network and has appeared on The Oprah
Winfrey Show, PBS NewsHour, NPR’s City Arts & Lectures, the TED Radio
Hour, and CBC Radio’s Sunday Edition. She also regularly gives keynote talks
for conferences and organizations as diverse as Ogilvy & Mather, Genentech,
TEDx, and the Dallas Symphony Orchestra.

Professor Viskontas’s other Great Courses include 12 Essential Scientific Concepts and Brain Myths Exploded: Lessons from Neuroscience.

Table of Contents

Introduction
Professor Biography
Course Scope

Lessons
1 How Experience Alters the Brain
2 Are New Media Shortening Attention Spans?
3 Does the Internet Make Us Shallow Thinkers?
4 Outsourcing Our Memory
5 Human versus Digital Content Curators
6 Virtual Realities and Our Sense of Self
7 Screen Time’s Impact on Kids
8 Video Games and Violence
9 Is Digital Technology Ruining Sleep?
10 How “Dr. Google” Is Changing Medicine
11 The Virtual Therapist
12 How Big Data Can Predict the Future
13 Is Privacy Dead in the Information Age?
14 The Emotional Effects of Social Media
15 How Online Dating Transforms Relationships
16 Technology and Addiction
17 Is the Internet Hurting Democracy?
18 The Arts in the Digital Era
19 How AI Can Enhance Creativity
20 Do We Trust Algorithms over Humans?
21 Could Blockchain Revolutionize Society?
22 Effects of Technological Metaphors on Science
23 Robots and the Future of Work
24 Redefining What It Means to Be Human

Supplementary Material
Bibliography
Image Credits

Course Scope

This course explores the myriad ways that our interactions with technology are shaping or have the potential to shape our thoughts, feelings, and social interactions. The course starts with topics directly related to how we think, including whether our attention spans are dwindling, how reading on screens is different from reading a physical book, and what the accessibility of information is doing to our memory.

Next, you’ll learn about how current technologies are limiting our options
and pushing us toward similar content rather than encouraging us to explore
uncharted waters. You’ll discover what virtual reality can tell us about
ourselves and how screen time is affecting our children. And you’ll consider
the controversial question of whether playing violent video games leads to
aggressive behavior and what effect technology is having on our ability to
sleep. But sleep isn’t the only aspect of our health that might be affected by
technology; you’ll also examine how the internet is changing our relationships
with our physicians and therapists.

Then, the course will move away from a focus on our own selves and toward
a view of how technology is changing how we as a society behave. You’ll
consider the implications of selling our data to large tech companies, which
many of us do every day as we use “free” social media apps, among other
tools. You’ll also be prompted to think deeply about the issues of privacy that
these acts bring up.


The course will then move more directly to how technology is shaping our
relationships, from online dating to pornography and our political groupings.
You’ll consider how smart computers—artificial intelligence—might make
us more creative. Finally, you’ll look to the future, examining how emergent
technologies like blockchain, self-driving cars, and other innovations are likely
to affect how we spend our time and even what it will ultimately mean to be
human in the digital age.

Lesson 1: How Experience Alters the Brain

Though much of who we are is written into our genes, how these genes are expressed depends on the environment in which they act. This is no less true for our brains. The very essence of biology—of being alive—means that we are open to adaptation and change. That’s what separates a physical machine from a biological being. Thus, as our environment changes, so do we, and so do our brains.

Stimulating and Shaping the Brain

● Your brain requires stimulation to function—to be you. This law applies to every brain cell in your head: Without another cell to talk to, it dies. That fact is quite poetic: It underscores the importance of connections, all the way down to the cellular level. Because the brain is an active organ, its essence is in its activity, in the exchange of electricity and chemicals between cells. Take away that activity and you remove the mind from the brain.

● But we can change that activity, and even the very anatomy of the brain,
by our behavior—by our experience.

● Anything we do for a significant part of our life changes our brains. The adaptive nature of our brains, after all, is arguably our greatest asset as human beings. So if we went from not having access to the internet to browsing it for hours a day, it would certainly leave its mark on our brains, just like spending hours a day deliberately practicing the piano reshapes the parts of our brain that control our fingers and hands and enable us to hear, and even, with enough practice, affects many other regions and circuits.

Your brain is a repository of your experiences.

● Does technology shape how we think? Like the brain itself, the answer to
this question is complicated. Our brain changes our behavior—but our
behavior can also change our brain. The question is no longer whether
technology is shaping our minds, but how. And that depends on how we
use the technology.

Lecture 24 of the Great Course Brain Myths Exploded: Lessons from Neuroscience busts the myth that technology makes us stupid. In fact, in many ways, we’ve actually become smarter as a result of using technology.

● It seems that every 5 years, a new technological innovation changes society in profound and unexpected ways. Futurist Roy Amara famously noted that we tend to overestimate the effect of a technology in the short run and underestimate its effect in the long run. And now we’re in the midst of a technological revolution, in which digital technologies are changing how we access and use information—the very domain of the brain.

● With the whole internet at our fingertips, why should we commit information to memory? After all, our brain is a biological organ, with all the imperfections that come with organic materials. Digital information doesn’t change with time or context but is perfectly rendered every time we access it, whereas this is not at all the case with our brains.

● This raises the question, If computers are better at storing information and retrieving it perfectly every time, what’s the best use of our brains? This course explores how technology is shaping our inner mental lives but also how we might use it most effectively to capitalize on what our brains are best at, outsourcing the things we’re not so good at to increasingly powerful computers and other technological innovations.

How the Brain Works

● Arguably the first technological revolution occurred when single-celled organisms developed ways of exchanging information. One bacterium, for example, released a chemical compound, and another received it, using a special receptor that matched the shape of the chemical, like a lock matches a key. Then, that key unlocked other signals within the receiving bacterium that changed its behavior, making it more likely to invade a host, for instance, or to avoid one.

● This type of signaling can change how a cell expresses its genes. In your
gut, signaling between bacteria can lead them to work as a group—a
collective—rather than as individuals. Under the right circumstances,
bacteria in your gut express genes that make them stick to the mucus
lining more easily, helping them proliferate and protecting you from other,
less desirable microorganisms.

● Neurons, which are just evolved versions of single-celled organisms, also send signals to each other that coordinate their behavior and give rise to every one of your thoughts. Signals can be chemical, electrical, or both, as chemical signals trigger a cascade of voltage changes in the cell.

● And how easily these signals are sent or received, and what effect they have
on downstream cells, changes with use. Imagine a front door lock. When
you first get a new key, it’s often harder to open the door; the key gets a little
stuck and you need to wear it down a bit until it can slide into the lock more
smoothly. In your brain, a new signal is also harder for a cell to accept, and
with repeated signaling, the cascade of voltage changes becomes smoother.
In other words, cells that fire together wire together, to use the maxim that summarizes Donald Hebb’s influential theory of neural networks.

● But imagine now that you’ve used that lock-and-key combination tens of
thousands of times. Eventually, the key or the locking mechanism gets
worn down. It doesn’t work as well. Now you have to add some jiggling or
force. Eventually, cells habituate to stimulation and are less sensitive to it.

● Experience, then, can be written into the biology of cells by turning on genes that change the number and types of receptors a cell has, or how quickly the electrical signal is generated, or which chemicals the cell releases when activated. In doing so, experience changes the wiring of the brain, the relative sizes of different brain regions (as cells grow more spines or projections, like the leaves and branches on a tree), and the synchronization of activity in ensembles of neurons.

● There are mammals who can grow new neurons in adulthood, in regions
of the brain that drive new learning. Whether adult humans grow new
neurons is still a scientific puzzle, but if we do grow new neurons, one
thing is clear: In order to keep those baby cells, you need to learn a new
skill. This is because neurons in the brain must communicate with each
other; socially isolated cells die more quickly, just like their human hosts.
And learning new skills forges new connections, incorporating the new
cells into the already-active mind.

The Mind-Brain Connection

● While the brain is just a collection of cells, albeit a highly organized one,
the mind emerges from the activity of those cells—what they do. We’re
still in the early stages of deciphering exactly how neuronal and other brain
cell activity enables thinking, but what we do know is that we can see
traces of experience and behavior at many different levels of analysis, from
the number and kind of receptors on a cell’s surface to the oscillations of
millions of cells across the brain.

● For example, experience can change the firing rates of cells: how quickly and in what kind of pattern they send out their electrical messages. It can also change the synchronicity of activity across millions of cells, what we call brain waves. And it can change which brain regions show a surge in activity, which is what many functional MRI studies measure, using blood flow as a proxy for brain activity.

● The results from these studies are often depicted in the media as images
of a brain lighting up in certain places, which is not an entirely accurate
representation, since most of these studies are showing where activity is
greater or more intense, not where it is present or absent. The truth is that
even when you’re just lying in the scanner, ostensibly doing nothing, your
brain is working—maybe thinking about your to-do list or feeling slightly
anxious about all the loud noises. And as long as you’re alive, your brain
is active.

● Experience can even change how these brain regions interact with each other when we’re really not doing anything specific—what we call the default mode network, which has become a hot area of study as we’ve realized that lying in the scanner, waiting to do something, is a fertile time for thinking and therefore brain activity.

● And every one of these changes is susceptible, to a larger or smaller extent, to alterations that result from our use of technology.

● That isn’t to say, though, that our brains are limitless in their ability to
adapt or change. There are strict biological limitations to neuroplasticity—
the term used to describe anatomical changes that happen with experience.
That’s why it’s harder to learn a new language, and to think in it, when
you’re 25 years old compared to when you’re 5. Neuroplasticity slows down
as our brains fully develop, and that’s good news, since we want to retain
what we’ve worked hard to learn.

● But our access to new experiences also slows down. When you’re 5, every day you have the potential to discover something new: to taste a new food, play a new game, feel a new emotion, or connect with a new person. Can you say the same when you’re 55? Our lives become more routinized as we get older, and many of us become less interested in, or even annoyed by, new experiences. We gravitate toward the familiar, so our brains have fewer opportunities to change.

● But then comes a revolutionary device like the smartphone. With the
internet and millions of apps at our fingertips, we no longer have to endure
long hours of waiting with no entertainment. It also means that we carry
temptation around with us all day long. Since the ways in which we use
our brains on a day-to-day basis are changing, our brains must change,
too. That’s both a blessing and a curse. The fact that our species has been
able to colonize much of this world, and even survive off of it, speaks to
the adaptability of our brains.

● It might take more effort or greater openness to new experiences to see significant changes to our neuroanatomy as we get older, but those changes will happen if we change our behavior, whether we like it or not.

● The mind-brain connection is a 2-way street: The neurophysiology and anatomy of our brains give rise to our minds, and our minds can shape the neurophysiology and anatomy of our brains.

● As technology changes our behavior, it also changes our brains. That’s why this course focuses on behavioral changes that have accompanied technological innovations and their ultimate effects on our minds.

Questions to Consider

1 How long does it take for our brains to change?

2 In what ways does the adult brain change?

3 What are the key drivers of changes in the brain?

Lesson 2: Are New Media Shortening Attention Spans?
Technological advances have brought to our fingertips an infinite number of ways to distract ourselves, and what we pay attention to shapes how we live our lives. But we often sacrifice our attention in exchange for seemingly free entertainment in the form of social media, streaming videos, and internet articles. Is technology eroding our ability to focus? And if so, can we retrain our brains to think deeply, or is there some bigger benefit to skimming the surface?

Teachers claim that students’ attention spans have drastically diminished in the last decade, and even bastions of traditional media like The New York Times now provide daily digests and 90-second videos for people who don’t take the time to read entire articles.

Attention and Focus

● The term paying attention suggests that attention is a limited resource, like cash. And indeed it is. The more you focus on one sensation, like your grumbling stomach, the harder it is to focus on something else, like what the person you’re talking to is saying. But it’s not all or nothing. To a certain extent, you can pay some attention to driving, for example, and some attention to a conversation with a passenger and still arrive at your destination safely.

● Now if you suddenly started feeling the symptoms of appendicitis, or a truck began to swerve into your lane, you could quickly shift your attention to where it’s needed more urgently. In order to be able to do that, your brain must have been paying some attention to your surroundings or your internal state without you being fully aware of it.


● That means that your brain has at least 2 tracks: what you’re consciously aware of and everything else. And what’s amazing is that the “everything else” part takes up a lot more of your brain’s real estate and function than the conscious part. Most of what our brains are doing is unavailable to our conscious minds, even if it feels like we are in control.

When we’re not paying conscious attention to what’s going on, some part of our brain is still keeping track.
● Although it feels as though attention is like
an on-off switch, it’s more like a spectrum, with
different levels of attention, or consciousness, allocated to different aspects
of the internal and external environments at a given moment in time. We
have some control over our experience by allocating attention, but we’re
also easily distracted and unwittingly, or even unconsciously, pulled away
from what we really want to focus on.

● There’s also no guarantee that your brain is tracking all the important
things that you think it should. We do often miss the forest for the trees.
Just because you’re reading an impressive tome doesn’t mean that your
brain is doing the work necessary to store or synthesize that information
for the long term.

● This state of affairs—in which much of what our brains do is implicit, or not conscious—allows our brains to accomplish all that they do, but it comes with a downside: We’re not always aware of just how unaware we are.

● Here’s where digital media comes into the picture. We often feel as though
we’re making deliberate decisions in terms of what we’re focusing on, but
we’ve also allowed technology companies to buy our attention. Instead of
paying for many apps and media services, we let them sell our data to show
us advertisements or influence us in subtle ways, capitalizing on the fact that
our attention is easily shifted. And many tech companies have shifted their
business models accordingly—creating content that is designed to keep us
scrolling or skimming, rather than losing ourselves in deep thought.

● Sustained attention is critical for what professor and author Cal Newport
calls deep work: the ability to focus without distraction on a cognitively
demanding task. And we needed this new term for it because our modern
lives have obliterated this ability for most of us.

● But what if this isn’t in our hands anymore? What if our brains are being
shaped by the internet and other technology to shift focus rather than
sustain it? What if it’s becoming harder and harder for us to control our
attention and easier and easier for tech companies to manipulate it? Are we
selling off too much of ourselves for the fleeting pleasures of social media
and streaming entertainment? The evidence that this is the case is mounting.

Cognition and the Internet

● Your brain is a creature of habit; how you think shapes how your brain
works. So if you spend a lot of time jumping from one website or idea to
the next, the pathways that enable this type of thinking will strengthen—
at the expense of the pathways that you’re no longer using as much, such as
those required for deep reading comprehension or sustained attention.

● What are we actually doing when we flit from one web link to the next? To
a large extent, we’re not directing our focus; we’re letting it be led by what
we’re exposed to. So it becomes harder and harder to keep it on track—
harder to return focus to the task at hand.

● And this exacerbates a problem that we face every minute of every day:
How can we navigate a world of such unfathomable complexity? We think
of ourselves as being fairly good observers of reality: We can see, hear,
smell, touch, taste, and consider our environment. But each of these senses,
and the cognitive skills we need to interpret them, represents just a sliver of
the available information.

● Our brains work hard to fill in the gaps, making sense of the impoverished
input and alerting our consciousness to what’s important. But we still need
to decide what to pay attention to, because even if our senses can register
the information, our brains can’t handle it all. Just think about what
happens when 2 people are trying to talk to you at the same time!


● The bottom line is that we are limited in terms of how many things we can
consciously attend to at the same time. And it takes work to keep focus
when our brain is busy tracking the environment for salient information.
When that environment includes the rapid-fire, eye candy–laden world of
digital media, paying attention for extended periods of time becomes next
to impossible.

Cognitive Multitasking

● The term multitasking, meaning performing multiple actions at the same time, was co-opted by computer programmers who built software that can work on information in parallel. But when this meaning of the word was later applied to human cognition, people began to make the mistake of thinking that we’re like computers.
● We can’t consciously do 2 cognitively demanding tasks at the same time.
Instead, we quickly switch our attention between tasks, sometimes so
quickly we barely notice it. But this switching comes at a cost. It takes time
and energy—cognitive resources—to switch tasks.

● There are 2 costs, actually. One is the time it takes to switch, which can be
short or long, depending on how similar and how demanding the tasks are.
The other is the challenge of inhibiting your brain from continuing work
on the previous task, keeping intrusive thoughts at bay.
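
To make these 2 costs concrete, here is a back-of-the-envelope sketch. The per-switch time penalty and the intrusive-thought drag are hypothetical numbers chosen only for illustration; the attention literature reports costs that vary widely with how similar and how demanding the tasks are.

```python
# Back-of-the-envelope model of the 2 switch costs described above.
# Both penalty values are hypothetical, chosen only for illustration.

SWITCH_COST_MINUTES = 0.5  # time lost re-orienting after each switch (assumed)
INTRUSION_DRAG = 0.15      # fraction of work lost to intrusive thoughts (assumed)

def productive_minutes(total_minutes, num_switches):
    """Estimate deep-work time left after paying both costs."""
    if num_switches == 0:
        return total_minutes
    remaining = total_minutes - num_switches * SWITCH_COST_MINUTES
    # Residue from the previous task bleeds into the time that is left.
    return remaining * (1 - INTRUSION_DRAG)

print(productive_minutes(60, num_switches=0))   # 60.0 minutes of deep work
print(productive_minutes(60, num_switches=30))  # 38.25 minutes left
```

Even with these mild assumptions, half a minute of re-orienting per switch plus a modest intrusion drag erases more than a third of an hour’s deep work.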

● It takes mental effort to prevent your attention from being pulled in another direction—to ignore the signals that your unconscious mental processes, working hard to fill in the gaps in your meager sensory information, are allowing to bubble up into consciousness, just in case there’s something important there. This feat becomes even harder if you’ve just spent some time working on a problem that you haven’t quite solved yet.

● Think about what happens when you check your email before delving
into a more cognitively demanding task, such as reading a technically
difficult article. If one of the emails you respond to requires some conflict
resolution or some real decision-making, it’s likely that you’ll find yourself
ruminating about the problem during a lull in your reading, instead of
synthesizing what you’re reading.

● These kinds of intrusive thoughts prevent you from devoting all of your
cognitive resources to the new task at hand. Now multiply this situation by
the hundreds of times you task-switch while using digital media and these
intrusions add up to a significant amount of lost time. Put another way, the
amount of time you spend focusing deeply on the difficult task becomes
astonishingly small, preventing you from making much headway.

● Unfortunately, task switching is exactly what digital media encourages you to do. And we are much more limited in this ability than we might think.

● Most of us don’t realize how much our choices about how long to spend on different tasks affect our ability to get things done. There’s evidence that we tend to linger on the more-immediately-rewarding task, such as checking email or online shopping, and to switch tasks only once we’ve accomplished a subgoal. The energy it takes to keep pulling our attention to the harder and often-less-immediately-rewarding task can be exhausting. That’s why when we multitask, deep work suffers.

● In addition to paying switch costs, multitasking also encourages us to pay partial attention to each of the tasks, monitoring progress on one task while focusing on another. That means that you might, as author Steven Berlin Johnson has said, cast a wide net but fail to actually haul in the fish in any meaningful way.

● And this is exactly what seems to be happening as we spend more and more
time on the internet or with digital media: We are getting more efficient
at scanning and skimming information but less skilled at reflecting on it,
thinking critically, and developing original lines of thought.

● One might argue, though, that this adaptation is inevitable and desirable, given
where society is headed: toward more and more screen time and information
overload. And you could say that people who multitask more often get better at
it, reducing switch costs and becoming more efficient thinkers.

● But evidence is mounting that that’s not necessarily the case. In a 2009
study from Stanford University, Anthony Wagner and his colleagues found
that heavy media multitaskers were actually more distractible—they had
more trouble ignoring irrelevant environmental distractors and intrusive
thoughts—compared to light media multitaskers. And they actually
performed less well on a task-switching test. It’s not entirely fair, though, to
take this one study and extrapolate the results across all of humanity.

● Our ability to focus attention changes across our life spans, too. Contrary
to popular belief, we are actually more distractible as we age. Our brains
slow down; the speed with which we can process thoughts diminishes.
The older we get, the longer we take to work through a problem. We make
more errors and fall prey to more distractions.

● Part of this age effect might be attenuated by the fact that the older we are, the less time we presumably spend on digital media. This means that this age effect might very well become more pronounced with every generation, as younger people today are more likely to report multitasking than previous generations.* And it’s not just kids in the US; it’s a worldwide problem. But is it a problem? Or is it just a shift in how young people spend their time?

● There’s plenty of evidence that media multitasking while studying or working has a negative effect on learning: Learning while even partially attending to multiple media streams takes longer and leads to more mistakes and shallower comprehension.

● Media multitasking isn’t going away anytime soon. And it does seem to be
affecting how deeply we process information. We tend to allocate partial
attention to most tasks, instead of giving them our full attention for hours
at a time.

● When we eat sugary foods, we get short-term pleasure, but many of us eventually learn that such foods are ultimately just not that satisfying and not worth the price we pay in our energy levels and waist spans. The same is starting to ring true for media multitasking: Scrolling and streaming might be tempting, but many of us are coming to realize that we’re not satisfied, and we seek out more meaningful ways to engage our brains.

Questions to Consider

1 What is attention?

2 What is the difference between a cognitive habit and an ability or inability?

3 What are the costs of multitasking?

* Today’s young people spend on average a staggering 7.5 hours engaged with digital media every day. That’s more time than they spend doing anything else, with the possible exception of sleeping. About a third of that time is spent dividing attention between 2 or more forms of media—for example, texting while watching videos.

Lesson 3: Does the Internet Make Us Shallow Thinkers?
Has your brain already begun showing the effects of long-term digital media use on your ability to read? That’s a tough question to answer, given all the unknowns. But we can use reading as an example of how we’re often not very good at judging our own mental strengths and weaknesses and how we develop complex skills that become automatic.

How We Read

● Reading is not a natural thing for humans. Unlike spoken language, there isn’t a part of our brain that evolved largely to support reading. It’s not an innate skill that children acquire passively, like spoken language. Instead, we shape the connections in children’s brains as we teach them how to read. Children without the right education can remain illiterate for a lifetime, even when their spoken language abilities are fully developed. Those with reading difficulties can find themselves disadvantaged for life.

Not all reading is created equal, and how we develop the skill
can have lifelong implications that spill over to many different
aspects of our mental lives.

● Humans have only been reading for about 6000 years, a time period that
represents barely an eyeblink in the history of the world and the path of our
evolution. And in that short time, we have repurposed parts of our brains
to enable reading, which in turn supports other fundamentally human
cognitive abilities, such as critical thinking, imagination, and even empathy.

● We do have specific reading circuits in the brain, but they are developed
through learning and not written into our genetic code the way that other
aspects of language are. Because reading circuits are created as we learn to read,
how and what we learn to read will shape the way they are ultimately formed.


● We tend to think that everyone reads just like we do, but that’s not the
case. The more Proust we read, the easier it becomes. That’s because the
reading circuits are being optimized for long sentences and flowery prose.
Ultimately, our reading circuitry can encompass vast swaths of the brain,
involving cells in all the layers of our cortex, across both hemispheres
and in each of the 4 lobes—but only if we spend the time necessary to
adequately build and train these networks. Once they become entrenched,
it’s hard to turn them off; we automatically read text and it feels effortless.
However, if we only ever read comic books or short internet articles or
children’s books, then we won’t develop the networks necessary for long-
form reading.

Deep Reading and Empathy

● By losing ourselves in a book, we get a sample of the inner mental world of the author. Have you had the experience of beginning to think in sentences that match the style and cadence of an author whose book has captivated your attention? Deep reading allows you to expand your thinking abilities, and together with the author, you can build a rich imaginary universe.

● This process of imagining is another powerful feature of reading. A great novelist succeeds in giving you all the information you need to create a rich representation of a series of events that a memorable character has to endure. But your ability to read goes hand in hand with how detailed an image you can conjure up in response to the written word.

● In addition to vivid imagery, deep reading teaches empathy. It does this in several ways: by bringing us into the mental world of the central character, particularly if the book is written in the first person; by allowing us to experience events from another person’s point of view; and by laying out the perspectives of others and their reactions to various events in the story. By empathizing with a character, putting ourselves in their shoes, we receive the great gift of being able to become someone else.

● Deep reading can give everyone that gift, but it’s not the only way. Immersive digital experiences, such as virtual reality, can arguably be an even more powerful conduit to recreating yourself. But in virtual reality, so much of the imagery is dictated by the programmers. During deep reading, you’re as much a creator of that world as the author is. Film and TV, being passive media, are not as effective at teaching us empathy as media in which we are active participants.

● Will empathy be a casualty of the decline in deep reading?

● Digital media has the power to connect us, but the current trend is to
reinforce tribalism rather than to expand the tribe. In 2010, Sara Konrath
and her colleagues published a meta-analysis of studies of empathy
in college students over 4 decades, from 1979 to 2009. They found a
consistent decline in empathic concern and perspective taking—40%!—
with the steepest drop happening since 2000.

● We don’t know what is causing the decline, though mobile phones are likely at least partly to blame. But what if another factor is shrinking the amount of time that kids spend lost in novels, or worse, eroding their ability to read deeply?


● There is some evidence that the neural networks involved in deep reading
overlap with those that support empathy. Specifically, after about age 3
or 4, most humans have developed a theory of mind, with which they
perceive, understand, and consider the thoughts, feelings, and beliefs of
others—a necessary step toward feeling empathy. A person’s theory of
mind continues to develop throughout his or her life span, with some
people becoming quite exceptional at reading another person’s thoughts by
observing his or her behavior.

● The neural networks thought to underlie this ability include large regions
connected via the insula and cingulate cortex, where information from
different senses and association areas in the brain converge. There are large
cells here called von Economo neurons.* The size of these cells allows them
to transfer messages across large brain areas very quickly.

● This network is activated by deep reading, especially of novels. When we


read metaphors about texture, our somatosensory cortex—the part of the
brain that gets sensory information from touch receptors in the body—is
activated. When we read about movement, our motor regions light up. Our
brains are simulating what the characters are experiencing. What and how
we read will influence which brain regions are activated in the process, and
there are also individual differences.

Psychological researchers have found that people who read fiction score higher on tests of empathy, even when controlling for personality differences.

● Reading isn’t just about recognizing letters and words and stringing them
into meaningful ideas; it’s about predicting and anticipating what will
come next, something the human brain has been shaped by evolution to
become fairly proficient at.

* These neurons are seen in other animals who are highly social and who have
big brains, such as dolphins and elephants.

● Reading takes advantage of the brain’s ability to work in parallel: We can
consider what’s on the page—which constitutes bottom-up influences
on our thinking—while at the same time using our prior knowledge to
interpret what we’re seeing—so-called top-down influences.

● It’s the top-down processes that give deep reading its magic and that allow
us to find multiple meanings in the same words. But sifting through such
various meanings takes time, and much of digital media is designed to save
us time, encouraging us to speed through the reading, rather than linger
and let it blossom.

Digital Reading

● While sales of e-books are steadily growing, the technology needed to digitize books has been around for a long time, and there are still many brick-and-mortar bookstores selling physical books. CDs have largely been replaced by MP3s and streaming music. So why do we still cling to physical books? The answer lies at least in part in the way that a physical book encourages quiet contemplation, while even the most book-like digital tools do not.

● Some of the problems with early versions of e-readers have been almost
solved, with technology now available that reduces the eyestrain caused by
most screens. And they can be a boon for people with disabilities because
font sizes and contrast are easily adjusted.

● But there is another, perhaps more nefarious change that is building as the
e-book grows its market share: E-readers socialize reading, with their note-
taking, highlighting, and hyperlink functions. With these distractions, our
eyes can become restless; constant interruptions leave us craving more and
make it harder to lose ourselves in the content. And while we can ignore
the functions on our Kindles if we choose to, their mere presence is still
shifting our relationship with other readers, authors, and the text itself.

● And this socialization of reading, nudged along by the digitization of books, will also change how writers write. It’s no longer the norm to hole up in a hut and bang out the great American novel. Writers often have to build a social media following to be considered by publishers, and they must interact in real time with their readers. There are even authors who write entire books on social media, such as Behrouz Boochani, who wrote his memoir* via the social messaging tool WhatsApp.

* No Friend but the Mountains won the Victorian Prize for Literature in 2019, Australia’s highest-paying literary award.

There is a culture of cell phone novels in Japan that are designed to be consumed in bite-size pieces rather than in long, lazy hours curled up on a chaise longue.

● What will be the consequence of digitizing novels and other long-form content? How might our brains be changed? While we don’t yet know what the long-term effects will be or how our brains will ultimately adapt, we do know that change will come.

● In 2009, researchers at UCLA conducted a study measuring brain activity while people surfed the web. They found that we don’t read on the web like we do a book. We use larger swaths of the brain and, in particular, our left dorsolateral prefrontal cortex, which is necessary for decision-making and problem-solving. When we read a book, we rely more on language and memory circuits, which is why material read in a book is easier to remember later. And the internet encourages us to use our brains this way: We go from trying to read the internet as we would a book to treating it more as a series of decisions to be made, such as where to look, what to click on, and how long to spend on each page.

● Without the quiet contemplation that deep reading provides us—without the opportunity to be by ourselves and thereby get to know and craft our own inner monologue—will we ultimately have a weaker sense of identity? Are we sacrificing our inner selves for continuous shallow distraction?

● Despite the convenience of digital media, we often find ourselves unsatisfied by shallow reading. In an early study of internet use and socialization, Robert Kraut and his colleagues at Carnegie Mellon University’s Human-Computer Interaction Institute assessed the effect of bringing modems into dozens of households in 1998. They found that the more the families used the internet, the less they communicated with each other and the lonelier and more depressed they felt. Kraut dubbed this effect the internet paradox, since the internet was invented to connect us, yet it seems more effective at isolating us.

● Then, in 2002, Kraut published a follow-up study with the same families,
plus a few more, who now had used the internet at home for about 3
years. It turns out that the people recognized the ill effects of internet use
and changed their habits. Now internet use was associated with better
communication and improved social lives. We don’t just passively accept
tools that make us miserable; we adapt our use.

● So now we find ourselves in a time when long-form journalism is once again being revered, though it has also fundamentally changed. TV shows are being adapted to ensure they are binge-worthy. We are recognizing the unique joy of total immersion in a story or set of ideas.

The physical book is not quite dead yet. In fact, small bookstores
are on the rise.

Questions to Consider

1 How is reading online different from reading a physical book?

2 What is the difference between deep and shallow work?

3 How might reading enable creativity and imagination?

Lesson 4: Outsourcing Our Memory
Human memory is not, nor has it ever been, about the past. It’s not stable, but rather biological. And like any part of a biological organism, memory is differentiated from nonbiological things by its ability to change—or, more to the point, its inability to resist change.

Types of Memory

● Memory is not just one function. There are many ways that our
experiences can change us, which arguably is the broadest definition of
memory there is: how our past can affect our current behavior.

● At the molecular level, we can find traces of learning in the ways that our
brain cells send and receive signals. If you poke a sea slug, it will retract
its siphon, but if you keep poking it, eventually it will ignore you. And its
nervous system will have changed; it will have habituated to the stimulation.

● Your brain does this, too. When you move into a new home, you can often
spend a few restless nights woken up by the various new sounds your home
emits. But eventually, you tune them out. That’s a very basic form
of memory.

● Then there are more complex habits that we form, such as brushing
our teeth in the same pattern each day. These, too, leave a mark on our
nervous systems, as we automate sequences of actions through repetition.
In some ways, this kind of habit learning is more stable than our conscious memories, because it is built into our automatic, inflexible, and slow-learning brain system.

● But the memories that we wish would stay intact are the kind that we can
summon consciously from the deep recesses of our minds, such as the way
your grandmother smiled when she saw you. It’s these memories, though,
that are also the most dynamic.

● Cognitive neuroscientists distinguish memory for facts and events—the kind of remembering that involves conscious recollection—from nonconscious forms of memory like skill and habit learning, which are called nondeclarative, or procedural, memory. Fact and event memory is called declarative because of this relationship with consciousness.

● Declarative memory is both qualitatively and neuroanatomically different from nondeclarative memory. We know this from a vast literature of patient and animal studies that shows that you can damage one part of the brain, obliterating the ability to learn new facts and events, while leaving skill and habit learning intact—or vice versa.

Episodic remembering is an extraordinary human trait. It allows us to turn back time and relive important moments by simply thinking.

● When we evaluate our memory, it’s the ability to remember details of past
events or facts we’ve learned that we generally use as a measuring stick.
But this type of declarative remembering is only one part of the multiple
memory systems and processes of which our brains are capable. And
remembering is the last step in a long line of
memory-related activities.

● Memory failures can occur for a number of reasons, including not paying attention during the learning event, called encoding failure; fading over time, which is a storage issue; or failing to retrieve the desired information during remembering. Nevertheless, it’s declarative memory failure that we tend to point to when we are disappointed by our inability to remember a particular thing or event.

There’s some evidence that in patients with Alzheimer’s disease, the actual memories remain, but access to them is lost.

● And given the ease with which we can often find the answer with a quick Google search, we can’t help but compare our shortcomings with the superior abilities of even the simplest computer. Of course, our ability to retrieve information pales in comparison with what a search algorithm can do. And since we’re smart and adaptable creatures, we’ve been outsourcing this kind of remembering more and more.

The Google Effect

● Already in 2011, we were seeing a change in how we search for information that we used to store only in our minds. Over the course of 4 studies, Betsy Sparrow and her colleagues found that when asked tough trivia questions, we tend to turn to the internet for answers rather than some other source, including the deep recesses of our minds.

● What’s more, if we think that we’ll always have access to the information, we tend to remember where to find it rather than what it is. If we’re told that the information we’re learning will be stored in a file on a computer, we’re worse at remembering that information than if we think it will be unavailable to us later. And we’re better at remembering which folder we stored it in even if we don’t recall the information itself.

● While this seems logical, this study yielded all kinds of alarming headlines,
suggesting that computer use and the internet were eroding our ability
to remember things. But instead of signaling the end of human memory,
this preliminary work underlined how our brains are adapting, arguably
appropriately, to the ubiquitous presence of the internet in our lives.

● Is this so-called Google effect, or digital amnesia, as dire a problem as it might seem?

● Digital security companies like Kaspersky will point out that even in 2015,
people in the US could not remember the telephone numbers of their
loved ones or other personal information that used to be easily accessible
to them.

A study published in Psychological Science in 2014 showed that people are less likely to remember museum artifacts if they take a photo of them than if they simply look at them.

This effect might be explained by the attentional disengagement hypothesis: When we’re busy taking a photo, we pull ourselves out of the experience and thus encode it less elaboratively or deeply.

● Does this mean that the very ability to remember is fading or that we are
replacing the content of memory, moving from remembering information
to remembering how to find it? It’s the latter that the Google effect seems
to suggest: We are adapting to the new technologically rich environment,
limiting the expenditure of cognitive resources on things that we can
access anywhere, such as phone numbers, and saving these resources for
search strategies.

● Encoding is the first step in creating a memory,* but what items get included
depends on what we were paying attention to. In addition, emotion can
modulate memory by increasing the likelihood that we store experiences
that move us. And the internet encourages shallow thinking, or attending to
superficial features rather than taking a deep dive into the material.

● Superficial, or shallow, encoding has long been known to be a suboptimal entry point for to-be-remembered information. More precisely, the more deeply or elaborately we encode information—making associations with what we already know, using imagery or finding personal relevance, or feeling emotions tied to the information—the more likely we are to be able to remember it later. This is called the levels of processing effect, described in the 1970s by Fergus Craik and Endel Tulving.

● The internet discourages us from thinking deeply about what we’re reading
there, and as a result, our memory for what we do on the internet is
handicapped from the beginning.

Is the Internet Destroying Our Memory?

● Given the adaptive nature of the brain and the pace at which technology use is shifting, things are not as simple as they might seem. In 2018, a large-scale effort to reproduce previously published psychological findings failed to replicate the Google effect. Some researchers suggest that we no longer see a decrease in memory retention for things we think will be erased because we’ve become aware of the fact that erasing something from the internet is remarkably difficult. So have our brains already adapted?

* For facts and events, encoding is largely driven by a part of the brain called
the hippocampus, which gets information from all of our senses, creates a mini
index of co-occurring details, and makes associations between items that make
up the memory. Then, when it’s time to remember, the hippocampus drives
cortical reinstatement, essentially reactivating the brain networks that were
active when we first laid down the memory.


● Our memory for the kinds of things that we store on the internet has never
been great. If we just stick to memory for facts and events—declarative
memory—which lends itself to conscious remembering, or explicit
processing, as opposed to implicit processing, then we can diagnose
memory failures in a number of ways, including encoding, storage, and
retrieval failures.

● But the truth is that the majority of the details that are stored initially in declarative memory are lost within 24 hours, usually after the first longish sleep period, when your brain engages in active and adaptive forgetting. This forgetting process is a good thing: It allows us to prioritize what our brains think is important, such as what we spent time ruminating over or what elicited a strong emotional reaction.

People who have highly superior autobiographical memory are rare, but if you give them a date, they can immediately tell you what day of the week it was and something about what they did that day.
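
The overnight loss described above is often pictured as an exponential forgetting curve, in the spirit of Hermann Ebbinghaus’s classic self-experiments. The sketch below assumes an arbitrary decay rate picked so that most details are gone within 24 hours; it illustrates the shape of forgetting, not a fit to real retention data.

```python
# Toy exponential forgetting curve, in the spirit of Ebbinghaus.
# The decay rate is an arbitrary assumption, not a fitted parameter.
import math

DECAY_PER_HOUR = 0.08  # assumed fraction of detail lost per hour

def retention(hours_elapsed):
    """Fraction of originally encoded details still retrievable."""
    return math.exp(-DECAY_PER_HOUR * hours_elapsed)

for hours in (0, 1, 24, 72):
    print(f"after {hours:>2} hours: {retention(hours):.0%} retained")
# With this assumed rate, only about 15% of details survive the first
# 24 hours, echoing the claim that most fade after the first long sleep.
```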

● Digital media, which doesn’t forget, can force us to remember things that might have been better off forgotten. Many social media apps send you reminders of all kinds of useless anniversaries and what you were doing a certain number of years ago. Sometimes these reminders are unwelcome and trigger memories that you had worked hard to forget.

● And the internet has its own memory flaws. Information found there is
rewritten all the time. It can have multiple authors, and anyone can be an
author. There are also many distracting elements, in the form of hyperlinks
or ads.

● But in terms of a pure storage device, the internet easily has us beat. So
aren’t we better off outsourcing our memory storage to this nearly infinite
repository? There’s some evidence that when we save some information on
our computer, we’re more likely to remember what we’re about to work on
or learn.

● What we don’t always realize, though, is how our thought processes can
be shaped by habits.* The Google habit can have unintended consequences
on our repository of information. After all, testing yourself by practicing retrieval is a powerful way of improving your memory, by
both strengthening the ability to access that information and highlighting
what you don’t know.

● When it comes to what we don’t know, what we are often unaware of is how
much of our autobiography we rewrite, favoring some details over others,
which then become lost for good. For example, we tend to think that we
remember many details of salient life events, sometimes called flashbulb
memories,** such as when you first heard about 9/11. These events are often
negative, because emotions like fear are powerful memory enhancers.

● We think we remember everything about the experience—but it turns out that we really don't. When researchers probe the accuracy of flashbulb
memories, they find that many details don’t add up and that they are
remembered differently from year to year, even while the confidence with
which the person remembers the details stays constant.

● Remembering involves reconstructing the details of an autobiographical event, and anytime you rebuild something, you're influenced by your
current state: how you feel, to whom you’re telling the story, where you are,
and so on. This reconstruction is not benign; it can have indelible effects
on the memory trace itself.

* In a 2017 study, Benjamin Storm and his students at the University of California, Santa Cruz, asked a group of participants to answer 8 hard questions
and let them google the answers. Then, they had to answer 8 relatively easy
questions and had the option of using the internet or their own memories.

In that group, they googled the easy answers 83% of the time. But in 2 other
groups, who either had to answer the hard questions from memory or weren’t
asked the hard questions at all, the googling rate was only about 63% to 65%.
This tendency to google was maintained even when checking the internet
required subjects to get up and cross the room to use a computer!

** Flashbulb memories are so named because when you experience them, it feels like a flash went off and captured every detail of that single moment.


● By the same token, when we do much of our remembering or the documenting of life events via digital media, we are not objective curators.
Instead, we choose to document the positive aspects of our life events. As
a result, our memory for autobiographical events is filtered through this
positive-looking lens. Social media encourages us to remember selectively,
enabling us to build an identity that is further removed from reality than
we think.

A Focus on the Future

● Modern neuroscience is backing up the claim that memory is not about the past by uncovering the pattern-seeking, forward-looking fundamental
properties of the brain. Evolutionarily speaking, our brains don’t care
about the past, because all that matters is that we stayed alive. What’s
important is the future.

● That's why our memory is optimized to discover the regularities in the environment: the patterns that repeat or the gist of a series of details.
Understanding these patterns allows us to predict the future—arguably
our most useful cognitive trait.

● That’s true even for our most cherished memories. The fact that we usually
don’t get the details right doesn’t really matter in the grand scheme of
things. Whether the memory is positive or negative, what we remember
is how we felt, who treated us well, or what we should fear. This is useful
information when we’re trying to imagine what might happen when we
find ourselves again in similar circumstances.

Patients with profound amnesia can't conceive of future events. They can't predict the future because they are unable to rebuild the past.

● So conscious memory for events is what enables us to imagine the future. And it's a skill that, so far, technology and artificial intelligence
have not eroded. In fact, digital media gives us tools that help us
explore and share our imagination. Depending on what we input, we
can construct models that give us clues to different futures, and these
in turn let us predict possible consequences with even more accuracy
than simply thinking about them.

● If we outsource the content of our memory to the internet or the cloud, we can free up our minds to work with that information to build
increasingly detailed and interesting futures—but only if we keep
our imagination muscles flexible and strong. If we get sucked into
passively consuming information, without actively manipulating it, our
imagination muscles might atrophy.

Questions to Consider

1 What do we mean by human memory?

2 How is what we store in memory shifting in the information age?

3 What might the future of human memory look like?

Lesson 5 Human versus Digital Content Curators

What are the implications if we outsource all our
curation, of everything from videos to groceries
to friends, to the internet? How do recommendations
made by humans compare with those suggested by a
computer algorithm? And what are the consequences of
digital curation, if any, for how we think?

Netflix’s Recommendation System

● Throughout the 1990s and early 2000s, Blockbuster Video was slowly eating up mom-and-pop video stores, making deals with studios to get exclusive rights for new releases and managing huge inventories. Blockbuster was also notorious for its late fees, which accrued for each day a rented video was kept past its due date.

As Maria Popova, a curator of digital information, has said: “Curation is a form of pattern recognition—pieces of information or insight which over time amount to an implicit point of view.”

● In 1997, Reed Hastings, a software engineer and founder of the startup Pure Software, pitched the idea of a subscription model for video rentals—one that would eliminate late fees—to his former employee Marc Randolph. And thus Netflix was born.

● In 2000, Netflix offered to sell itself to Blockbuster for $50 million, but Blockbuster refused, noting that Netflix was losing money at the time and was, in Blockbuster's view, a niche business. But by 2018, Netflix
had become the world’s most valuable media and entertainment company,
worth more than Disney and Comcast, and Blockbuster had become a
cautionary tale, having declared bankruptcy in 2010.

● Much of Netflix's success came from enhancing what small video stores used to do so well: recommendations. The Netflix model was based on the distribution of red envelopes containing physical DVDs and the mantra
“No late fees, ever!” But along with the DVDs, customers returned
something far more valuable: a rating of how much they enjoyed the film
on a scale of 1 to 5. Building this database of ratings is arguably what
saved the company from obsolescence as streaming video replaced physical
storage options.

● By 2006, Netflix had realized that streaming video was a real threat
and that their recommendation system might be valuable enough to
retain customers as DVD rentals declined. So they announced a public
competition* to improve their algorithm. Their proprietary system, called
Cinematch, used prior ratings to predict which films their customers
might enjoy. It was far from perfect but still pretty good. The
challenge was to beat Cinematch by at least 10%.

● The competition officially began in October of 2006. Just 6 days in, a team of 2 engineers had already beaten Cinematch. But it would take
almost 3 years before the coveted million-dollar prize for beating the
proprietary algorithm by at least 10% would be claimed.

● It’s worth understanding both the power and the problem facing algorithm
developers in this challenge, because a similar set of circumstances
is found in most digital curation domains, from Amazon’s shopping
recommendations to Google Scholar.

● Ultimately, the work of suggestion engines can be boiled down to conditional probability calculations: Given that a user liked X, what's the probability that the user will enjoy Y? If you liked the show Friends, does
that make you more or less likely to enjoy The Big Bang Theory?
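To make that framing concrete, here is a minimal sketch of how such a conditional probability can be estimated from viewing histories. The titles, data, and function name are invented for illustration; real systems work with vastly larger and sparser data.

```python
# Estimate P(user likes Y | user liked X) from co-occurrences in
# viewing histories. All data here are invented for illustration.

viewers = [
    {"Friends", "The Big Bang Theory"},
    {"Friends", "The Big Bang Theory", "Seinfeld"},
    {"Friends", "Seinfeld"},
    {"The Big Bang Theory"},
]

def p_likes_given(liked: str, candidate: str, histories) -> float:
    """Fraction of viewers who liked `liked` that also liked `candidate`."""
    liked_it = [h for h in histories if liked in h]
    if not liked_it:
        return 0.0
    both = sum(1 for h in liked_it if candidate in h)
    return both / len(liked_it)

print(p_likes_given("Friends", "The Big Bang Theory", viewers))  # 2/3 ≈ 0.67
```

The hard part in practice is that no two viewing histories match exactly, so real engines must generalize across users rather than count simple co-occurrences, which is what the latent-feature approach described below accomplishes.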

● Classic video stores like Vidiots in Santa Monica, California, survived long past the demise of Blockbuster because their employees made these recommendations really well. It took until 2017 for Vidiots to shutter their flagship store, which was 7 years after Blockbuster went bankrupt.*

* Netflix provided access to a training data set that included more than 100 million ratings from almost 500,000 users about more than 17,000 movies.

● Vidiots employees loved movies, and they passionately kept track of their
inventory. Their curatorial skills are what kept bringing customers back,
even when it was much more convenient to order a film on Netflix. These
employees could match tone and thematic content, along with other
factors, across different films—a difficult task for an algorithm.

● And while Netflix has gathered much more data, it’s still hard for artificial
intelligence to pull thematic content from films and TV shows in the same
way that a human can. But no human can watch and remember all 50,000 films that Vidiots houses or the thousands of titles on Netflix.

● In both cases, recommendations aren't as simple as tracking down whether people who watched film X and liked it also watched and liked film Y.
Instead, Netflix is trying to predict your affinity for a show on the basis of
other people’s viewing histories, each of which is unique and none of which
exactly matches yours.

● We don't know how Netflix's proprietary in-house algorithm solved these problems, but the winner of the Netflix Prize, BellKor's Pragmatic Chaos,
is freely available on the internet. The basic idea is that Pragmatic Chaos
calculates a predicted rating by summing a few numbers: the overall
average rating across all shows, which at the time was 3.7 out of 5; plus the
film’s offset (an indicator of the film’s popularity); plus the user’s offset (as
some are more critical than others); and a magic number.

predicted rating = overall average + film offset + user offset + magic number

● This magic number quantifies the user-film interaction, a latent feature that can explain how ratings of the same genre, say, are related. There
might be a score for British mysteries or space-based science fiction. You
might be a fan of PBS's Masterpiece, giving you an additional 2.4 points for Downton Abbey, or you might hate adult cartoons, taking away 1.9 points from Rick and Morty.

* Vidiots has since reopened as a nonprofit, with the goal of bringing video store culture deeper into the 21st century by serving as expert archivists.
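As a rough illustration of this prediction rule, here is a toy version in code. All numbers, names, and the two-dimensional factor vectors are invented; the winning entry actually blended hundreds of models, but the baseline-plus-latent-factors shape sketched here is the general idea, with the "magic number" computed as the dot product of a user vector and a film vector.

```python
# Toy version of: predicted rating = overall average + film offset
#                                    + user offset + magic number
# All values below are invented for illustration.

overall_average = 3.7    # mean rating across all titles at the time

film_offset = {"Downton Abbey": 0.4, "Rick and Morty": 0.2}  # popularity
user_offset = {"alice": -0.3}                                # a harsh critic

# Latent features: each user and film gets a small vector; their dot
# product is the "magic number" capturing the user-film interaction.
user_factors = {"alice": [1.2, -0.5]}  # loves period drama, dislikes adult cartoons
film_factors = {
    "Downton Abbey": [2.0, 0.1],
    "Rick and Morty": [-0.2, 3.0],
}

def predict(user: str, film: str) -> float:
    magic = sum(u * f for u, f in zip(user_factors[user], film_factors[film]))
    return overall_average + film_offset[film] + user_offset[user] + magic

print(round(predict("alice", "Downton Abbey"), 2))   # high: 6.15 (clipped to 5 in practice)
print(round(predict("alice", "Rick and Morty"), 2))  # low: 1.86
```

In a real system, the offsets and factor vectors are not set by hand but learned by minimizing prediction error over millions of known ratings.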

● Calculating these latent features is what Pragmatic Chaos did particularly well. And it provided Netflix with a powerful tool to use when marketing directly to their subscribers.

In 2018, Netflix tied with HBO for the largest number of Emmy wins—a remarkable feat, given the stiff competition from traditional studios.

The key to Netflix's domination of the TV industry, even while they aren't on cable TV, isn't just in the recommendations.

Network television companies used to spend a fortune creating and piloting new programs (and some still do), most of which
would not succeed. With its powerful data and algorithms,
Netflix doesn’t need to waste money on pilots and can instead
put its money directly into producing high-quality content.

Recommendation Engines versus Search Engines

● It's important to consider the differences between recommendation or suggestion engines—like the algorithms upon which Netflix is built—and
pure search engines, because the proliferation of recommendation engines
is changing our behavior, often outside of our awareness.

● Think about what your goals are when you’re searching for something
online, whether it’s a product or information. Let’s say that you’re looking
for a coffee shop with Wi-Fi in a city you're visiting. You type “coffee shop” and “Wi-Fi” into your search engine and assume that it will return
all the options nearby. In this example, unless you’re in San Francisco or
Taiwan, you’ll probably get a few hits, but the list won’t be overwhelming.
So you want it to be exhaustive.

● Now imagine you’re looking for a common object, such as a coffee mug.
The options are seemingly endless, so you don’t want to scroll through
thousands of items. You want a recommendation for the top 3 to 5 mugs,
and that’s enough choice.*

● The internet makes choice overload a real problem in many domains, from
how to spend your time to whom to date. If you come into a decision with a
clear view of what your perfect choice would be, and there’s a choice that is
clearly more in line with this vision than the others, and you’re pretty sure
that you know the set of available options, you can avoid choice overload.

● If you know that you want a mug that can travel without spilling the
coffee, can hold heat all day, is tall and thin rather than squat and wide,
and is navy blue, it’s pretty easy to make your choice and feel good about
it. But if you don’t have those preferences, you find yourself scrolling
through thousands of options, with different price points and different
features, and you no longer know whether size trumps insulation or vice
versa. A recommendation engine is what you need here.

● The problem is that we often think we are getting the results of an exhaustive search when really we're just getting a set of artificially curated
suggestions. Google Scholar recommends papers based on its algorithmic
rules—for example, popularity—not necessarily relevance.

● Recommendations have a large influence on our behavior, particularly when it comes to our choices as consumers. Companies will pay Amazon a
lot of money to recommend their products and list them first. That’s true
in virtually any search marketplace, and there’s an entire industry of people
whose jobs involve skewing the algorithms slightly in favor of their clients.

● But the consequence is that these types of engines help the rich get richer
and don't do what most of us think they do: introduce us to novel products that are a good fit for our needs or wants but that we might not otherwise come across.

* Our brains function best when we have to choose between just a handful
of options. Otherwise, we get overwhelmed and end up less satisfied with
whatever choice we ultimately make.


YouTube’s Recommendation Engine

● In 2011, the engineers behind YouTube's recommendation engine were prioritizing clicks over viewing time and were successful in getting their
users to click on lots of videos. But the time users spent viewing them was
too short to generate much ad revenue, in part because they didn’t really
like the suggested videos.

● And it was annoying the users. The algorithm would queue up videos that were similar in some ways to the previously viewed video but not similar to what the users were looking for. So the engineers revamped the system to encourage longer viewing rather than just clicks: If viewers stayed on a video longer, the reasoning went, that meant they enjoyed it and the recommendation had been more successful.
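A small sketch makes the difference between the two objectives concrete. The candidate videos, click probabilities, and watch times below are all invented; the point is only that the same inventory ranks differently under the two metrics.

```python
# Ranking the same candidates by predicted clicks vs. by expected
# watch time. All numbers are invented for illustration.

candidates = [
    # (title, P(click), expected minutes watched if clicked)
    ("shocking thumbnail", 0.30, 0.5),
    ("solid documentary",  0.10, 25.0),
    ("related tutorial",   0.15, 12.0),
]

best_by_clicks = max(candidates, key=lambda c: c[1])
best_by_watch = max(candidates, key=lambda c: c[1] * c[2])  # expected minutes

print(best_by_clicks[0])  # "shocking thumbnail" wins under click optimization
print(best_by_watch[0])   # "solid documentary" wins under watch-time optimization
```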

● But watch-time optimization can lead to users going down potentially
dangerous rabbit holes. Human beings are naturally more likely to pay
attention to confirmatory, reinforcing, or positive feedback. This means
that the content they are more likely to engage with will confirm their
existing beliefs and will attract like-minded individuals, which sometimes
leads to a radicalization of views. It’s also easier to program an algorithm to
match similar content, because the set of options is limited, while there is
an infinite variety of conflicting or different content.

● There's also the emotional component. When content elicits strong feelings, viewers are more likely to be drawn to it. In 2014, Adam Kramer,
Jamie Guillory, and Jeffrey Hancock published a now-infamous study in
the Proceedings of the National Academy of Sciences, reporting data from
almost 700,000 Facebook users. It was controversial because the users
had not given explicit consent to participate in the study. But the results
showed that emotional states were successfully transferred via social media,
even without direct interaction.

● Decades of psychological research have revealed that emotions can be contagious. What was novel in this study was the finding that when the
experimenters manipulated which posts their (uninformed) participants
were exposed to, they found that they could influence the valence—
positive or negative—of their posts.

● For example, when positive posts in a user’s News Feed were reduced, the
user was more likely to include negative emotional words in subsequent
posts. When negative expressions were reduced, the user was more likely
to use positive statements. These results showed that emotions can spread
across massive online communities, with even small nudges by a digital
curation algorithm.

● YouTube was acquired by Google, and in 2015, Google Brain researchers created an artificial intelligence called Reinforce, using reinforcement
learning to engineer a new recommendation system, a type of neural
network model. The goal of this iteration was not just to categorize users into
niches and recommend more of what they’ve already watched, because users
were getting bored of the same old thing. Instead, it was designed to increase
total viewing time by recommending content that had the right combination of novelty, emotional valence, and overlap with previously viewed content, making it hard for users to resist watching the “up next” video.
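The details of Reinforce are beyond the scope of this guidebook, but the underlying tension it manages, between exploiting what a user already likes and exploring novel content, can be sketched with a classic epsilon-greedy bandit. This is a deliberate simplification, not YouTube's actual algorithm; the genres, rewards, and epsilon value are all invented.

```python
import random

# Epsilon-greedy sketch of the explore/exploit trade-off: mostly
# recommend the genre with the best observed watch time, but sometimes
# try something novel so suggestions don't go stale. Not YouTube's
# actual system; all values are invented.

genres = ["familiar niche", "adjacent genre", "novel content"]
avg_watch = {g: 0.0 for g in genres}  # running mean watch time (minutes)
plays = {g: 0 for g in genres}
EPSILON = 0.2                          # fraction of exploratory picks

def recommend() -> str:
    if random.random() < EPSILON or all(n == 0 for n in plays.values()):
        return random.choice(genres)                # explore
    return max(genres, key=lambda g: avg_watch[g])  # exploit

def record(genre: str, minutes: float) -> None:
    plays[genre] += 1
    # incremental update of the mean watch time for this genre
    avg_watch[genre] += (minutes - avg_watch[genre]) / plays[genre]

# Example: record one viewing session, then ask for the next pick.
record("familiar niche", 12.0)
print(recommend())
```

Reinforcement learning systems like Reinforce go much further, learning from long sequences of user behavior rather than single plays, but the same exploration incentive is what keeps recommendations from collapsing into a user's narrowest niche.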

● After the implementation of YouTube's watch optimization algorithm, the average time viewers spent on YouTube increased dramatically, and by
2017, YouTube hit their outrageously ambitious goal of generating 1 billion
hours of viewing a day. YouTube argues that it has become so successful by
promoting content that is of higher quality—that viewers want to watch.

● But the Facebook study demonstrates the power of the algorithm to shape our thoughts, emotions, and beliefs, and it's not necessarily that the
content is of higher quality but that it’s addictive, or harder to turn away
from, because it taps into our need for reinforcement and confirmation.

● The YouTube algorithm continues to be tweaked every few months, so it's hard to say exactly how recommendation engines will continue to
affect our cognition. But what’s not in doubt is that they have an outsize
influence on much of our decision-making, at least in terms of what we
pay attention to.

Digital curation can shape not only our minds, but our society
as well, as it can push us toward one emotional state or
another. In the face of this evidence, researchers are calling for
the inclusion of socially conscious rules that promote equality
and prosocial behaviors in algorithms.

Questions to Consider

1 What is the difference between a recommendation engine and a search engine?
2 What makes a good curator?

3 How does tweaking an algorithm affect what gets recommended?

Lesson 6 Virtual Realities and Our Sense of Self


What if you could erase the parts of your world that you don't like? What if you could live in a virtual
world where everyone was the same race or similarly
abled? Is that a good thing or a bad thing? The truth
is we don’t yet know. But virtual reality promises to tell
us, if only by trial and error. What we will learn about
ourselves includes what it is that we value most and
what it is like to be a human.

The Virtual Environment

● There is one simple starting point for the forces driving the evolution
of the nervous system: An organism needs to sense some part of its
surroundings in order to choose an action. Our brains, when distilled
down to their essence, sense things and then execute actions accordingly.
So our experience of our world—how we sense our surroundings and
translate that into perception—is a fundamental feature of our nervous
system and therefore of ourselves.

● Somewhere along the way, we evolved the ability to be subjectively aware of what we are feeling—what philosophers call qualia. We not only know
that a stop sign is red, the way a computer might assign it a label, but we
also experience its redness. At its core, consciousness can be reduced to a
series of qualia: what it feels like to feel, think, act, and so on.

● So how does all of this play out in relation to virtual reality—a new form
of experience that is becoming increasingly common as the technology
becomes better and cheaper?

● If our experience of our current situation depends on both our past and
our current state, then spending a lot of time in a virtual environment will
change how we experience things in the future. Our brains are biological
organs, with adaptability being arguably their biggest strength. Experience
engenders change not only in how our brains are activated, but even in
their structural anatomy, if the experience is powerful enough or lasts for a
long time.

● Consider the following thought experiment, devised by Australian analytic philosopher Frank Jackson.

● Mary is a color scientist who is forced to live and work from the confines
of a black-and-white room. She only has access to the outside world via
a black-and-white TV set. But she has all the information known to us
about color, from the visible light spectrum to the 3 cones in our retinas
that have differential sensitivities to different wavelengths of light. In
other words, she knows everything there is to know concerning how our
brains turn light into color. But she has never actually seen color. What
will happen when she is released from her room? Will her experience—her
qualia of color—add anything to her knowledge?

● Most people will argue that without direct experience, her knowledge is
incomplete. There is something about qualia that defines our conscious
experience.

● Now consider Fred, a person who sees an additional color—ultraviolet light, for example—that the rest of us don't. Even if we understand how
a genetic mutation gave Fred an additional cone type that responds to
ultraviolet light, do we know what it’s like to be Fred? Do we know
everything there is to know about Fred?

● An expansion of our normal life experience is what virtual reality promises. And if we can “see” ultraviolet light through virtual reality goggles, maybe
then we’ll know what it’s like to be Fred.

Identity and Physicality

● At the very core of our identity lies our own sense of our physical bodies.
Anyone with a disfiguring injury will attest to the psychological pain that
accompanies unwanted changes to our bodies. And our sense of who we
are is remarkably malleable.


● When you’re first learning to drive a car, you can be a bit clumsy, not
entirely sure where the vehicle begins and ends. With time, you learn the
intricacies of the vehicle—its quirks and limitations as well as what it’s
capable of—so much so that if someone bumps into your car, you feel as
though you’ve been hit directly. Your proprioception, the sense of where
your body is in space, extends to include the boundaries of the car.*

● If you lose a limb, your sense of that limb’s position in space sometimes
paradoxically remains. Patients with phantom limbs, as they’re called, can
experience significant distress, as the body part that’s missing still hurts.

● While the actual causes of the pain remain controversial, one relatively
successful therapeutic technique shines a light on the relationship between
our sense of self, our experiences, and our proprioception.

● After surgeries severing nerve endings and pharmaceutical treatments of phantom limb pain both proved ineffective, neuroscientist V. S.
Ramachandran devised a clever tool to retrain the brain into relaxing the
limb that’s no longer there.

● He and his students at the University of California, San Diego, built a box that allows the patient to “see” their phantom limb by using their
intact limb reflected by a mirror. Then, the patient moves his or her
intact limb and watches the reflection, imagining that the moving limb
is actually the missing one. If a missing arm is perceived as being painful
because it’s clenched in an uncomfortable way, for example, then by slowly
unclenching the intact fist, the patient can experience some relief.

● Despite its being incredibly low-cost and low-tech, this treatment is remarkably effective, though not for everyone. The better the patient is
at internalizing the reflection and having the subjective experience—the
quale—of unclenching his or her missing hand, the more successful the
therapy is.

● This is an example of how malleable our qualia are and how our sense of
self is tied to them.

* This is also true of other tools, such as a well-used tennis racket or skis.

The Science of Self-Awareness

● The anterior insular cortex (AIC), or insula, is the part of the brain that
receives input from all of our senses and is a candidate for the repository of
all subjective feelings.

● In an article, A. D. Craig lays out the evidence showing that the AIC is activated in a wide range of tasks that involve subjective
feelings, but also attention, decision-making, intention, time perception,
and our subjective awareness of all of these things. In fact, the one
common element among all the tasks that seems to trigger activity in this
region is that they engage the awareness of the subject—the sense that the
person exists.

● There are even specialized cells in your AIC called von Economo neurons,
which are long, thin cells that are found in some large animals, mainly in
species that are thought to have rich social lives and that have passed the
mirror test* of self-awareness.

● The AIC integrates information across modalities and has these special von
Economo cells that seem to play a role in our sense of self. But what this
work tells us is that our self-awareness and our identities are closely tied to
our physical bodies, our sensations, our qualia.

Mental Models

● Cognitive psychologists like Philip Johnson-Laird theorize that we create mental models of the world to help us solve problems. You have a mental
model of your body and where it is in space.

● Your mental body map is based in part on your sensory homunculus, a miniature figure of what your body would look like if it were proportional

* In this test, an animal's forehead is marked without its knowledge—say, when it's sleeping—and then the animal is put in front of a mirror. If the animal
explores its own forehead, then that’s an indicator that it can tell that the image
in the mirror is a reflection of itself. Human toddlers, gorillas, chimpanzees, and
elephants pass the mirror test, and they all have von Economo neurons.


to the relative representation of different parts of your body in your somatosensory cortex. You probably already know that a paper cut on your
finger hurts a lot more than one on your forearm. That’s because there’s
more of your nervous system devoted to processing information from your
fingers, giving you dexterity, than an equivalent section of your forearm.

● If you're a pianist, this discrepancy is even more pronounced: Musical and other types of training change the mental map of our bodies. Our mental
maps, then, are tied to our ability to experience and track our bodies
and their relationships with space; the more dexterous you are, the more
specific your mental map.

● Are out-of-body experiences proof that the mind transcends the physical
body? Probably not. When you’re coming out of sleep or anesthesia, which
is when the vast majority of out-of-body experiences happen, you might
not realize that you’re floating around your mental map of the scene rather
than actually floating around the room.

People often confuse augmented reality, mixed reality, and virtual reality.

Augmented reality is the first step in merging the digital and real worlds. It involves placing digital objects into the real
world or masking real-world objects using digital technology.
So you still interact with the real world, with the addition or
subtraction of an object or part, rendered digitally.

Mixed reality takes augmented reality one step further by merging the real and virtual environments. This is used in
simulation-based learning for pilots or surgeons, for example.
It’s very effective, and no humans are harmed in the process.

● In fact, work by Olaf Blanke, an expert in cognitive neuroprosthetics, has shown that out-of-body experiences can be induced in people who are
fully awake by electrically stimulating specific parts of the brain. So our
sense of self—of where we are in space, of our own consciousness—can be
manipulated by brain stimulation, by circumstances like waking up from
anesthesia, and even by immersive experiences.

● And this brings us back to virtual reality. What virtual reality promises is
an extension of our physical bodies into a digital space. Virtual reality is a
much more immersive experience than what we are used to from TV, video
games, and other media.

● Virtual reality puts you entirely into a virtual world, which may or may not
look or feel anything like the world you know. In fact, the more immersive
the experience, the less real it needs to be to capture our attention.

● In 2010, Blanke started collaborating with other researchers, including 2 who work with virtual reality, to test its capacity to move our sense of
self into the digital world. Research subjects entered an immersive virtual
reality, and then the researchers manipulated the subjects’ mental models

of their bodies—making them feel as though they were leaving their own
bodies in the digital realm, a bit like a virtual out-of-body experience.

● Without an out-of-body or similar experience, we very much feel tied to our bodies when it comes to our mental model of ourselves. But virtual reality
has the ability to alter that mental model, even with one single experience.

● Say that instead of leaving your body, you get the sense of possessing a
different body—that of a child or someone of a different gender or race.
The illusion can be incredibly powerful. Just like out-of-body experiences,
virtual immersion can profoundly change one’s sense of self.

● Not surprisingly, embodying a person of a different race makes people more empathetic and reduces their scores on tests of implicit bias toward
members of that race.

● Virtual embodiment holds a lot of promise in terms of changing features of ourselves that we're not proud of, such as a negative self-image, bigotry,
inflexibility, intolerance, or poor social skills. But it can also leave scars.

● Virtual reality can create experiences that feel so real that some researchers worry about inducing post-traumatic stress disorder. People who have been violent in virtual reality sometimes feel real guilt and sadness after the experience, even though they know that no real people were hurt. For this reason, leaders of virtual reality research are calling for a code of ethics to be created and adhered to.*

How we experience the world depends on our mental models.

● Our self-model isn't completely transparent to us; we can't really see it. But we can see with it, as German philosopher Thomas
Metzinger writes in his book Being No One. It’s a bit like the refrigerator-
light analogy of self-awareness: When the door is closed, we assume it’s

* Some of these recommendations include warning users of the possibility that the immersive experience can be powerful enough to leave the person
traumatized and that advertising tactics using virtual reality embodiment can
influence behavior outside of the individual’s awareness.

off, but we can’t know for sure. We assume we’re always conscious, but the
truth is that when we query our consciousness, we change it.

● Virtual reality, however, can help us see our mental models, including those of the self, by revealing how easily, and by what methods, they can be changed.

● Understanding how malleable, constructed, and unreal our mental model of ourselves is can be jarring. But it can also be revelatory. For most of human history, we've been limited to creating in the real world. But the virtual
world holds far fewer constraints. As such, it’s a place where imagination
can thrive and lead to new inventions, ideas, and observations.

Questions to Consider

1 How do we build our sense of self?

2 What is the relationship between our experiences and our identity?

3 How might augmented versus virtual reality change our experiences?

Lesson 7 Screen Time's Impact on Kids

One of the main arguments against screen time for
kids is that it turns them into passive viewers rather
than engaged, active participants in an activity. We know
that an enriched environment can foster development,
even in terms of neuroanatomy.* The question is, then,
do screens enrich or impoverish a child’s environment?
Can screens themselves be detrimental? How does a
screen fare in comparison with live interactions?

TV versus Handheld Devices

● Much of the research conducted to answer these questions surveyed the effects of television on kids' development, because we've had much more
time to assess TV’s influence, compared with the relatively new use
of handheld devices. But many of the effects seem to be similar across
different platforms; it doesn’t seem to matter much whether kids watch
shows on actual TV sets, computers, iPads, or other mobile devices. What
matters is whether that activity is taking the place of other types of play
and whether passive viewing of any kind has repercussions on other aspects
of development, such as emotional regulation.

● But there is a potentially important difference: A TV show is edited audiovisual content that unfolds over time, without regard to input or attention
from the viewer. It can be fictional, expository, or a form of advertising.
In contrast, interactive media, such as video games on an iPad, require
input from the viewer, and the flow and content are at least to some extent
controlled by the person engaging with the device.

● The problem is that whether they are watching TV shows or engaging with
their devices, many kids spend more time in front of a screen than they do

* A rat raised in a cage with other rats and some toys, such as a running wheel,
will show bushier neurons—those capable of making many more connections—
than a rat raised alone or in an impoverished environment.


interacting with their parents. And when screen time is more rewarding than face-to-face interactions, we can see why they make these choices.

● In response to the growing evidence that increasingly younger children are spending more and more time in front of a variety of screens, the
American Academy of Pediatrics (AAP) released a revised recommendation
concerning kids’ screen use in 2016. They called it the Family Media Use
Plan and encouraged parents to be mindful of how and when to introduce
and allow screen time. Instead of just outright banning media use, parents
are told to serve as mentors for their kids, teaching them how to use media
effectively in order to minimize its potential harm.

● The main tenets of the plan include waiting until at least 18 months of age
to introduce any screen time. Then, until the age of 2, toddlers should only
be exposed to educational media programs and only as their caregivers
interact with them and help them understand what they are seeing.

● Then, between the ages of 2 and 5, kids shouldn’t have more than an hour a
day of screen time, and it’s best if parents or caregivers watch with them. After
that, screen time should not interfere with other beneficial activities, such as
playing outside, sleeping, or eating. And parents should designate screen-free
times, such as dinner, and screen-free environments, such as the bedroom.

● One of the reasons why the AAP suggests no screen time for babies and
toddlers up to about 18 months is because kids that young don’t seem
to understand what they are seeing, even if it’s pretty close to what
they would experience in real life. For example, watching a video of
someone talking to the infant has little to no effect on his or her language
development. But interacting with a live person has a big effect and is
critical for the baby to learn to speak.

● Research on babies under 2 exposed to TV has shown only negative effects on language and executive function development. That's thought to be largely because most of the content is programmed for adults; since kids this young don't really understand what they are seeing, any TV is treated pretty much as background noise by their little brains.

● And background audiovisual noise is detrimental to infant and toddler development. They begin to tune out this sensory stimulation, instead of working to process it. What's more, interactions between parent and child
are much richer when there is no TV on in the background, since the
parent is more engaged with and attentive to the child.

● Both in terms of the quality and the number of words spoken, talking
by the parent is more effective as a learning tool when it’s not set against
the backdrop of a TV. But when parents and infants or toddlers watch
age-appropriate shows together, the quality of the vocabulary spoken after
co-viewing actually increases.

According to Common Sense Media, 98% of homes in the United States with kids aged 0 to 8 now have mobile devices on which they can watch videos or play games. The proportion of kids in this age group with their own tablet exploded from less than 1% in 2011 to 42% in 2017.

● You might think that because technology is expensive, increased screen time is a bigger problem in families with higher incomes. But the opposite
is true: Kids in wealthier families and those with more-educated parents
spend on average an hour less time using media than their less-privileged
counterparts. And alarmingly, this gap is widening.

● While the overall time during which kids use media hasn’t changed much,
remaining steady at a little more than 2 hours a day, the proportion of
screen time that kids aged 0 to 8 spend on a mobile device increased from
4% in 2011 to 35% only 6 years later. The actual number of minutes on
these devices has increased almost tenfold, from 5 minutes in 2011 to 48
minutes in 2017.

● So while kids’ total passive viewing time might not have dramatically
increased, kids are spending much more of that time looking at handheld
devices. This shift has implications both in terms of the physical
differences of the viewing experience (for example, a screen held closer to
the eyes might affect the development of the visual system) and in terms of
content, since users of handheld devices have more control over the media,
are able to switch between videos and games, don’t have to sit through


long periods of ads, etc. This means that they might find the experience
more compelling or more rewarding, making it more difficult to put the
screen away.

● Ophthalmologists worry that this increased amount of time spent staring at handheld devices is ruining kids' eyes, and the internet abounds with
cautionary tales. But recent research suggests that mobile screen use seems
to have little or no measurable impact on the development of childhood
myopia, or nearsightedness, but that spending time outdoors has a
protective effect. If screen time is taking the place of outdoor time, then
kids will be at increased risk of having eye problems.

Ophthalmologists suggest that for kids and adults alike, the 20/20/20 rule should be applied whenever we're using screens:
every 20 minutes, take a 20-second break to look at something
that is at least 20 feet away.

Negative Consequences of Media Use

● Screens themselves might not be dangerous, but if watching them comes at the cost of less time outside, they might negatively affect children's
health. Less outdoor time may mean less overall physical exercise, which
can lead to obesity. Indeed, the relationship between childhood obesity
and screen use has been well established, with randomized, controlled
studies demonstrating that the relationship is causal. These studies have
shown that reducing screen time also leads to reductions in body mass
index gains; that is, kids who reduce their screen time also lower their risk
of developing obesity.

● But it turns out that reducing screen time doesn't necessarily curb obesity by boosting physical activity.
Studies measuring physical activity using accelerometers have found little
to no change in activity with media-use reduction.

● These studies have found, though, that media-use reduction leads to the
consumption of fewer calories. Kids who spend more time in front of
screens also eat fewer fruits and vegetables and more energy-dense snacks.
Eating while watching TV or videos on a tablet is common in many
households and may be partially responsible for the increased incidence of
childhood obesity.

● Two other factors might account for the causal relationship between
increases in body mass index and media use: food advertising, which leads
kids to make poor dietary choices; and sleep deprivation, which changes
their appetite-regulating hormone levels and leaves them craving less-
nutritious foods.

● Sleep deprivation is a problem that increased screen time—and mobile screens in particular, as they follow the children into their bedrooms—
seems to have exacerbated. Poor-quality sleep can cause learning and
memory problems, emotional dysregulation, and more risk-taking
behaviors. Kids are also more prone to depression when they’re not getting
enough sleep, especially during the teenage years.


● Increased risk for anxiety and depression has also been attributed to
heavier media use in children.* However, this is just a correlation and
therefore does not necessarily mean that the increased rates were caused by
the media use.

● At the same time, digital media can actually be used to help kids with
anxiety and depression. It can also be used to help bullying victims find
support. Social media can spread word of support groups, help lines, chat forums, and other resources designed to help kids who are being bullied. So
if they do find themselves on the receiving end of abusive behavior online,
chances are that they will stumble across or easily find help.

Media Multitasking

● The key is to moderate and examine kids’ digital media use, but far too
often, parents, teachers, and caregivers don’t pay attention to usage. But
in addition to monitoring the amount and quality of screen time, parents
and caregivers should also be made aware that digital media lends itself to
multitasking, and many young people, just like their parents, try to do more
than one thing at a time: watching TV while doing homework, for example.

According to a Common Sense Media study published in 2015, 26% of teens report spending more than 8 hours watching or
interacting with screens a day, 31% spend 4 to 8 hours a day,
and 43% spend fewer than 4 hours a day.

● But humans are not built to multitask in the truest sense of the word.
We can’t focus our full attention on many activities at once. Instead, we

* Kids are very prone to making social comparisons, and modern digital media
abound with images and videos of attractive, wealthy, successful people and
can engender negative feelings when a child compares his or her lot with theirs.
Social media apps exacerbate this effect, as kids can more directly compare
themselves in terms of numbers of likes, friends, and other characteristics.

60
quickly switch focus between tasks, and we pay a cost. It can be pretty
minimal, just a matter of shifting attention, or it can be detrimental if we
can’t suppress intrusive thoughts generated by the previous activity.

● Doing homework while engaging with digital media also makes the
homework take longer. Checking social media or text messaging more
often is correlated with a lower grade point average.

● When it comes to media multitasking,* evidence is growing that heavy media multitaskers are not as good at maintaining and manipulating
information in their mind—what we call working memory—as their
lighter-media-using counterparts. Heavy media multitaskers are less
efficient and seem to have more trouble sustaining goal-directed attention
compared with low media multitaskers. Heavy media multitaskers also show deficits in long-term memory.

● Perhaps more troubling is evidence that people who multitask with media
often show some personality traits that might make it difficult for them to
succeed. They tend to be more impulsive, seek sensation or take risks, and
suffer from social anxiety and depression.

● They are also less likely to endorse a growth mindset; that is, they are more
likely to report believing that traits like intelligence or mathematical ability
are fixed and can’t be improved by trying harder or working more. This
consequence, if indeed it holds true, could be detrimental to academic and
other measures of success.

A Silver Lining

● In addition to enabling kids who are bullied or have other problems to find support online, another area where social media seems particularly
useful is in simplifying and encouraging political engagement. And here is

* The majority of research suggests that people who multitask a lot using media
devices are either no better at multitasking or maybe even a bit worse at it than
people who are less prone to subjecting themselves to more than one medium
at a time.


where teens in particular seem to benefit from spending time interacting with apps.

● Teens who become civically engaged in adolescence carry that sense of duty with them throughout adulthood, becoming more likely to volunteer
and vote. Because social media can help young people find and connect
with organizations and individuals who share their values, it can help
mobilize and motivate teens. It might be especially helpful for kids who
are marginalized or at risk. Feeling more empowered and taking active
steps to improve one’s community and opportunities can help kids who are
struggling build self-esteem and ultimately improve well-being.

● Social media also provides a platform for engaged young people to have their voices heard. With the advent of virtual reality technology,
digital media might soon become an even more powerful tool to help
young people navigate difficult situations. Promising developments
include psychological and neurocognitive assessments, psychotherapy,
rehabilitation, pain management, and the prevention and treatment of
eating disorders. Virtual reality will also improve training in domains such
as communication, social skills, and vocational readiness.

Questions to Consider

1 How do we know whether screen time is good or bad?

2 What are the physical costs of screen time?

3 How might kids' brains be affected by screen time differently than adult brains?

Lesson 8 Video Games and Violence


The relationship between video games and violence is a difficult topic to study. While something we spend
a lot of time doing—such as playing video games—can
influence our behavior, there are many other contributing factors, and it's unlikely that any single experience drives
a whole series of actions.

Studies on Violent Media and Aggression

● In 2005, the American Psychological Association (APA) published a resolution on violence in video games and interactive media in which they
advocated for the reduction of violence in video games and interactive
media marketed to children and young adults.

Sometimes, in the same breath that they claim violent games are
of no concern, game proponents argue that games can even make
people better. And indeed, there is some evidence that prosocial
games can lead to more prosocial behavior.

● Drawing on decades of social science research, they cited 5 studies that they
concluded show that exposure to violent media increases the incidence
of aggressive behavior and angry thoughts and feelings, decreases helpful
behavior, and increases physiological arousal. Four out of 5 of the studies
were coauthored by Craig Anderson, a longtime researcher of violent
video games, and one of his coauthors, Brad Bushman, has also published
extensively on the topic.

● In 2014, Tobias Greitemeyer and Dirk Mügge published the results of a meta-analysis in the Personality and Social Psychology Bulletin, looking at data
from 98 studies that included almost 37,000 participants. They concluded
that there were significant associations between video game content and
social outcomes: Violent video games were found to increase aggression and

aggression-related variables, while prosocial video games had the opposite
effect. Their meta-analysis found these effects to be reliable whether the
studies were experimental, correlational, or longitudinal.

Several families of the victims of school shootings have attempted to sue video game makers, asking them to take
part of the blame. But in 2011, the US Supreme Court, in the
case of Brown v. Entertainment Merchants Association, came
down on the side of the video game industry, citing the First
Amendment as protecting the rights of violent speech, even in
the case of minors.

● In the meantime, researchers like Christopher Ferguson have been publishing studies that have failed to link violent video games with
aggressive behavior and have been criticizing the work of Anderson,


Bushman, and colleagues. Ferguson has argued that since the 1950s, the
correlation between media violence and homicides has broken down, and while America's youth remain obsessed with video games, rates of violence are at 40-year lows.

● In their 2014 meta-analysis, Greitemeyer and Mügge specifically compared studies authored by Ferguson with those authored by Bushman
and also with studies in which neither author was involved. They discovered
that studies by Bushman and Anderson, which totaled 8,500 participants, found a small to medium effect of violent video game exposure on social outcomes. These results were found to be in line with the studies published by other groups, which included 23,000 participants. Those authored by Ferguson, with 2,400 participants, found no significant effect.

● Why might one set of authors find such reliable results when another
group fails to find them? This discrepancy can happen for a number of
reasons, including the sensitivity and type of social outcome measures and
the control group.

● If you compare kids playing video games with kids doing nothing, you’ll
find an effect. That’s called a passive control group. The best control group
might be one in which the kids either watch the game but don’t play it or
play a nonviolent video game. And in those cases, the effect is often still
there. But if you have a nonviolent video game that is very frustrating,
leaving the players angry, you might find that they’re just as aggressive as
kids playing a violent video game.

● So it’s not always the content of the game that we need to consider, but
how it makes the children feel. And whether it’s good or bad, there’s plenty
of evidence that video game playing manipulates kids’ emotions.

● In 2015, the APA followed up on their 2005 resolution by publishing a report that included much of the research from the previous 10 years. Four
meta-analyses of more than 150 studies and reports published up until
2009 are described in the introduction. Three of these meta-analyses were
coauthored by Ferguson. And while the APA report acknowledges that the
authors’ interpretations of these results varied considerably, they conclude
that “all four meta-analyses reported an adverse effect of violent video
game use on aggressive outcomes.”

● It’s no wonder, then, that the question of whether exposure to violent
video games leads inevitably to, or even nudges people toward, violent acts
remains open.

● In the 2015 report, the APA describes an additional 170 studies published
since 2009 that attempted to address this question. Taking these new
research findings into account, the task force concluded that the link
between aggressive behavior and violent video game exposure is not only well
established but also well studied, with 14 studies reporting significant effects.

● Twelve out of 14 studies found positive correlations, and the samples included older kids, adolescents, and young adults. And now a few
longitudinal studies, and some using naturalistic methods, have been
published that demonstrate that the effects of violent video games can
bleed out of the lab setting.

Theories on How Exposure Influences Behavior

● The most basic and perhaps oldest theory of how exposure influences
behavior is based on a series of studies conducted in the 1960s by social
psychologist Albert Bandura, widely considered one of the most influential
and greatest psychologists of all time. Bandura’s experiments were designed
to test a theory that kids learn behaviors through observation and imitation.

What was influential and unique about Bandura's social learning theory was the idea that kids can learn not just
through rewards and punishment—the central tenets of
behavioral psychology, the dominant scientific view at the
time—but also by observation, imitation, and modeling.

● If kids are prone to imitating the behaviors that they observe—as Bandura's experiments showed—then it's likely that watching violence
and even behaving violently in a virtual world might translate into more
physical aggression in real life. To explain this finding, Bandura proposed


his social learning theory, which suggests that people can learn through
observation and imitation, even without direct reinforcement.

● He pointed out that learning occurs in a social context and that the more
we relate to the teacher, the more easily we learn. In particular, Bandura
proposed this theory to account for the fact that humans and other animals
can produce new behaviors seemingly spontaneously, without having to go
through a careful and slow process of shaping and reinforcement.

● While previous learning theories adequately explained how we and other animals can learn by seeking rewards and avoiding punishments, they
failed to provide a reason for why we might pick up a novel behavior just
by watching someone else do it.

● Bandura suggests that learning is a cognitive process, not just a behavioral one, meaning that we need to first pay attention to the relevant
information, then retain it or remember it, and finally be capable of
producing the behavior. But we also need motivation; if we don’t feel like
doing something, we won’t.

● One of the arguments against the idea that playing violent video games can
directly cause us to behave more violently highlights the fact that watching
someone do something in person is much more powerful than watching
it on TV. The screen tells us that what we’re watching is not real. But can
kids make the same distinction?

● In 1963, Bandura conducted a series of follow-up experiments to test this
exact question and concluded that kids could not differentiate between
live, video, and cartoon abuse and acted more aggressively after observing
each of these 3 cases. The kids studied were aged 3 to 6. Maybe that's a
time during which observation is a powerful mode of learning, but maybe
as we get older, we're less influenced by what we see others do. So maybe
preschoolers should avoid violent video games, but older kids and adults
don't need to.

● But there are a few more theoretical frameworks that could explain how
playing violent video games might lead to more frequent aggressive acts,
even in older kids and adults.

● The next one to come on the scene chronologically was the excitation-
transfer theory, proposed by Dolf Zillmann in 1983. Zillmann was
building his theory on the assumption that emotions are accompanied
by physiological arousal, or excitation, but that the arousal is nonspecific.
In other words, you might find that your heart is racing in a number of
different situations: when you’re angry because someone cut you off or
because your favorite character in a TV show was just killed; or when
you’ve just come back from a run.

● In each case, you have the same physiological response: a racing heart. But
the causes differ: One is because something made you angry while the
other was because you were exerting yourself physically. And in the case
of anger, we can further separate the causes: because you were cut off or
because your favorite character was killed. Sometimes the physiological
reaction lasts longer than the emotion, such that your heart might still be
racing even though you don’t really feel angry anymore.

● Zillmann noted that this residual excitation, or arousal, can then influence
your behavior, and you might misattribute this change because you
didn’t realize that you were still in an excited state. There’s even a clichéd
metaphor to describe this bleeding of stress from one situation into
another: kicking the cat, or the dog.

● So maybe playing violent video games entails the danger of getting you
into an excited state, which, when paired with an activity in which you
might become aggressive, makes you more likely to metaphorically kick the
cat. And there have been studies where such behaviors have been observed.

● But Zillmann noted that the effects of arousal dissipate quickly. So maybe
the long-term effect of playing violent video games on behavior is pretty
negligible. That’s indeed an argument that is often levied against those
who caution us about the negative effects of video game playing. And to be
fair, the effect sizes in many of the studies are fairly small.

● But there’s another mechanism to consider in the short-term influence of


video games. In addition to causing changes in physiological arousal—
which really is not much different from, for example, playing a sport or
having an altercation with someone—games might also activate networks
of aggressive thoughts, emotions, and memories, as is suggested by the

69
Lesson 8 Video Games and Violence

cognitive-neoassociation theory of aggression, proposed by Leonard


Berkowitz in the 1990s.*

● In this theory, frustration and provocation in one situation lead to a
negative affect, which then becomes linked to other life experiences via
thoughts, memories, actions, and physiological responses. Then, when the
negative affect presents itself, it automatically triggers these associations
and activates the fight-or-flight nervous system response.

● The thinking is that violent video games pair frustration with acts
of aggression, and once that link is well established, individuals are
more likely to behave aggressively when frustrated in other situations.
Neuroscientists call this type of model spreading activation—as
associations laid down in one context create a network of linked thoughts,
emotions, and behaviors such that when one component is activated, it
automatically activates the rest of the network.
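To make the spreading-activation idea concrete, here is a minimal Python
sketch of activation rippling outward through a small associative network.
The network, its nodes, and every weight and threshold below are invented
for illustration; this is not a model from Berkowitz or the aggression
literature.

# A toy associative network: each node links to related nodes with a weight.
network = {
    "frustration": {"anger": 0.8, "racing heart": 0.6},
    "anger": {"aggressive act": 0.7},
    "racing heart": {"aggressive act": 0.3},
    "aggressive act": {},
}

def spread(source, activation=1.0, decay=0.5, threshold=0.1, levels=None):
    # Propagate activation from one node to its associates, weakening
    # with each hop and stopping once it falls below the threshold.
    if levels is None:
        levels = {}
    if activation < threshold:
        return levels
    levels[source] = max(levels.get(source, 0.0), activation)
    for neighbor, weight in network[source].items():
        spread(neighbor, activation * weight * decay, decay, threshold, levels)
    return levels

print(spread("frustration"))
# Activating "frustration" partially activates the linked thoughts,
# feelings, and behaviors downstream of it—the network fires as a unit.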

The General Aggression Model

● Perhaps the most prolific and vocal proponents of the positive relationship
between violent video game playing and aggressive behavior, Anderson
and Bushman have suggested a general aggression model that incorporates
aspects of all 3 previously outlined theories and that also distinguishes the
short- from the long-term effects of media exposure.

● In their model, individuals develop scripts and schemas about aggression
and store these in memory. Then, when faced with a new situation, they
assess the extent to which elements of it are in line with their aggression-
related knowledge structures and adapt their behavior accordingly.

● Already in 2002, Bushman and Anderson felt that the case for violent
media leading to more violence had been adequately made. So they
advocated for a move toward understanding why that might be the case
and testing their general aggression model. And a good test of the model is
to assess how people interpret an ambiguous situation: Do they read it as
aggressive or nonthreatening?

● In the short term, the model suggests that a person's internal state can be
impacted by exposure to violent media: priming aggressive cognitions like
scripts and schemas (that is, the expectation that in this situation a
person will be hostile and react violently), increasing physiological
arousal, and triggering angry feelings.

● But it also provides for long-term effects of repeated exposure to violent
media. Over time, people's schemas and scripts can be shaped by the
media, such that they deem the world a hostile and dangerous place. They
will learn to perceive, interpret, judge, and respond to specific situations in
ways that are in line with a view of the world as hostile and scary.

● With video games, they also learn that aggressive acts are effective in
neutralizing the situation or are rewarded in some way. In the extreme
case, a person might be consistently activating these knowledge structures,
such that they become aggressive under many different circumstances.

● Is there any evidence that such knowledge structures are indeed developed and
that they can influence how a person interprets an ambiguous situation?

● Studies have suggested that at least in the short term, violent video game
playing does influence how a person sees the world. Specifically, research
has shown that violent video games can increase aggressive thoughts,
feelings, and to some extent behaviors in adults. But most of these effects
are short-lived and are thought to be the result of activating hostility
association networks and increasing physiological arousal.

● Adults seem less affected in the long term, but that’s not the case for
children, who show the opposite pattern. While we do see short-term
effects in kids, these effects aren’t as robust in lab settings as they are
for adults. But when it comes to long-term effects, or the results of
longitudinal studies, kids do seem to be more affected.


Questions to Consider

1 How can we study the effects of video games on violent behavior?

2 How might violent video games incite violence?

3 How might violent video games prevent violence?

Lesson 9 Is Digital Technology Ruining Sleep?


Technologies like smartphones and tablets have had a
hand in making us feel as though we're getting less
sleep, and they are likely affecting the quality of our
nightly z's.* But people are also turning to technology
to enhance their sleep—in ways that are becoming more
and more innovative.

Stages of Sleep

A wealth of evidence is showing that of the 3 pillars of
health—sleep, diet, and exercise—sleep is the most important
in terms of your optimal cognitive and physical function.

● Our eyes have an outsize influence on how our brains fall and stay
asleep. There are receptors in your retinas that send their projections
directly to parts of the brain that regulate the hormones that make you
feel sleepy. They bypass your visual cortex and track the slow changes
in ambient light that characterize the setting and rising of the Sun.
And if you trick these receptors into thinking it's not yet dark, you'll
also find it more difficult to fall asleep.

Sleep is not a time during which your brain is resting. It is a time for
cycling through different stages of brain activity, some quite distinct
from our waking minds and some quite similar, but each with an
important role to play in keeping our brains functioning at their best.

● We are the only species on Earth that has managed to override the strong
signal of the Sun by inventing ways to bring light into our homes even
when our part of the world has turned its back on our solar system’s core.

* The impact might be greatest on young people, who have more trouble
understanding the importance of sleep and regulating their sleep duration.

● Falling asleep doesn’t just happen because we’re tired. There are external
signals that it’s time for sleep, such as the slow dimming of environmental
light, as well as internal signals, such as the buildup of a metabolic by-
product of your nervous system’s activity called adenosine. But the actual
falling-asleep process is a dynamic interplay between the activity of different
brain regions and the neurotransmitters and hormones they modulate.

● Falling asleep happens when cells in 2 parts of our brain, the ventrolateral
preoptic nucleus of the hypothalamus and the parafacial zone in the
brain stem, increase their activity and send us into a different state of
consciousness, one in which the thalamus, at the geographical center of the
brain, blocks access to the outside world.

● When you're awake, the thalamus acts as a sensory gatekeeper, passing
information about the external environment from your senses to various
parts of your brain for processing and interpretation. We tend to think of
our minds as having a direct link to the physical world outside, but really
our brains are busily constructing an elaborate illusion of reality for us at
every waking moment.

● After all, we see by turning photons of light into voltage changes across cell
membranes in a 2-dimensional retina. Yet somehow we recognize objects,
interpret color, and perceive depth.

● So the first thing our brains need to do in order to sleep is to shut down
this illusion and prevent our minds from being distracted by what’s
happening around us and then instead focus on the task of generating the
rhythms of sleep.

● We then begin the slow descent into ever-deeper stages of sleep, each of
which is characterized by distinct patterns of neural activity. With respect
to brain waves, the first stages of sleep look like slightly lazier versions
of resting wakefulness, with stage 2 distinguished from stage 1 by the
presence of 2 specific markers, called K complexes and sleep spindles.*

* These are so named because of their appearance in electroencephalograms;
they resemble spindles when plotted.


● The deepest stages of sleep are called slow-wave sleep, because in aggregate
the brain waves that represent the activity of thousands or even millions of
cells look like synchronized slow waves.

● After your brain passes through the deepest stage, it begins its ascent back
through the lighter stages and, if all is going well, enters rapid-eye-movement
(REM) sleep instead of waking up. REM sleep is also called paradoxical
sleep because the brain’s activity looks a lot like it does when we’re awake,
but our bodies are paralyzed, so the only movement, in a healthy individual,
is that of our eyes, which dart back and forth in synchrony.

● In non-REM sleep, your brain activity looks like slow, large-amplitude
waves traveling from the front of the brain to the back. And this brain state
comes with many benefits.

● Arguably the most studied of sleep's benefits comes in terms of learning
and remembering. Before an active day of learning, a good night's sleep
can influence our ability to lay down new memories.

● During the second stage of sleep, called light non-REM sleep, our brain waves
are decorated with short bursts of a signature electrical signal: a sleep spindle.
They are generated by the reticular nucleus in the thalamus, a structure that
plays a significant role in generating our sleep-wake cycle and modulating
consciousness. It seems that sleep spindles aid in the freeing up of space for
what we need to learn next. The more of them you generate through the night,
the better able you will be to learn new things the following day.

● In the next stages, deep non-REM sleep occurs, characterized by those
slow, synchronized waves traveling from the front to the back of the brain.

● Sleep spindles clear the hippocampus, whereas deep sleep ensures
proper storage of that information elsewhere; both stages are important for
learning and remembering.

● Sleep spindles are also involved in an important but overlooked feature of
memory: forgetting. While ensuring that newly formed memories are stored,
the brain also needs to toss out irrelevant or even potentially harmful ones.

● The last stage, which we see more and more of as the night progresses, is
REM sleep, which is when we're most likely to experience vivid dreams. It's
also when neurotransmitters like serotonin and norepinephrine are at their
lowest concentrations, while acetylcholine levels are high.

● These neurotransmitters serve a number of different functions in the brain,
but generally norepinephrine is involved in keeping you vigilant, sending you
into a fight-or-flight response if you’re threatened, whereas serotonin helps
you regulate your mood. Low levels of these neurochemicals during sleep
ensure that you’re not retraumatizing yourself if you dream about something
significant or bad that happened to you the day before. Acetylcholine,
however, is important for sustaining attention and plays a role in memory.

Sleep plays many roles in a healthy person's bodily and
cognitive functions, and sleep deprivation has dangerous
consequences. If technology is preventing us from reaping all
the cognitive and emotional benefits of deep and uninterrupted
sleep, then it’s changing how we think.

Blue Light from Screens

● Given that our brains are looking for signals from the environment that
the Sun is setting in order to begin the cascade of events that puts us into
sleep cycles, artificial light can block these signals very effectively.

● You might have heard that blue light from screens is particularly nefarious.
And indeed it is, largely because of how our vision evolved. We can trace
the emergence of vision to our ancient aquatic ancestors, who evolved the
ability to see light in a certain part of the electromagnetic spectrum—
specifically, 380 to 720 nanometers. And we’re highly sensitive to light
in the blue part of that sliver of the spectrum; in other words, melatonin
suppression is most potent when we are exposed to light whose wavelengths
are on the shorter side, closer to 400 nanometers than to 700.

● And that seems to be because the ocean filters out the longer-wavelength
light, letting through shorter wavelengths, which our brains interpret as
blue. For a fish, the setting of the Sun is most easily tracked by the amount

of short-wavelength light that reaches its eyeball. So we have special
receptors in our retinas that contain a photopigment called melanopsin,
which is related to melatonin production. As it registers the gradual decline
of blue light, the suprachiasmatic nucleus of the hypothalamus increases
levels of melatonin, signaling to the rest of the brain that it’s time to sleep.

● While we can thank our fishy ancestors for our disproportionate sensitivity
to environmental changes in blue light concentrations, we can blame the
inventors of light-emitting diode (LED) light for its outsize effects on our
ability to fall asleep. Blue LEDs use much less energy than incandescent
light bulbs and have longer life spans, which is great for our pocketbooks
and our environment. But they’re not so good for our health if they are
disrupting our sleep. Light from blue LEDs has a powerful melatonin-
suppression effect, and it just so happens that many of our screens, from
smartphones to iPads and TVs, are enriched with blue LED light.

● So sleep studies have shown that compared with reading a printed book
illuminated by an incandescent light bulb for a few hours before bed,
reading on an iPad delays the rising of melatonin levels—the signal for
sleep—by up to 3 hours. Peak melatonin levels aren’t reached until the
early-morning hours, compared with around midnight for traditional-book
readers. That’s the equivalent of crossing 3 time zones.

● Research shows that tablet readers also lose significant REM time; they
feel less rested and sleepier the following day. And in the days following,
when they no longer use an iPad before bed, they still show a significant
lag in melatonin rise.

● So technology, in the form of screens lit by blue-light LEDs, has a direct
effect on sleep hormone levels and the quality of sleep you get if you're
exposed to it in the evening. That’s setting aside the arousing aspect
of screen interaction, which is amplified if you’re watching disturbing
content or engaging in activities that are hard to wind down from, such as
checking your work email or playing interactive games.

Solving the Problem of Falling Asleep

● While interacting with technology can delay or disrupt your sleep, there
are ways that you can use technology to hack it.

● There is now a cottage industry of apps and tools designed to help you
undo the damage of melatonin-suppressing blue light. The first tool is just
the opposite of blue light: Night-lights that emit primarily red light do
not seem to have the same effect on melatonin suppression and might be a
great alternative.

● But there are other options that use visual stimulation to put you to sleep.
Perhaps the most extreme and controversial form of sensory-stimulation
technology is called audiovisual stimulation (AVS), which is designed to
entrain your brain to synchronize its activity and settle into sleeplike
brain wave states.

● Brain wave entrainment can either be open- or closed-loop. In other words,
you can simply put on a pair of goggles that presents auditory and visual
images at a certain frequency and assume that it gets your brain waves
entrained, or you can include a measure of brain wave activity in the form
of an electroencephalogram (EEG) that feeds into the system. The EEG
data are then used to modulate the audiovisual stimulation, giving it real-
time neurofeedback.

● When it comes to AVS, the open-loop devices have a series of set programs
that deliver sensory pulses of a particular frequency. EEG studies show
that the right type of stimulation pretty reliably generates a response in
sensory regions and the thalamus. So if you can trigger these responses to
come at a certain frequency, you can presumably trick the thalamocortical
loops to fire in sync at the frequency that we observe when you're falling
asleep. In 20- to 60-minute sessions, the AVS system is fairly good at
generating brain wave activity in the desired frequency band.
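As a rough illustration of the closed-loop version—EEG measurements
feeding back into the stimulation—here is a Python sketch. The sampling
rate, target frequency, and adjustment rule are all hypothetical stand-ins;
commercial devices are considerably more sophisticated.

import numpy as np

def dominant_frequency(eeg, fs=256):
    # Estimate the strongest frequency in one window of EEG samples.
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component

def next_pulse_rate(eeg, target_hz=10.0, step=0.5):
    # Closed loop: nudge the pulse rate from the brain's current dominant
    # frequency toward the target band, one small step per window.
    current = dominant_frequency(eeg)
    if current > target_hz:
        return max(current - step, target_hz)
    return min(current + step, target_hz)

# Demo with a synthetic 12 Hz oscillation standing in for real EEG.
t = np.arange(256) / 256
print(next_pulse_rate(np.sin(2 * np.pi * 12 * t)))  # 11.5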

● Unfortunately, studies showing the effectiveness of AVS devices at putting
people to sleep are few and not particularly well controlled. They use
self-reported measures of sleep, which can be unreliable. The number of
hours you spend in bed might not reflect the quality of your sleep. And
things get worse when considering the evidence that even direct electrical
stimulation of the brain can fail to cause the type of entrainment that
would improve sleep quality.

● Whereas AVS interventions are thought to entrain the brain by stimulating
the sensory systems with specific frequencies, there are also tools for using
electrical stimulation to achieve the same ultimate effect. Transcranial
electrical stimulation (TES) is not like electroconvulsive therapy (ECT),
although both apply electricity through leads on the scalp. TES uses very
mild electrical stimulation while ECT is designed to induce seizure
activity, so the applied voltage is much, much stronger. Nevertheless,
the use of TES to induce sleep remains somewhat controversial.

Ultimately, technology can be both harmful and helpful in terms
of sleep, but the first step toward setting up your ideal sleep
environment is to recognize which of your habits fall into the
former category and which into the latter.

● While more advances in the use of brain stimulation, whether direct or
through sensory channels, to aid sleep are likely coming down the pike, the
current offerings aren't life-altering just yet.

● But there are sleep-improvement devices coming to market to consider,
especially those in 3 categories: sleep trackers, smart mattresses, and
sensory devices like noise machines. Such devices hold some promise and
will be innovations to keep an eye on as the tech improves.

Questions to Consider

1 How is technology affecting our sleep?

2 What does it mean to have enough good-quality sleep?

3 What are the effects of sleep deprivation?

Lesson 10 How “Dr. Google” Is Changing Medicine

From email interactions with your physician to drug
therapies or interventions crafted to your genetic
profile, technological advances are making personalized
medicine possible. Technological advances are also
improving early detection of disease processes, even at
the molecular level. And once we do get sick,
personalized medicine is improving diagnosis, choice
of treatment, and management of care. However, as
medicine becomes more personalized, traditional
models of funding have to be reconsidered, and the
gap in health care between those with means and those
without will widen. And cost is hardly the only potential
problem that technology is bringing to medicine.

Problems That Technology Brings to Medicine

● One big problem that technology is bringing to medicine is the fact that
finding information about any number of health problems has never been
easier. With “Dr. Google” at our fingertips, we’re all prone to suffering
from medical student syndrome—the disease in which you read about
symptoms and suddenly start suffering from them.

Personalized medicine can help us figure out our risk profiles for
certain diseases based on our genes and history. Preventative
treatments and recommendations can help us tailor our diet,
exercise, sleep, and other lifestyle factors to our genome,
microbiome, and other personal information.

● Haider Warraich, a well-respected cardiologist and author, wrote an op-ed
in The New York Times in 2018 called “Dr. Google Is a Liar,” in which he
sounds the alarm about googling medical issues. Fake news threatens
our democracy, he says, but fake medical information threatens our lives.

● Warraich cited a 2017 study that found that people who turn to alternative
treatments for cancer, such as diets, herbs, and supplements, and eschew
traditional treatments are 2.5 times more likely to die than those who stick
to doctors’ orders.

● Almost as soon as the internet became widely available, the public began
to use it to learn about their health. A study published in 1999 found
that medical information was among the most-sought-after treasure on
the fledgling internet. But back then, it was difficult to find and parse
information, and it was clear that medical degrees had their usefulness. In
fact, access to all of that information at least initially increased patients’
trust in their doctors, as reported in a 2001 Health Information National
Trends Survey.

● Over the past few decades, however, the information has become
much more accessible, both in terms of technology and digestibility.
Large hospitals like the Mayo Clinic now have their own vast and well-
maintained websites that are dedicated to the dissemination of medical
information. And internet users are much more diligent in checking
sources and maintaining skepticism when it’s warranted. Yet our ability to
access nearly endless medical information sometimes has made us feel as
though we know just as much as our physicians. But we don’t.

● By 2013, 59% of US adults had looked for health information online
and 35% of them used the internet to self-diagnose and check symptoms.
But a study published in 2015 found that only 34% of symptom-checker
algorithms online provided an accurate diagnosis. Only about a third of “Dr.
Google’s” patients sought a second opinion from an actual physician, but
when they did, the majority—about 60%—were told that they were wrong.

● One report published by Bupa, an international health-care company,
found that 47% of online symptom searches resulted in at least one hit
that included the dreaded word cancer. There's now even a term for people
who suffer from overuse of internet health information: cyberchondriacs.
And the viral nature of information sharing via the internet can greatly
expand the harms that alternative, unregulated "treatments" can wreak on
desperate patients.

● Beyond causing stress and anxiety where it might not be warranted, “Dr.
Google” also can erode the relationship between doctor and patient. In
the 1960s, most Americans—73%—felt confident that their doctors knew
what they were doing. By 2012, that number had dropped to 34%, as
published in The New England Journal of Medicine. And this drop did not
go unnoticed: A 2017 survey found that 87% of physicians felt that their
patients trusted them less than they did a decade earlier.

● Access to health information online erodes trust by giving patients the
impetus to second-guess their physicians' diagnoses and advice. They
can also become excited about a treatment option or feel strongly about a
diagnosis because of online fanfare, and when this view isn’t in line with
that of the physician, conflicts can occur.


● If a patient doesn’t trust his or her doctor, then he or she will be less likely
to comply with medical advice and then quick to blame his or her health-
care professional when things don’t go as expected. There’s even data to
show that patients who trust their doctors more also have better health
outcomes. In other words, the erosion of trust can keep you sicker longer.

● Searching online has another potential pitfall: the loss of privacy, though
arguably simple searches are less problematic when it comes to this issue
than many of the apps people now use to track their health.

● There are apps that help women track fertility and become pregnant, help
people with diabetes manage their glucose levels, and track blood pressure
and heart rate. The latest version of the Apple Watch even has a built-in
electrocardiogram (ECG) app. And the vast majority of these apps are not subject to
the Health Insurance Portability and Accountability Act (HIPAA), a US
law that was enacted specifically to address the threat of technology on the
privacy of our health information.

● In accordance with HIPAA, if you tell your doctor that you’ve been feeling
a little depressed, he or she can’t tell your employer, for example. But if you
report your mood on an app that doesn’t interact with your health-care
team, the company that developed the app can sell your information to
some third party, including a health insurance company or a recruiter.

● Health insurance companies can then use that information to approve or
deny funding for interventions. Decisions are made all the time that affect
the lives of patients in the US in particular, though one can imagine that
in countries with national health-care programs, the risk is even greater.

How Technology Has Improved Medicine

● There are, of course, many more ways in which technology has improved
medicine. And diagnosis is one area where doctors working alongside
artificial intelligence (AI) can really make a difference. Computers are,
after all, really good at finding patterns. And that’s essentially what
radiologists and pathologists do.

● In a research paper from 2015, scientists trained a group of novices to
pick out cancerous cells in breast tissue slides. Without the help of any
physicians, the novices were shown cancerous and noncancerous tissue and
had to indicate which was which.

● After about a month of training, they showed 80% accuracy. That's
not bad, considering expert physicians are right about 96% of the time.
Amazingly, if the results from 2 novices were joined together, their
accuracy rose to 99%, besting highly trained experts. But even more
astonishing is the fact that the novices were pigeons. And they were much
better at diagnosis than computers at the time.
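Why does pooling 2 novices help so much? When observers' errors are
independent, averaging their judgments cancels much of the noise. The
Python simulation below makes that point with invented confidence scores
and noise levels; it is a statistical toy, not the scoring procedure used
in the pigeon study itself.

import random

def confidence(is_cancer, noise=0.5):
    # One observer's confidence that a slide shows cancer: the true
    # signal (0 or 1) plus that observer's own independent noise.
    return (1.0 if is_cancer else 0.0) + random.gauss(0, noise)

def correct_call(is_cancer, n_observers):
    # Average the observers' confidences; call "cancer" above 0.5.
    avg = sum(confidence(is_cancer) for _ in range(n_observers)) / n_observers
    return (avg > 0.5) == is_cancer

random.seed(0)
trials = 20_000
for n in (1, 2, 4):
    hits = sum(correct_call(bool(t % 2), n) for t in range(trials))
    print(n, "observer(s):", round(hits / trials, 3))
# Accuracy climbs steadily as independent judgments are averaged together.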

● Pigeons are blessed with excellent vision—better acuity than ours, in
fact—but it's unlikely that they'll be replacing radiologists anytime soon.
And while they were better than computers at the time, computers are
catching up as machine learning improves.

● Perhaps some of the most encouraging trends in medicine are computer-
doctor partnerships, where an algorithm does the brute-force work of
prescreening slides or images, highlighting areas that are of interest, and
then the human physician comes in and quickly assesses just those spots.
In one study of cancer detection, this human-machine partnership boosted
diagnostic accuracy to 99.5%.
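A minimal sketch of that division of labor might look like the following,
where the suspicion scores and thresholds are invented for illustration:
the algorithm clears obvious negatives, flags obvious positives, and
routes only the ambiguous middle band to the physician.

def triage(regions, low=0.10, high=0.90):
    # Sort prescreened regions into three bins based on the model's
    # suspicion score; only the ambiguous band goes to the pathologist.
    bins = {"clear": [], "flag": [], "human review": []}
    for region_id, score in regions:
        if score < low:
            bins["clear"].append(region_id)
        elif score > high:
            bins["flag"].append(region_id)
        else:
            bins["human review"].append(region_id)
    return bins

# Hypothetical (region, suspicion score) pairs from a slide prescreen.
slide = [("A", 0.02), ("B", 0.47), ("C", 0.95), ("D", 0.08)]
print(triage(slide))  # only region B needs the physician's time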

● But there are also case studies in which computers just haven’t measured
up to their potential. One such case is IBM’s Watson, which became
famous by beating 2 of the greatest champions of the game show Jeopardy!
in 2011 and winning a $1 million prize. Two years later, IBM announced
that Watson’s genius would be applied to aiding lung cancer treatment
decisions at Memorial Sloan Kettering Cancer Center in New York, among
other projects. By 2019, Watson became a cautionary tale of hubris and
hype, with several projects being discontinued despite the investment of
tens of millions of dollars.

● Part of the problem in training AI doctors is that the data are so poorly
managed in health care in general. If Watson doesn’t have access to
clean and thorough data sets, we can’t expect it to be much better than
a conglomeration of human physicians, so it won’t be replacing them
anytime soon.


● Another problem facing many machine learning algorithms is target
leakage: what happens when the training data set contains information
that won’t be available when the AI is applied in the real world. For
example, say you want to develop an AI that can accurately diagnose
patients. You have to restrict training data to information that will be
available before the diagnosis is made—not after, such as follow-up visits,
treatment response, and so on. So you’d have to clean up medical records
and only include information logged prior to the diagnosis. Otherwise,
when faced with a new patient, the AI won’t be accurate.
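One simple safeguard is to filter every training record by time stamp, as
in the Python sketch below. The record layout and field names are
hypothetical; the point is only that nothing dated after the diagnosis
survives into the training features.

from datetime import datetime

# A hypothetical patient record: a diagnosis date plus time-stamped notes.
record = {
    "diagnosis_date": datetime(2019, 3, 1),
    "events": [
        (datetime(2019, 1, 10), "persistent cough"),
        (datetime(2019, 2, 20), "chest x-ray ordered"),
        (datetime(2019, 4, 2), "follow-up: responding to treatment"),
    ],
}

def training_events(record):
    # Keep only information logged before the diagnosis was made; anything
    # later (follow-ups, treatment response) would leak the answer.
    cutoff = record["diagnosis_date"]
    return [note for when, note in record["events"] if when < cutoff]

print(training_events(record))
# ['persistent cough', 'chest x-ray ordered']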

● Whether it’s socialized medicine like the national health-care system in the
UK or private institutions as in the US, health-care information is often
dispersed among different providers and hospitals, with little or no cross
talk. Exchanging records is cumbersome and difficult, so doctors don’t
have a comprehensive view of a patient’s full medical history.

● Plus, patients are treated mostly for acute problems, especially in the US, where
regular doctors’ visits might be a cost that patients with high-deductible health
insurance or no health insurance are unable or unwilling to cover. So no one
gets a chance to evaluate the forest rather than just the trees.

● Here's where AI could potentially have a big impact. Gathering complete
medical histories on thousands or millions of people is a monumental task.
But if we could do it and then feed it into machine learning AI, we could
learn things about diseases that we would never discover otherwise.

● Even though it's necessary if we want to harness the true promise of AI or
share medical records easily so that we can start to see the forest, one of
the problems inherent in digitizing our medical information is that when
a doctor spends most of a visit typing and looking at a screen, the patient
feels as though he or she is not being heard, which can erode the doctor-
patient relationship.

● And electronic health record interfaces are notoriously clunky and
inefficient. But with the implementation of voice recognition and task
automation, these problems could become a thing of the past. Imagine
a physician walking into an exam room and telling the AI to bring up
the patient's record, take voice memos during the exam, and then run
symptoms through a diagnostic algorithm, without the doctor even
touching a keyboard or staring at the screen for too long.

● One of the biggest reasons why we see a primary care physician in the
first place is so that there’s at least a chance that someone is keeping a full
record of our medical history. But most of us don’t see the same physician
for decades, as was more common in the past.

● Surely, an automated way of taking blood pressure, managing cholesterol,


or evaluating whether it’s a cold or the flu is more efficient and prevents
visits to offices, where germs are spread. That way, when symptoms are
more complex or it’s time for an annual visit to take a look at the forest, the
doctor has the time to spend with the patient.

Questions to Consider

1 How is technology changing the doctor-patient relationship?

2 What are the downsides of personalized medicine?

3 What role might artificial intelligence play in health care?

Lesson 11 The Virtual Therapist

Today, it's easier than ever to find and talk to a mental
health professional from the comfort of your home.
And therapies that can transport patients into virtual
spaces, where experiences can be carefully curated and
personalized to help them overcome their psychological
issues, are growing in number and proving to be quite
effective.

Drug-Based Therapy for Mental Illness

● In the 1950s, psychiatry was forever changed by the discovery of drugs
that affect the mind. Before then, people with a psychotic illness had few
options and were often institutionalized for life.* Then, in the 1960s, the
era of drug-based therapy for mental illness began, and the myth of the
chemically imbalanced brain of the psychiatric patient spread like wildfire.

* Psychiatric institutions arose out of a desire to help patients, thanks in large
part to the work of activist Dorothea Dix, who successfully lobbied the US
government in the 1840s to open dedicated psychiatric hospitals to house
patients who were living in inhumane conditions, often in prisons where they
shared accommodations with violent criminals.

There is a large body of evidence documenting the stigma that's
still associated with a psychiatric diagnosis, not only for the
patient but also for family members.

● It makes intuitive sense to assume that mental illness results from too
much or too little of a neurochemical if a drug that enhances or blocks
that compound's effect leads to an abatement of the patient's most
worrisome symptoms.

● But there's a reason why many people find it difficult to comply with
their prescriptions: It takes time—sometimes several weeks or even a
month—before the psychiatric symptoms improve, even though the levels
of the neurotransmitter that the drug acts on are "balanced" almost
immediately. The negative side effects show up before the psychological
benefits become clear.

Because the pharmaceutical treatments we have for psychiatric
illness do not work for everyone with a given disorder and often
lose their efficacy over time, finding new ways of helping people
with psychological problems is of critical importance.

● This therapeutic lag is especially problematic for patients with depression
or generalized anxiety disorder. It can be difficult for a patient with
anxiety or depression to seek help in the first place, but add on the
therapeutic lag and it’s no wonder that these patients have a hard time
adhering to their treatment protocol.

● The chemical-imbalance view of mental illness is further called into
question by the fact that antipsychotic medications only treat some
symptoms of diseases like schizophrenia with no impact on other
symptoms: They might attenuate hallucinations and delusions, called
positive symptoms because they represent an increase in a particular
behavior, but have no effect on the negative symptoms (which represent
decreases in behavior), such as emotional blunting. And we’ve learned that
patients with schizophrenia don’t just have altered brain chemistry; there
are also anatomical and functional differences in their brains.

● For the most part, we don’t know exactly what causes depression and other
psychiatric disorders. But we do know that it’s a complex interplay between
neurotransmitters, brain development, neuronal growth, survival, and
psychological factors like exposure to trauma, rumination, and so on.

● It’s just not as simple as recalibrating brain chemistry and therefore curing
mental illness. The complexity of the relationship between neurochemistry
and psychiatric disorders is one of the reasons why we’ve been essentially
stuck with the same mediocre drugs for the past few decades.

Cognitive Behavioral Therapy

● The latest innovations in the treatment of mental illness have arguably
come not from drugs but from alternative therapies. For one thing,
neuroimaging and other tools have helped us see that certain talk therapies
can be just as effective, and leave just as measurable a trace on brain
function, as pharmaceutical treatments for disorders like anxiety and mild
to moderate depression.

● In fact, for patients with depression, cognitive behavior therapy (CBT)
is just as effective as antidepressant medications, and its benefits last
longer, enduring even after the therapy is completed. The effects of
medication don't continue once the patient stops taking them.*

* A study published in 2005 in the Archives of General Psychiatry found
that 76% of patients with moderate to severe depression relapsed after
discontinuing medications, compared with only about 30% of those who had
completed a course of CBT.


● Furthermore, when we look at brains before and after treatment, we see
similar short-term changes whether the patient was taking antidepressants
or doing CBT, and longer-lasting changes in the long term for patients
who did CBT compared with those on medication.*

● The long-term outcomes for CBT are often superior to those of a drug
regimen because once the patient stops taking the drug, the symptoms
often return. But CBT gives the patient cognitive tools that he or she can
apply when the next episode hits.

● Unfortunately, talk therapy is expensive, often not covered by insurance,
and takes time—both in terms of the work that the patient needs to do
and the time it takes to go see the therapist every week. Popping a pill
can feel much easier and is usually cheaper for the patient, at least in the
short term.

Virtual Therapy

● Technology is beginning to make therapy less onerous by providing ways
through which the therapist and the patient can communicate, even
allowing for teleconference sessions that eliminate commute times and
cancellations when one is sick.

● A meta-analysis from 2005 by Steven Hyler, Dinu Gangure, and Sarai
Batchelder suggests that when the couch is replaced by a screen, talk
therapy is just as effective. Although the number of studies comparing in-
person with telepsychiatry sessions remains relatively low, 14 studies thus
far have revealed no differences in objective assessments or reported patient
satisfaction but a significant difference in cost to the patient. In fact, the
data were so compelling that the paper predicted that online psychiatry
would replace the therapist’s couch in the near future. Yet virtual therapy,
while more popular, is by no means the norm in 2020.

* This finding was described in a paper by Robert DeRubeis, Greg Siegle, and
Steven Hollon published in Nature Reviews Neuroscience in 2008.

If the treatment outcomes are no different, and it’s more
convenient for both the therapist and the patient, and it’s
cheaper, it seems inevitable that virtual therapy is part of the
future of medicine.

● One of the first applications of virtual reality (VR) to psychotherapy
involved treating patients with specific phobias using an effective method
called extinction. Specific phobias can be developed when a person has a
very aversive encounter with the object of fear and that encounter induces
an irrational fear that the situation will repeat itself. For example, if a
4-year-old boy is bitten by a German shepherd, the boy learns from that
one experience that dogs are to be feared. And the fear can generalize to all
dogs, not just German shepherds.

● Fear is a very effective learning tool. It taps into powerful neural circuitry
that evolved to protect us from life-threatening situations. So it can shape
our behavior even with one exposure. If you lived through an experience
that set off your full-blown fight-or-flight nervous system and left you
physically and psychologically scarred, you don’t want to put yourself
in those circumstances again in case the outcome is more final. So your
brain is very good at tracking things that might lead to such an aversive
event and triggering your nervous system to behave such that you’re out of
harm’s way.

● People with a specific phobia need to learn that the object of their fear
is not as dangerous as they think. In the case of the boy, he needs to
distinguish dogs that are not vicious from the one that was. One effective
way to eliminate the fear is to give him opportunities to interact with
dogs and learn that nothing bad will happen. This process is called
extinction because it essentially extinguishes the conditioning between the
stimulus—in this case, the dog—and the aversive outcome, such as pain
from a dog bite.

● The problem is that you can't just force him to interact with dogs. While
this might turn out OK, much more likely it will cause the person to
reengage the sympathetic nervous system, inducing a panic attack, which
is a very aversive experience. So even if the dog doesn't bite, he has still
learned that the fear he holds for dogs is justified. After all, something
bad—a panic attack—happened when he was exposed to the stimulus.

● But in the safety of a psychologist's office, extinction therapy involves a
series of incremental steps toward an encounter with the feared object,
ensuring that nervousness does not blow up into a panic attack. You might
start with photos of a dog, then move on to videos, and only once the
patient has successfully navigated these preliminary encounters do you
introduce an actual dog. And even when you do, you keep the dog on a
leash and have the patient walk toward the dog, rather than the dog toward
the patient.

● But some fears are harder to bring into a therapist’s office, such as the fear
of flying or public speaking. That’s where VR comes in. What’s more,
some people with phobias have a hard time imagining interactions with
the objects of their fear and are reluctant to engage in real versions. VR can
overcome both of these obstacles.

● Since the 1990s, psychologists have been experimenting with VR programs
during extinction therapy, and they have been quite successful, despite
the fact that for the first 15 years at least, VR technology left much to
be desired.

● In situations where a patient’s anxiety begins to spiral, it’s easier to turn off
a VR program than to extract the person from a live situation, reducing
the risks of exposure therapy. This is an important benefit when exposure
therapy is applied to an arguably more extreme or debilitating anxiety
disorder: post-traumatic stress disorder (PTSD).

● Exposure therapy in PTSD is controversial because while it can be effective
for many patients, some don't seem to benefit. But what's worse is that
by some reports, up to 10% of patients actually experience a worsening
of symptoms, and it’s not clear which category a given patient will belong
to a priori. It’s possible that the worsening is part of the process and that
patients who terminate the treatment early are the ones who fail to benefit,
but it’s also possible that the treatment protocols just aren’t working for a
significant proportion of people with PTSD.

● But prolonged exposure therapy remains one of the most effective and
empirically validated treatments for PTSD. Yet a 2004 survey found
that only about 17% of clinicians use it to treat their patients. Barriers to
treatment include patients’ aversion to in vivo exposure and their inability
to effectively use their imagination to relive the trauma. VR exposure-
based therapy (VR-EBT) can again overcome these barriers, and 76% of
PTSD patients reportedly are more accepting of it than of traditional EBT.
And VR-EBT has been shown to be as effective as in vivo therapy.

● Given the subjectivity of traumatic experiences, PTSD therapy is most
effective when it's carefully adjusted to the individual patient. And VR is a
tool that can be personalized fairly easily.

● Another application of VR in medicine involves another subjective
experience: pain. In fact, one of the first reported uses of VR for
therapeutic purposes was to help patients who have to undergo painful
medical procedures, such as daily wound care for severe burns.

● For some burn patients, these daily treatments are even more painful than
the original burn, as the nurses have to clean the wound, remove dead
tissue, and stretch the skin. Even the strongest pain medications, such as
opioids, aren’t enough to numb the pain.

● Pain is a psychological phenomenon as much as a physical one. There's a
cottage industry of hypnosis methods to help women endure childbirth.
What makes VR particularly promising in the alleviation of pain is its highly
immersive properties; it can capture your attention and distract you from
processing the pain more effectively than almost any other intervention.

Mental Health Apps

● Just like general health apps, those for mental health are a growing set of
tools designed to enhance more traditional forms of therapy. One of the
major problems in the mental illness field is the fact that a large proportion
of people with mental health issues don’t seek treatment. Stigma, cost,
time, and accessibility are all contributing factors. But each of these
barriers can be alleviated or eliminated by apps at our fingertips.


● A person with suicidal ideation can text a crisis center in the middle of the
night without disturbing anyone else in the household. Patients can enjoy
total anonymity, though privacy issues should be considered.* Patients who
are reluctant to seek help can take the first microsteps toward treatment
using an app. The cost is much lower, and mental health providers can
reach a larger population of patients. Apps hold tremendous promise,
which is why there are already thousands of them, with new ones being
created all the time.

● Technology can also help individuals who have trouble communicating
in more traditional ways, such as people on the autism spectrum or who
experience social anxiety. There are apps specifically designed for different
populations of people, such as nonverbal children with autism, who make
up 20% to 30% of kids on the spectrum.

Questions to Consider

1 Can a virtual therapist be effective?

2 What are some ways that virtual reality might be used to enhance
therapy?

3 Can robots replace therapists?

* Most apps aren’t governed by the FDA yet and therefore aren’t subject to the
Health Insurance Portability and Accountability Act, better known as HIPAA.

Lesson 12 How Big Data Can Predict the Future


In the last decade or so, science has been embroiled
in a scandalous crisis: the replication crisis, in which
the results of a shocking number of experiments have
failed to be reproduced in other studies. One of the very
basic tenets of the scientific method is that observations
deemed true should be replicable. Yet some of the
foundations upon which whole scientific disciplines
have been built are threatened as their results have been
found not to be as robust as once thought.

The Growth of Sample Sizes

● Part of the replication crisis is a normal and healthy part of science: As
tools become more refined and methods become more rigorous, we are
able to tease out spurious results from cause and effect, and finer-grain
analyses can uncover additional factors and influences. And things change
over time: Generational effects can explain some of the failures to replicate,
particularly in the social sciences.

● But a big issue is also the fact that before we had the ability to collect and
analyze massive data sets, we had to rely on samples of a population, such
as 10 to 20 mice or 15 human participants. Sampling can provide a decent
proxy for the population provided that it’s random—so that any individual
factors wash out across the group—and that it’s large enough to uncover
the effect in question.

● Of course, the smaller the effect that you’re searching for, the larger the
sample size must be. But even large effects, when limited to 15 human
brains in a neuroimaging scanner, can be caused by factors that aren’t
obvious in the initial study design and that wash out when the sample size
is orders of magnitude larger.

● When you're trying to figure out how large of a sample you need to find a
given effect, you must consider the estimated effect size and the diversity
of your population—and then how generalizable your results will be to the
larger population of interest.
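A standard back-of-the-envelope calculation makes the trade-off concrete:
the required sample grows with the square of 1 over the effect size. The
Python sketch below uses the common two-group approximation with
conventional settings (5% significance, 80% power); it is a planning
heuristic, not a substitute for a full power analysis.

from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    # Approximate participants needed per group to detect a standardized
    # effect (Cohen's d) in a simple two-group comparison.
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # significance criterion
    z_beta = z.inv_cdf(power)           # desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

for d in (0.8, 0.5, 0.2):  # large, medium, and small effects
    print(f"d = {d}: about {n_per_group(d):.0f} per group")
# Halving the effect size roughly quadruples the required sample.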

● Many psychological studies have been conducted on university
undergraduates because they represent a willing subject pool. But they are
not representative of the larger population in many ways, including race,
socioeconomic status, and age.

● For some scientific questions, such as how the visual system works,
diversity in terms of education and income is less important. But for
others, such as whether self-control is a limited resource, these variables
matter a lot. So when choosing your sample, you need to be mindful of
what you’re looking for and what factors to control for so that your sample
is representative and random.

● But what happens when the sample size gets so large that it includes almost
the entire population? That’s what technological advances are currently giving
science and other domains that rely on data collection. It’s called big data.*

The Power of Big Data Analysis

● The ability to collect and analyze large data sets is changing how science
is being conducted and what it takes to be a successful scientist. Even in
fields like biology, the emphasis is shifting away from manual skills like
pipetting and preparing slides for the microscope and toward complex
statistics and coding algorithms.

● Whereas a scientist used to be able to build a career by breeding a
transgenic mouse, nowadays gene editing is becoming so easy that there
are DIY CRISPR kits for sale online that are geared toward high schoolers.
Advances like this are, of course, great for science—and therefore
ultimately humanity—but they are changing how we think about
conducting experiments and interpreting the literature upon which entire
fields are built. The very way that we devise, test, and confirm or overturn
hypotheses is changing. What we consider strong evidence has shifted.

* The phrase big data doesn't show up in publications until around 1956
(though there's a curious blip in 1931), but it rises exponentially in popularity
around 1995.

● Big data allows scientists to let the data drive the process of hypothesis
generation. When you have information from the (almost) entire
population, you can just ask the data.

● Before big data, biologists interested in understanding the genetic
underpinnings of some trait or disease would spend a lot of time thinking
about the biochemical pathways involved, postulating a hypothesis of how
genes might influence steps along the pathway. Only then would they
devote the resources to sequencing the genes and testing their hypothesis.

● Today, you can sequence the entire genomes of 1000 people with the
disease or trait, find the common markers, and use an algorithm to
identify causes—resulting in a bunch of candidate genes to consider.
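In spirit, the data-driven step can be as simple as ranking markers by how
much more common they are in cases than in controls. The Python sketch
below plants one truly associated marker in simulated genomes and lets the
ranking recover it; every marker name, frequency, and sample size here is
invented.

import random

random.seed(42)
markers = [f"marker_{i}" for i in range(200)]

def genome(is_case):
    # Toy genome: each marker present or absent at a background rate;
    # "marker_7" is enriched among cases to simulate a real association.
    g = {m: random.random() < 0.3 for m in markers}
    if is_case:
        g["marker_7"] = random.random() < 0.8
    return g

cases = [genome(True) for _ in range(1000)]
controls = [genome(False) for _ in range(1000)]

def freq(cohort, m):
    return sum(g[m] for g in cohort) / len(cohort)

ranked = sorted(markers, reverse=True,
                key=lambda m: freq(cases, m) - freq(controls, m))
print(ranked[:3])  # the planted marker_7 floats to the top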

● Now the hard work is not in careful lab work but in mindful analysis of
multidimensional and massive data sets. When your algorithm spits out
500 candidate genes, how do you figure out which are important and
which are coincidental? You can’t just reason your way through; you need
to devise a mathematical model of how you’ll rank and order the results.

● The shift from hypothesis testing to data-driven hypothesis generation
is both exciting—as it opens up questions that were thought to be
unanswerable—and a bit frightening, as we don't have as long a history
of checking for errors and bias using this approach as we do using the
former method.

Data scientists are in great demand these days, as virtually
every scientific discipline recognizes the promise and peril of
big data analysis.

● Arguably, though, letting the data generate the hypotheses eliminates a
key pitfall in science: getting too attached to a pet theory or falling prey
to the confirmation bias, in which we interpret new evidence as support
for our preexisting theories.

● The downside is that we're generating results without understanding the
mechanisms. We might find that a set of genes is responsible for a type
of cancer without knowing how the set of genes causes it. And this shift
represents a fundamental way in which big data is changing how we
think—not just as scientists, but in terms of information in general.

● In 2020, less than 2% of all stored information globally is nondigital; this
is a massive change from the year 2000, when 75% of it was in material
form. And digitization turns all kinds of information into data.

● But big data's promise is not just about size; it's also about the ability to
quantify previously unquantifiable things, such as which routes you take
to work each day and how much traffic you have to fight to get there. Your
ability to predict traffic patterns at different times of day used to be built
on your memory for all your previous commutes or the current conditions
that you heard about on your local news station. But now mapping apps
can crunch millions of data points from many different sources, giving you
real-time updates and predictions regarding your expected travel time.

● As small sample sizes become large populations, data sets don’t need to
be cleaned up for missing data points or outliers, as used to be the case
in scientific studies. Messy data is OK now because the samples are so
massive. You can keep that one participant who wasn’t really paying
attention to your task because his or her data represent a minuscule
proportion of your overall set.

● Big data analysis is allowing us to solve problems like how to teach a driverless
car to navigate city streets. In fact, we technically don't need to teach the car; it
teaches itself by collecting and crunching massive amounts of data. That's why
we see them driving, seemingly aimlessly, around our cities.

● The previous model would have been to have engineers working on the
problem for years in a computer simulation or a parking lot. It was a
very difficult problem and progress was very slow. But then crunching
large data sets became much easier, and now the cars are gaining
real-world experience to teach themselves.

Prediction Models

● As a result of big data analysis, correlational studies will become much
more prevalent than causal ones. We will use big data to predict rather
than to explain. This approach to information might be less satisfactory,
but it’s more accurate.

● Traditionally, we’ve thought of our brains as chasing certainty and being uncomfortable with and unable to fully grasp probabilities. We often choose the answer that seems right instead of checking our work. Our approach to probabilities is an example of how our thinking process leads us astray.

● We tend to think of probability changes in terms of trends. For example, things that have just increased will continue to do so, such as stocks or housing prices. But we should think about them as likelihoods. If you’ve just flipped a coin 10 times and each time it has come up heads, there’s still only a 50% chance that the next flip will yield heads.
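
A quick simulation, a sketch of my own rather than anything from the lecture, makes the point concrete: among simulated sequences that happen to begin with 10 straight heads, the following flip still comes up heads only about half the time.

```python
import random

random.seed(1)
after_ten_heads = []
while len(after_ten_heads) < 1000:
    run = [random.random() < 0.5 for _ in range(10)]
    if all(run):  # keep only the rare runs of 10 straight heads
        after_ten_heads.append(random.random() < 0.5)

# The streak confers no memory: this prints roughly 0.5.
print(sum(after_ten_heads) / len(after_ten_heads))
```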

● Part of the reason we make this error is that we attribute the trend to some
cause. Study after study in behavioral economics has shown that we’re
primed to infer causation when 2 things co-occur. If you’re thinking about
your aunt and suddenly the phone rings and it’s her, you can’t help but
wonder if there’s some force in the world that led to this turn of events—
that somehow your thinking about her caused her to call you.

● Part of the reason that health-care professionals record your vital signs at
virtually every opportunity is so that they can spot any changes that might
signal a negative outcome. They care less about the immediate cause of a racing
heart rate or higher blood pressure and more about what those signals portend.

● A vivid example of how big data is changing our approach to health is the role that Google played in predicting how bad the flu would be in a given season. In 2008, the company launched a project designed to use Google search information to track the influenza virus.

● They called it Google Flu Trends, and they showed that search data could provide accurate estimates of the prevalence of the flu in certain regions up to 2 weeks before the Centers for Disease Control and Prevention could with its more traditional tracking tools. The results were published in the prestigious journal Nature.

● But in 2013, Google Flu Trends overestimated the prevalence of flu at its
peak by 140%. And this epic failure of big data made the headlines. The
project was quietly shut down shortly thereafter.

● One of the problems that this failed project highlights is that often
the people writing the algorithms to analyze big data don’t have the
background knowledge that experts in that domain do. In the case of
Google Flu Trends, clinicians argue that Google mistook searches for
symptoms of other ailments for those of the flu. And most people make
the same mistake: The vast majority of doctors’ visits for flu-like symptoms
are actually caused by some other virus. That’s why Google Flu Trends was
chronically overestimating the prevalence of the flu.
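
A toy calculation shows how that base-rate error inflates an estimate; the numbers here are hypothetical, not Google’s. If only a small fraction of flu-like searches reflect actual influenza, then counting every search as a flu case overshoots badly.

```python
# Hypothetical numbers for illustration only.
symptom_searches = 1_000_000  # searches for "cough," "fever," and the like
true_flu_fraction = 0.10      # share of those searchers who actually have flu

naive_estimate = symptom_searches                  # treats every search as flu
actual_cases = symptom_searches * true_flu_fraction

print(naive_estimate / actual_cases)  # 10.0 -- a tenfold overestimate
```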

● This is a problem of inferring causation from a correlation. When people search for terms like runny nose or cough, are they doing so because they feel ill or because the flu has been covered by their local news?


● The failure of Google Flu Trends represents a pitfall and temptation of big
data: jumping from finding to finding without processing the implications
of the findings or even considering alternative causes. It can encourage
shallow thinking. And governments are in danger of making these same
mistakes; using big data to predict crime, for example, can increase bias
and alienate communities unfairly.

● This is where the prediction model falls short: Predictions based on historical data don’t have to come to fruition the way a cause would inevitably lead to an effect. And it’s harder for our human brains to consider the limitations of predictions based on probabilities than it is to understand cause and effect. But probabilities are a more accurate view of how the world works.

● The question facing us now is how to turn big data into knowledge—
shifting from a view of knowledge as an understanding of the past to a
prediction about the future, which may not include a satisfying explanation
of why the predicted event might occur. Many of the statistical tools we’ve
relied on so heavily, such as p values and control groups, might be traded in
favor of data analytics, such as discovering trends in large data sets.

● And while we might not be very good at considering the limits of correlations, favoring instead an attribution of cause and effect, our memory systems are actually much better at predicting the future than they are at accurately representing the past.

Questions to Consider

1 What is big data, and how is it used?

2 What are the consequences of outsourcing science to tech companies that own the massive databases?

3 How does the availability of population-sized data affect what questions we should be asking and how we might interpret the results?

Lesson 13: Is Privacy Dead in the Information Age?

In the information age we find ourselves in today, we regularly disclose deeply personal information on social media platforms and sell our rights to privacy for free Wi-Fi. We don’t often think about what we are giving up for free access to social media, and if we do, we give it up willingly. Put that way, it seems as though we’re selling ourselves too cheaply. But think about what we gain: the ability to track, connect, and maintain relationships with people who are important to us and who are often far away.

The Privacy Paradox

● Our increasingly lax attitude toward our own privacy is known as the
privacy paradox. Doomsday prophets point to the paradox as the harbinger
of the ultimate demise of our private lives.

● As a case in point, 2018 turned out to be a terrible year for Facebook when it came to light that the company had indirectly allowed a third party to sell personal information from up to 87 million users to the British political consulting firm Cambridge Analytica. In 2019, Facebook incurred the largest fine to date from the US Federal Trade Commission, about $5 billion, as a result of the breach.

In 2019, Facebook had more than 2 billion monthly active users across its different platforms, which include Facebook itself but also Instagram, WhatsApp, and Messenger. The amount of data that we now publicly share—and that tech giants like Facebook access and monetize—is staggering.

● Yet through 2018 and into 2019, Facebook kept up its rapid growth, adding users at an even faster pace than before the breach came to light. Despite the knowledge that their personal information might have been misused, users continued to upload it to the site. What’s more, while we all know that we can adjust the privacy settings on our apps, phones, and other devices, many of us just don’t bother.

● Does the fact that we’re willing to trade privacy for what social media has
to offer mean that we no longer care about our privacy? How important
can privacy be if we won’t spend 2 minutes protecting it by adjusting the
factory settings on our devices?

● A 2019 study conducted by Dan Svirsky at Harvard Law School suggests that the answer is complicated and offers an interesting window into how our minds work.

● Imagine yourself in a café outside of your country of origin. You need a public internet connection to check your emails because you don’t want to pay for an international data plan, so you find a spot that has Wi-Fi. As you’re trying to get online, a box pops up giving you 2 choices: access the café’s free Wi-Fi with your Facebook login or pay $2 for 30 minutes of connectivity. Which do you choose?

● Most of us, it turns out, readily log in via social media or hand over our
email address when asked.

● But what if the box gave you these options: allow them to access your
personal information and give you $1 for your trouble, or let you browse
the web in complete privacy? In this case, most of us would probably
choose privacy. After all, in that direct setup, we recognize that our privacy
is more valuable than a measly dollar.

● But that isn’t rational!

● To explore this conundrum, Svirsky set up the following experiment. First, he asked participants to fill out a survey about their health and financial status. They could do so anonymously, or they could log in first with their Facebook profile and get paid a bonus. They were told explicitly that logging in via Facebook was the “low-privacy” condition and that they could opt for a “high-privacy” condition if desired.

● Of course, most participants chose anonymity when the payoff was a paltry $0.50. But even when the payoff was raised to $5—which for many participants was nearly as much as an hour of work at minimum wage in their states, and many of the participants had low incomes—some 40% of them still refused to sell access to their Facebook profiles.

● What happened when the request for access to personal information was only slightly veiled? In the veiled condition, participants were given the option of filling out the survey anonymously or after logging in via Facebook, but the privacy settings—that is, details about what information was collected from their Facebook accounts—were only visible if they voluntarily clicked an additional button.

● Under these conditions, most people chose to be paid for the survey but did not click the button to reveal whether or not they were selling their privacy. They preferred to hide. This behavior persisted well after the Cambridge Analytica scandal broke and tested people’s trust in Facebook’s handling of their information. Interestingly, though, immediately after the scandal was made public, people were more likely to click to reveal the privacy settings. But 45 days later, they reverted to ignoring the extra button.

● In the case of the café offering Wi-Fi access, your choice to sell your
privacy is veiled. Though we don’t generally think of logging in with
Facebook as giving away our private information, that’s exactly what
we’re doing.

● So what’s going on? Svirsky suggests that we’re avoiding details about the
trade-offs we make between convenience and privacy. If that’s true, then
simply providing people with more details isn’t going to change their
behavior in a lasting way.

Institutional Trust

● Over the past few decades, the information that companies collect has expanded enormously to include all kinds of personal data. In the 2012 Target pregnancy scandal, the American retailer began predicting whether shoppers had recently become pregnant and offering specials on baby items. Enter a teenage girl’s angry father, who admonished the company for normalizing teen pregnancy by sending pregnancy coupons to his daughter. But he soon found himself apologizing, as the company had accurately predicted that she was indeed pregnant.

● This overt revelation of the kinds of algorithms the company uses on personal information crosses the line for most people. And Target knows this. So now they toss in a few irrelevant coupons to make it less obvious what they are tracking. And we readily accept their thinly veiled attempts to keep us feeling as though we remain anonymous.

● People haven’t stopped signing up for Target RedCards; in exchange for a 5% discount, we give the company unfettered access to our data. But we’re also not expecting companies like Target to use their access to our data to cause us harm. We trust them.

● One of the reasons we keep things private is that we don’t trust the world with that information. And trust is certainly changing as we fill our lives with technology.

● One of the great leaps forward that human society has made was to
develop institutions that foster trust. In her book Who Can You Trust?,
Rachel Botsman describes the evolution of trust as starting from the point
at which we trust only those closest to us, whom we have personally vetted,
to the current age, in which trust is placed in many different institutions.
And she points out that trust in contracts, courts, and corporations created
the foundation for an organized industrial society.

● This trust revolution was built within the social structure of institutions
whose laws and practices are laid out and then agreed upon by their
members. And trust is key: When we belong to or operate within an
institution, we trust that these laws will be adhered to. And we punish
those who break them.


● But what happens when institutions break our trust and are not punished?
Trust is eroded much more easily than it is built, and once broken, it can
be very hard to reestablish because it’s based on a belief that has been
proven false.

According to a 2017 Gallup poll of Americans who came of age at the turn of the 21st century, 75% distrust the government, 86% distrust financial institutions, and 88% sometimes or never trust the media. But that same generation will hop into a stranger’s car, sleep on a stranger’s couch, and upload personal photos to the internet through apps like Uber, Airbnb, and Facebook.

● While institutional trust is eroding, we’re moving back to a model where trust is established from peer to peer, but this time, peers can be far away and unknown. Botsman calls it distributed trust.

Distributed Trust

● Just as we receive news from multiple sources, often from people we know
personally or from people who are experiencing the newsworthy event
in the moment, we now rely on multiple individual experiences to tell us
whom to trust.

● Many applications that enable people to connect with and share resources with strangers, such as Airbnb and Uber, are changing the very way we evaluate trustworthiness. What we’re learning is that given the right motivation, strangers aren’t likely to betray our trust.

● The internet was invented to help us connect, and the value of a person’s
connections can be broadly defined as social capital. While our obsession
with devices and online communication can come at the cost of in-person
interactions, the internet also affords many more ways of earning social
capital. A person’s reputation remains a valuable commodity, as it can give
him or her access to all of the conveniences that a digital world offers.

● When you book a home on Airbnb, for example, you’re well aware that
a company is learning a lot of information about you and your personal
preferences. You sacrifice that privacy for the convenience of living in a
home away from home, complete with a kitchen and other amenities, while
on a trip. And because you have a sense that somebody is watching—that
your behavior will be reviewed—you behave better overall than you might
in a hotel room or other accommodation in which the provider doesn’t
have the ability to give you a bad rating.

● The idea that people who log in to other sites via Facebook don’t care
about their privacy is patently untrue, as Dan Svirsky’s study and others
have shown. In fact, what they are exchanging their privacy for can
sometimes be a path toward demonstrating their trustworthiness.

● Trust is something that we feel intuitively; we can actually categorize novel faces as trustworthy or not. It’s often not a conscious, rational decision. And it’s not accurate, for the most part. Faces that are more attractive, more likely to look as though they are smiling, or look more like baby faces are rated as belonging to more trustworthy individuals, even though there’s no reason why these features would actually correlate with integrity or honesty.

● Technology is replacing some of these intuitive markers with actual data: reviews of past behaviors, measures of social capital, and so on. Trust can now be based on collective experience, which can be a more accurate filter than simple intuition. But for a person to demonstrate his or her trustworthiness in this new distributed trust model, he or she must make public some personal information that previously was considered private.

Privacy as a Luxury Good

● Social capital has value and can be bought. This means that people who
have more money can increase their social capital by bypassing traditional
channels. You can buy followers on social media platforms, pay people to
leave reviews for your products on Amazon or your restaurant on Yelp,
and so on. And by the same token, you can pay more to ensure that your
privacy remains intact.


● As Ramesh Srinivasan suggests in his book Beyond the Valley, we’re moving
into an era in which we’re not just buying technological innovations; we’re
entering into agreements with companies that provide services in exchange
for our personal data.

● Kate Crawford, a leading researcher into how artificial intelligence affects society, already considers privacy a luxury good in our digitally connected society.

● What’s changing rapidly is this: Where we once treated privacy as resting on the assumption that no one was looking, we now live in a world in which we’re beginning to assume that someone is always watching. Instead of opting to share our personal information, we will need to opt out, and that will come at a cost. As a result, privacy will be something we need to pay for, one way or another.
● Crawford is quoted in The Atlantic as predicting that the next 10 years will see the development of more encryption technologies and boutique services allowing people to pay a premium for greater control over their data. She points out that this state of affairs will establish a new divide between the privacy-rich and the privacy-poor.

● Instead of us dictating how and what we share with others, the connected
digital world will establish privacy norms. Once we take for granted that
our lives are no longer private, how will that change the very essence of
who we think we are?

Questions to Consider

1 Do digital natives and digital immigrants view privacy differently?


2 How has what we post online changed with the awareness of privacy
issues?

3 Will privacy become a luxury available only to those who are willing
to pay for it?

Lesson 14: The Emotional Effects of Social Media
Plenty of research has shown the negative effects of social media, such as inducing sadness and loneliness in users. But if the internet makes us sad and lonely, then why are we so interested in it? The runaway success of social media suggests that there must be net positive benefits for users.

Sadness and Jealousy

● The question of whether social media makes us sad is hard to answer because both the technology and how we use it change rapidly. But some studies have tried to address this issue.

● In 2013, Ethan Kross and his colleagues at the University of Michigan published a study in which they text-messaged their participants 5 times a day over the course of 2 weeks to ask them to fill out an online survey that assessed how they were feeling at that moment. Participants were also asked about their use of the social media site Facebook and whether they had had any direct interactions with other people recently. Their overall well-being was assessed both before and after the 2-week period.

● The authors sought to answer 2 questions: Does Facebook make you feel
worse in the moments after use? And does it affect life satisfaction in the
longer term? These authors found that the more people reported using
Facebook, the worse they felt. But the association didn’t go in reverse—
that is, people weren’t using Facebook only when they were feeling sad.

● Over the course of the 2 weeks, the more they used the site, the more their
overall well-being declined. Direct interactions with people showed the
opposite effect: These interactions were more likely to improve the person’s
mood, and the more he or she socialized with people in real life, the better
off he or she was at the end of the 2-week period.

● You might think, then, that lonely people are more likely to turn to Facebook—and that was found to be true. But when the authors controlled for loneliness, Facebook use continued to predict declines in well-being and mood. And a review of studies has shown that users of Facebook are not generally lonely people to begin with.

● In addition to making people feel lonely, Facebook use can also cause a rise in feelings of jealousy and envy as users compare their own lives with the selectively rosy ones of their friends. Envy is more often engendered when users interact with social media passively—that is, when they just read other people’s posts and look at their pictures without commenting or posting themselves.

Companies like Facebook capitalize on the human propensity to make social comparisons.

The Desire to Belong to a Group

● Social media platforms tap into another basic human desire: to belong
to a group. Experimental psychologist Robin Dunbar has put forth the
social brain hypothesis, which suggests that modern human brains were
shaped by evolution specifically to enable us to get along with others and
live in larger social groups. During the great expansion of brain size among
hominids around 1.5 million years ago, our ancestors also began living in
larger and larger groups, and those who were better equipped to navigate
social interactions were more likely to produce offspring that survived.

● Looking at social group size and neocortex volume, Dunbar noted that the
more neocortex a primate species has, the larger its social community is.
Extrapolating from this relationship, Dunbar has calculated ideal social
group sizes for different primate species, including humans.

● For humans, Dunbar’s number, as it’s commonly called, is 150,* which he suggests is about the maximum number of other humans with whom most people can maintain a stable and meaningful relationship.

● Dunbar has also suggested that our social networks are hierarchically
structured in layers. We have about 5 people with whom we are
exceptionally close: our spouses and our kids or perhaps a sibling and a
best friend. He calls this the support clique.

● Then there are about 15 people that we could count on in an emergency, with whom we’d feel comfortable dropping off our kids or our pets. This is the sympathy group.

● The next layer up is that magical number of 150—people we might not see every day but who we’d hope would come to our funeral or whom we’d invite to our wedding.

* Dunbar has found evidence for the accuracy of this number in a variety of
sources, including the estimated size of Neolithic farming villages, the basic
unit of an army in Roman times, and the number of employees in a company
building. In fact, one study found that companies with more than 150 employees
in a single location tend to have communication problems that slow down
productivity.


● Finally, the outer layers of 500 or 1500 match up with the number of
acquaintances that we’d recognize enough to say hello to at the airport or
the number of faces we can name, including celebrities and people we don’t
know personally.
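
Taken together, the layers form a nested hierarchy, with each layer containing the ones inside it (the 15 includes the 5, and so on). As a toy sketch of my own, not Dunbar’s method, assuming we could rank our contacts by closeness, slotting them into the cumulative layer sizes might look like this:

```python
# Cumulative Dunbar-style layer sizes, innermost first.
LAYERS = [
    ("support clique", 5),
    ("sympathy group", 15),
    ("stable relationships", 150),
    ("recognizable acquaintances", 500),
    ("nameable faces", 1500),
]

def dunbar_layer(rank):
    """rank: a contact's closeness rank (1 = closest).
    Returns the innermost layer that contact falls into."""
    for name, size in LAYERS:
        if rank <= size:
            return name
    return "beyond our cognitive limits"

print(dunbar_layer(3))     # support clique
print(dunbar_layer(120))   # stable relationships
print(dunbar_layer(5000))  # beyond our cognitive limits
```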

● Dunbar has argued that we have a basic need to keep track of our social
network and maintain social bonds, which he calls social grooming. Much
like physical grooming in nonhuman primates, checking in with friends,
liking their status updates, and generally staying in touch helps promote
the stability of the social group. Facebook and other social media platforms
tap into this desire and make grooming our network feel simpler.

● Since Facebook and other social media platforms allow us to quantify social circles, Dunbar has used such platforms to test his hypothesis. In 2010, he published a study showing that the typical network sizes as measured by social media platforms like Facebook do indeed correspond to the predictions made by the social brain hypothesis. The platforms also do not, as one might expect, seem to expand our social networks in a significant way.

● Dunbar argues that the same constraints that limit our social network
size in real life also apply to our online lives: the cognitive constraint of
maintaining and remembering the relationships and the time constraint of
interacting with them. Understanding these constraints also gives us hints
as to what kinds of social media use might benefit us versus what might
harm us.

● Several studies of Facebook use have shown that active interaction with a small number of users—posting on walls and commenting on posts—can lower feelings of loneliness and promote well-being, while passively reading feeds and viewing content has the opposite effect. Strong ties can be maintained with active interaction, and that can increase your social capital. But active interaction requires time and effort. Passive scrolling can make you feel less satisfied, both because it invites social comparisons and because it wastes time you could be spending developing meaningful connections with others.

Benefits of Social Media Use

● Despite all the negative effects of social media that have been found, there
are studies showing that social media use can make you happier, as in
the work concerning active interactions. Studies have shown that using
platforms like Facebook can make us more trusting and even encourage
political participation.

● And sharing is psychologically rewarding—it’s pleasurable. One consequence of social media is that it is changing how we process experiences: We think about how we’ll share them with our social network, whether through photos or posts, instead of just experiencing them in the moment. And it turns out that just thinking about how we’ll be able to share the experiences changes how we mentalize them, including activating our reward system even before we’ve uploaded the post.*

● One of the ways in which social media platforms are changing social circles is by combining different social spheres and blurring the lines between categories of relationships, such as the line between friends and business colleagues. Many Facebook users, for example, have friend networks that encompass work colleagues in addition to friends from other spheres.

● And while there are many instances in which individuals have lost jobs
after posting inappropriate content, there’s also evidence that users are
learning to implement strategies that minimize this type of friction, such
as choosing more specific privacy settings, messaging directly rather than
posting on walls, and self-censoring content.

● Social media is also giving companies opportunities to interact more personally with their customers. Setting aside the data that they can collect from their customers, there’s also evidence that having a social media presence helps a brand strengthen its relationship with its customers by allowing them to associate the brand with human characteristics. This anthropomorphism is a powerful way by which brands can engender loyalty and engagement.

* This finding comes from a study out of Matthew Lieberman’s lab at UCLA,
published in Psychological Science in 2013.


● In one way at least, social media’s success might also be its downside:
Because of the ease with which we can now access our social circles and the
proliferation of ways by which we can connect, our attention is constantly
being pulled in many different directions at once. That makes it difficult
to interact actively, and in fact, studies of active versus passive social media
engagement show that while the increases in social capital and well-being
are only possible through active interaction, most of us spend more time
passively scrolling.

● Indeed, teenagers who use 7 or more social media platforms are more likely
to report feeling anxiety and depression than those who are active on 2
or fewer. With 7 different platforms, it’s much more likely that these kids
spend most of their time scrolling rather than interacting. And there are
implications for how they learn to handle or avoid boredom.

The Emotional Trap of Boredom

● Access to smartphones and all of the distractions they offer has eliminated
many of the situations in which we learn to overcome boredom. People in
waiting rooms, commuting on the train or bus, or even waiting for food
at restaurants now have the option of checking email, playing games,
watching videos, listening to podcasts, or interacting with social media
instead of reading a magazine or simply being alone with their thoughts.

● John Eastwood, who runs York University’s Boredom Lab, defines boredom
as an unfulfilled desire for satisfying activity. We’re often bored in situations
in which we feel as though we’re not in control of our attention; we feel
disconnected from our inner thoughts or the external world.

● The problem with social media is that while it can seem to stave off
boredom in the very short term—for example, while you’re waiting in line
at the DMV—it can make you more susceptible to boredom in the long
term. That’s because you become disconnected from your inner mental
life. Your thinking is shallower as you skim over material like images
and short posts, and rapidly switching between nuggets of information
becomes a mental habit. As a result, it feels less natural to delve deeply and
lose yourself in your thoughts.

● Allowing yourself to get bored can actually be really helpful, especially
if you’re a teenager learning to navigate the new ways of thinking you’ve
developed now that your prefrontal cortex is becoming more efficient
through the process of myelination: the stage during which connections
between neurons get wrapped up in an insulating fatty sheath, increasing
the speed and efficiency with which messages are exchanged. But if you
never have the opportunity to do this kind of mental work because you’re
using social media to fill up every spare minute, then your ability to
tolerate boredom and to benefit from it diminishes.

● Boredom can also serve as a motivator. Since it’s an aversive state, we work to avoid it, and that might lead us to make positive changes in our lives. We might then find the courage to pursue more meaningful and engaging goals and projects.

● Because boredom feels like an emotional trap, we try to find ways to escape
from it. If pursuing meaningful goals and developing strong and lasting
personal relationships is the equivalent of a healthy mental and emotional
diet, we need to limit the number of snacks that we consume before meals.
And if our escape from boredom too often involves simple snacking on
social media, we’ll never be fully satisfied.

● Social media gives us opportunities to overcome geographical and even temporal obstacles to developing sustaining connections with others. But it can also fill us up without providing the nutrients we need to thrive.

Questions to Consider

1 How has our use of social media evolved in response to our feelings
when we use it?

2 Why might checking social media make us feel sad?

3 What are ways in which we can minimize the effects of social media
on our emotions and well-being?

Lesson 15: How Online Dating Transforms Relationships
Over the past few decades, how Westerners meet each other and end up in relationships has changed dramatically. According to a study published in 2017, the percentage of married couples meeting online has gone from essentially 0% to 25% if you’re heterosexual and nearly 70% if you’re in a same-sex partnership. Meeting through friends is still the most popular way to find your lifelong mate if you’re heterosexual, but just barely, accounting for almost 30% of couplings. If you’re homosexual, you’re now slightly more likely to meet your partner at a bar or restaurant than from among your friends.

Online Dating Services

● In the 1990s, a number of computer-based dating programs failed, largely because they did not have the computing power to sift through enough profiles and find appropriate matches. But once computing power became affordable and many more people were able to use the internet, online dating services began to flourish.

● Today, there are so many dating services that it’s helpful to group them
into categories.

— Some are open to anyone, while others cater to a particular niche, such
as religious affiliation, social class, age, or hobby.

— Almost all of them have a proprietary matching algorithm, which can be more or less scientific.

— And they can be divided up into traditional websites and services that
are more app-based, including those that use your smartphone’s GPS
function to match you with people who are geographically close to you.


● Most services now use a combination of these 3 strategies, with some giving users the option to self-select while also providing algorithm-based recommendations.

● The wide proliferation of online dating services speaks to the fact that one
major obstacle to active partner searching has been largely removed: the
stigma associated with advertising yourself. Personal ads in newspapers and
other media remained on the fringes of the dating world because it was
considered a social taboo to seek help in finding a partner in such an overt
fashion. This stigma bled into the online dating scene initially but has
since been somewhat attenuated, at least among digital natives.

● With this change, the question before us is whether online dating services have simply given people more opportunities to meet or whether they are fundamentally changing how we evaluate potential partners and how we approach significant romantic relationships in general.

● Are there significant differences between people who find each other online and those who use more traditional means? Recent research on personality traits among these 2 groups suggests that there aren’t many differences between the populations anymore, though online daters might be slightly less likely to be religious and slightly more likely to reject traditional gender roles and be open to new experiences than those who eschew the practice.

Online Dating Profiles

● Once a person has decided to seek love online, one of the first steps is to create a profile and a list of traits that would be desirable in a partner. This step matters a great deal when we consider how dating services are shaping relationship building.

● What does profile accuracy tell us about ourselves? Not surprisingly, we tend to paint a rosy picture of our various traits. For example, in 2010, a study of more than 20,000 profiles found that the average reported height of online daters is an inch or so taller than the national average. Women aged 20 to 29 report being 5 pounds lighter on average, while those in the 50 to 60 age range slice off an average of 22 pounds.

● While women are more likely to lie about physical characteristics, men have been shown to be more likely to lie in their profiles in general—as a 2012 study by Rosanna Guadagno, Bradley Okdie, and Sara Kruse reported—with a tendency to appear to be more dominant, have more resources, and be kinder than they actually are.

● Catalina Toma and Jeffrey Hancock followed up this work with a linguistic analysis of dating profiles in 2012. They found that profiles containing more lies were also more succinct, in contrast to reports indicating that liars in general use more words in in-person conversations than truth-tellers. Liars also used fewer first-person pronouns (such as I and my) and fewer words related to negative emotions. The authors interpreted these results as suggesting that more deceptive daters were also more likely to distance themselves from their profiles.
● These findings also show something more fundamental that online dating services make clear: Despite the fact that the internet is designed to connect us with others, we often use it to create an idealized reflection of ourselves, and much of the time we spend online involves focusing on ourselves rather than others. We manage the information about ourselves that is visible to the public by curating uploaded photos, crafting realistic but rosy profiles, and tagging ourselves in posts or images or videos that we approve of and untagging those we don’t. We generally don’t create false identities but tweak and polish the ones we have.

● Teenagers, who represent some of the most active and prolific social
network users, explore and build their identities with the help of these
tools. And single people, or those looking for romantic or sexual partners,
can also benefit from the questionnaires they need to fill out and the
psychological work they need to do in order to present their best selves to
potential mates.

● Social media companies, then, play an active role in helping young people shape their identities. By the same token, dating services shape the decisions and ultimately the relationships that their users engage in.

● Profile creation and other social media habits affect how we think about
ourselves and how we craft our identity. There’s now growing evidence
that how we behave online can have implications for our personalities
offline. For example, because social media encourages us to focus on
ourselves, creating, tweaking, and refining our online identities can push
us toward behavior that builds our self-esteem. But it can also make us
more narcissistic.

Evaluation

● The difference between algorithm-based services like eHarmony and self-selective apps like Tinder shows how online dating services are changing how people approach relationship initiation. Instead of meeting face-to-face and exchanging information dynamically, daters can browse profiles at any time of day or night and without the other person being involved in the process.

Comparing Tinder users to those using other online dating apps or
none at all, the authors of a 2016 study, Karoline Gatter and Kathleen
Hodkinson, found that Tinder users were slightly younger and that
the men were more sexually permissive than the women, but there
were no differences in sociability or self-esteem between groups.

● This also means that you can glean a lot of information about someone at
one time, rather than slowly learning through conversation. And instead of
considering one or 2 potential partners at a given time, daters are offered
dozens, hundreds, or even thousands of options simultaneously.

● There are consequences to these different ways of meeting someone, with perhaps the most salient being the evaluation mode—what Eli Finkel and other psychologists call joint evaluation versus separate evaluation. In joint evaluation, you consider multiple options at the same time, a process that’s encouraged and enabled by online dating services. In separate evaluation, you judge your compatibility with one partner at a time.

● The problem is that once you’re in a relationship, you’re stuck with that person in isolation. And if you chose that person by comparing him or her to others, rather than by evaluating your compatibility with him or her, you might have made the wrong choice.

● Studies of human decision-making in domains outside of dating have shown that how a person weights attributes of a product differs if the person is in joint evaluation mode versus separate evaluation mode.

● When engaging in joint evaluation, especially when the items—or, in this case, people—you’re comparing are highly similar, you can’t help but focus on the differences. How else are you supposed to make the decision? And sometimes, these differences really are unimportant given how you’ll use the item. You might look at 2 profiles of similar dating partners and find that one went to a better college than the other. So maybe you choose the person who went to the better college over the other person. But does the prestige of the college a person graduated from really matter that much?


● This overrepresentation of differences in the joint evaluation mode is called the distinction bias. And it’s the reason why electronics stores place 20 similar TV screens all in a row on the wall. That placement helps us compare them and ultimately pushes us to spend more money because we become enamored of the one feature that the more expensive screen has that the others don’t. But more than likely, we would have been just as happy with any of the choices; instead, we’ve wasted both time (comparing similar models) and money.

● When we engage in side-by-side comparison, we make false assumptions, such as that our enjoyment of a particular feature will be proportional to how much it costs to have it. But it’s better to consider each item on its own and decide how much we like it and then compare our overall assessments of all items being considered.

● When it comes to our predictions of how much we’ll enjoy a product, a consumer research study by Christopher Hsee and France Leclerc from 1998 showed that items that are already attractive on their own are actually enjoyed less when compared simultaneously, while those that aren’t as attractive benefit from joint evaluation.

● You can see how this evaluation mode difference might apply to online
dating: If you’re faced with choosing between several attractive candidates,
joint evaluation diminishes your appreciation for any one of them. If,
by contrast, you’re going to have to consider only relatively unattractive
options, then comparing them will make you feel better about your choice,
as you’ll focus on the positive aspects of what makes your particular choice
superior to the others.

● The joint evaluation mode in dating, then, might push you to focus
on differences between dates that are most salient but perhaps not
most important in terms of long-term relationships. Attributes that are
easily compared via online dating sites are things like income, physical
attractiveness, and education. Attributes that are likely more important
but require face-to-face contact are sense of humor, conversational style,
kindness, and generosity.

Mindset

● Many online dating services market the size of their user base, touting the
millions of available singles a person will be able to choose from. But too
many options can lead to choice overload, exhausting us and leaving us less
happy with our ultimate choice.

● Online dating, then, turns finding love into a shopping expedition—encouraging us to compare features rather than rely on experiential information, such as whether you enjoyed the person’s company.

● Do we make better long-term decisions by listing pros and cons or by spending time with someone and extracting patterns of behavior that may not be consciously accessible? The answer to that question involves a consideration of the type of mindset that we find ourselves using.

● Perhaps the most worrisome psychological shift that online dating services
push us toward is a move from a locomotion mindset to a deliberative or
assessment one.

● A deliberative or assessment mindset, as Eli Finkel and colleagues define it in the journal Psychological Science, is one in which a person focuses on the critical evaluation of entities or goals in comparison to all of the available alternatives. It allows us to pursue the optimal choice among an array of options and to evaluate pros and cons with less bias. It also seems to lead to more accurate forecasts about the future of a romantic relationship.

● The problem is that relationships aren’t static; they are dynamic and
require effort to maintain, just as a garden does. To nurture and maintain
a satisfying long-term relationship, we need to have a locomotion mindset,
one that continues to move and change as time goes on, which emphasizes
the psychological resources we’ll need to attain our desired goals as
conditions shift. Online dating services make us feel as though we’ve put
all the work in up front and once we’ve made our choice, we just need to sit
back and enjoy the fruits of our labor. But love doesn’t work that way.

● And research has shown that people who have a strong assessment mindset coupled with a weak locomotion mindset tend to become overly critical of their partners as well as more pessimistic about the future of their relationship and their ability to attain desired goals. Over the long term, these relationships are less satisfying.

Technology in the form of online dating encourages us to approach the task of finding a partner as a sort of algorithm with a winning outcome rather than a garden that requires regular maintenance.

Online daters tend to approach individuals who are similar to them in terms of race, religion, political views, education, and so on, and this pushes our society ultimately toward less diverse communities. But these services also enable us to match with people outside of our immediate social circle,

Questions to Consider

1 How is online dating different from conventional dating practices?

2 Does online dating steer us toward instant gratification and away from long-term relationships?

3 Does online dating give us better matches than conventional dating? Is it safer for minority groups, such as the LGBTQ population?

Lesson 16: Technology and Addiction

We often hear talk about internet addiction, about how we’re compelled to check our phones every few minutes and that we’re heading down a path that will end with many of us collapsing, dry-eyed, in front of our computer screens—not that different from the heroin addict who overdoses in an alley. And there’s no doubt that addiction leaves observable traces in the brain, from the very structure of brain cells and brain regions to the pathways between them. Can technology abuse engender the same changes?

Sex and Drugs

● The American Society of Addiction Medicine defines addiction as a “primary, chronic disease of brain reward, motivation, memory and related circuitry.”

● For many years, addiction researchers and clinicians have debated whether the definition of addiction should include types of behavior rather than remain specific to psychoactive substances like heroin, alcohol, and cocaine that target our reward pathways. Psychoactive drugs that don’t directly affect the brain regions and neurotransmitters involved in reward are generally not considered addictive.

● One category of such drugs is hallucinogens, such as psilocybin and LSD. They’re not considered addictive because they don’t usually lead to cravings or compulsive drug-seeking behavior. They are not benign. They can be harmful for other reasons, such as by inducing a psychotic break or violent behavior, but they don’t alter the brain’s reward pathways the way that addictive drugs do.

● But what about drugs that work on dopamine? Isn’t that the reward
chemical in the brain?

● Dopamine plays a number of different roles in the brain, including
facilitating movement, some types of memory, and motivation and
pleasure. But its role in reward is not simple.

● Naturally rewarding behaviors like eating, sex, and social interactions cause the release of dopamine in parts of the reward pathway, such as the nucleus accumbens. But it’s not just pleasure that dopamine modulates; it’s also desire.

● Some neuroscientists differentiate between wanting and liking in terms of the neural basis of reward. And sometimes it’s hard to tell them apart. In a classic experiment in the 1950s, James Olds and Peter Milner implanted electrodes in the nucleus accumbens of rats and taught them to press a lever that would give a little electric jolt directly into this region.

● They then observed how the rats compulsively pushed the lever. And it
turns out that the rats would do so even in the presence of a receptive
female. They’d run over an electrified grid to get to the lever, even though
they would not do so for food if they were starving.

● But did the rats enjoy the stimulation? Or did it simply cause them to want
it? We can’t ask the rat, but we can ask human addicts, who often report
not enjoying the object of their addiction but still being highly motivated
to pursue it.

● Usually, the extent to which we want something is related to how much we like it when we get it. When you’re really hungry, you want food much more than if you’ve just eaten. And food tastes much better then, too. But wanting and liking aren’t fully linked in the brain: You can want something very intensely even though you don’t like it. Part of the reason this is possible has to do with the neurotransmitter systems involved.

● Dopamine controls wanting, but liking, or pleasure, also relies on the endogenous opioid system. In other words, when we want something, we see more dopamine in the motivation and reward circuitry, but when we’re experiencing pleasure, our opiate receptors are activated by opioids like endorphins produced in our brain.


● Sex and drugs like cocaine and methamphetamine share a lot of commonalities in terms of how they affect the brain. But sex is a very powerful motivator, and orgasm is considered by some as the most potent pleasurable experience. For most people, sexual arousal and orgasms induce higher levels of both dopamine and endogenous opioids than any other natural reward. So it shouldn’t be too surprising that highly addictive drugs like coke and meth produce effects that are more similar to what happens during sex than any other naturally rewarding experience.

Gene Expression and Reward Pathways

● Addictive drugs change the brain in a number of ways, but there are 2
salient mechanisms by which they can induce long-term effects: one is
through changes in gene expression, which make the individual seek out
the rewarding effects of the addictive substance, and the other is through
changes in the brain’s reward pathways, which make previously pleasurable
activities less satisfying and encourage riskier reward-seeking behaviors.

● There’s now a well-characterized gene transcription factor called ∆FosB that seems to play a role in many different forms of addiction, including behavior-based ones like gambling, and across many different addictive drugs, including amphetamines, cocaine, nicotine, opiates, and alcohol. A transcription factor is a protein that controls the rate at which a gene is expressed—that is, whether it is turned on or off.

● ∆FosB has been called a molecular switch for addiction because it accumulates in brain regions involved in addiction, called the nucleus accumbens and the dorsal striatum, after repeated administration of addictive drugs or high levels of consumption of naturally rewarding things like sweets or fats. We even see it overexpressed in behaviors that have become compulsive, like sex or running.

● What puzzled scientists for a long time was the fact that some people could
take addictive drugs like cocaine for a while, seemingly without becoming
addicted, but then at some point, prolonged drug use would seem to flip a
switch, leading to compulsive behaviors characteristic of addiction.

● ∆FosB gene expression may be the mechanism by which this happens.
When it’s turned on, it changes the anatomy of cells in critical regions of
the reward pathway, making the individual more sensitive to the addictive
drug or behavior, increasing that person’s desire for the specific stimulus.
This is called reward sensitization.

● When you repeatedly experience pleasure, you begin to induce long-term changes in your brain, with the help of the ∆FosB gene variant. When the protein that this variant codes for accumulates in the nucleus accumbens—the brain’s reward center—the neurons there begin to rewire such that you begin to crave the thing that led to the pleasurable experience, be it sex or drugs or something else. Essentially, these changes enable long-term neuroplasticity or memory formation.

● Both frequent sex and repeated use of cocaine and methamphetamines cause increases in ∆FosB, which is not true of many other experiences that are less intensely pleasurable.

● Addiction, after all, has 3 components: cravings, or a strong desire to obtain the object of the addiction; a loss of control over behavior related to the substance or activity; and negative consequences in work, home, or social life or in financial, physical, or psychological health.

● So, through the accumulation of ∆FosB, the wiring in the brain’s reward
pathway changes such that the person experiences strong cravings for
the object of their addiction: He or she becomes sensitized to cues that
anticipate the reward. But paradoxically, the drug or activity itself might
have become less pleasurable.

● Repeated use also raises tolerance, which means that more of the drug or
activity is needed to induce the same effect. But this seems contradictory:
How can you have both reward sensitization and tolerance? The answer is
that tolerance involves a different set of mechanisms, but it can be roughly
thought of as the brain’s attempt to return to normalcy.

● If you stop taking an addictive drug, then you also experience symptoms
of withdrawal—physical symptoms that are essentially the opposite of
the drug’s effects. So both tolerance and withdrawal can make a drug
addictive.


Internet Pornography

● With repeated exposure, drugs of addiction lead to long-term changes in the wiring of the reward system, tolerance for the drug of abuse, and withdrawal symptoms that can be both physical (relatively short-lasting) and psychological (in the form of triggered cravings that can be long-lasting and even permanent). Can behaviors that don’t alter neurochemistry so directly also have a similar effect?

● They’d have to be pretty powerful. But technology can turn a relatively benign experience into one that’s supercharged. And if that experience taps into one of the strongest motivational drives we have, generates an unforgettable high, and is easily repeatable, we might have a problem. Enter internet pornography.

● While dopamine affects motivation by making us want something that previously brought us pleasure, it’s not just a pleasure chemical—it’s also part of the process that makes us seek out a substance or experience.

● Dopamine levels track the anticipation of a reward. From an evolutionary standpoint, it makes a lot of sense. Yes, the feeling of pleasure is the reward that we seek. But seeking is more important than being rewarded. We need mechanisms that encourage us to keep working for the reward so that we can find the food, mates, and shelter that will ensure that our genes survive. That’s why novelty is rewarding.*

● In our ancestral environment, access to novelty was somewhat limited in comparison with what the internet provides for us today. To meet a new person, our ancestors probably had to travel pretty far, so it makes sense that we evolved neural circuitry that encouraged this wanderlust and search for novelty: Potential partners farther away would be more genetically diverse, and genetic diversity is natural selection’s palette.

* If you offer a male rat a female in heat, the rat will copulate with gusto for a
while but eventually will be less interested. Replace the old female with a new
one and the rat finds his mojo again. This is called the Coolidge effect, and it is
attributed to a gradual decline in dopamine levels with the same old thing and a
surge when there’s something new.

● But the internet has made it trivial to find novelty. Individuals who
feel that their use of pornography has become an addiction describe the
cravings for new clips and images as insatiable, sometimes spending hours
clicking through videos in an endless stream of diversity, eschewing the
supposed goal of climaxing in the process.

● But it’s not just novelty that makes internet pornography potentially
addictive: It’s also filled with supranormal stimuli.

● In the past, a Playboy magazine or a sexy poster was no match for a real
live person. But pornography online is rife with supranormal stimuli:
artificially enlarged breasts and penises and exaggerated sounds and
actions. So some users of porn find themselves preferring the fake stuff
over reality. And however good internet porn is today, virtual reality is
poised to give people the perfect sexual experience, on demand, with
unlimited variability.

● Neurologically, how might internet porn use become addictive? Pretty much the same way that cocaine use can. There are many documented similarities between how our brains approach sex and cocaine. In fact, researchers believe that the reason cocaine is so addictive is that it mimics and accentuates the effect that sex has on the brain. So if a drug that mimics sex can become addictive, why can’t a technology that supersizes sex?

The Harm of Internet Porn Addiction

● Sex is pleasurable, and the internet provides ample opportunities to experience it. What’s the harm?

● The answer comes from the disturbing reports of self-proclaimed internet porn addicts, who find themselves meeting the criteria for addiction, as they experience insatiable cravings, lose control of their behavior, and find themselves facing negative consequences at work or at home.


Young men and women are more susceptible to the negative effects of porn consumption because their brains are still developing, and as a result, they are highly sensitive to rewards. Their brains are hyperplastic, particularly when it comes to sex, as they reach full sexual maturity and peak fertility. The wiring that occurs during this stage can have a lifelong impact on a person’s sexual health and well-being.

● Researchers have documented changes in ∆FosB expression in the nucleus accumbens of male rats through sexual activity that look very similar to the ones that are induced by addictive drug use. This increase leads to reward sensitization—or more dopamine activity in the anticipation of a reward, causing intense cravings and seeking behavior.

● But there’s also the problem of tolerance: With extended repeated exposure to porn, everyday sexual encounters in real life are less and less appealing. The dampening of pleasure even extends to orgasms induced by pornography, such that users are forced to search for ever more intense and novel videos or find themselves in a state of perpetual want with no satisfaction.

● There is now a growing literature mapping the brain changes that occur in people who show signs of internet porn addiction onto changes that are seen in individuals with substance use disorder. Of course, these conditions are not equivalent, but hypersexual disorder—whose proposed risk factors include easy access to sexual content—was seriously considered for inclusion in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, which is used by psychologists around the world.

● There’s also evidence that repeated porn viewing by heterosexual men can increase feelings of aggression toward women. And there’s increasing evidence that erectile dysfunction and other sexual problems have risen sharply in the past decade, tracking the proliferation of internet pornography.

Addictive behaviors are not new, nor have they been created by
the internet or other modern technological advancements. But
technology has made it easier to acquire stimulation that can lead
to the brain changes signaling an addiction.

Questions to Consider

1 What does watching pornography do to your brain?

2 Why is internet pornography so addictive?

3 Are teenagers more vulnerable to internet pornography’s negative effects?

Lesson 17 Is the Internet Hurting Democracy?

There is a strong interest in merging neuroscience with politics. Indeed, there are measurable and documented differences in brain function in people who subscribe to quite different political views. And the internet and other technologies can shape our behavior and ultimately give one candidate an advantage over the others. In the aggregate, these changes can affect the ways in which we make voting decisions and how we build and craft our communities in turn.

Political Affiliations

● In a large proportion of the developed world, there is an increasingly obvious divide between people who consider themselves liberals and those who self-identify as conservatives. Though what it means to be a Democrat or a Republican and which party represents the more liberal or conservative ideology in the United States has changed dramatically over the past 100 years, we can roughly distinguish individuals who are more comfortable with change and adaptation as liberals and those preferring stability and tradition as conservatives.

● As such, there are measurable differences in both the brain function and anatomy of people belonging to these 2 groups. There’s even evidence from twins reared apart that some of these traits are hereditary.

● Conservatism is associated with a larger right amygdala, an area that can signal threat and modulate fear-based memories. Conservatives would be more likely to be swayed by the emotional aspects of content, at least initially. And there’s evidence that they respond more aggressively to situations in which they feel threatened and that they have a heightened sensitivity to threatening facial expressions. Under these circumstances, a desire toward more stability makes sense, as stability leads to greater predictability and therefore less anxiety.


● Liberalism is associated with greater brain volume in the anterior cingulate cortex, an area that’s important for error detection and conflict monitoring, among other self-regulatory cognitions. Liberals are more comfortable with change because they might be better able to reason through a problem without emotions getting in the way. They are also often novelty-seeking by personality and therefore don’t mind and even enjoy unpredictability. It doesn’t make them anxious.

● A naïve view of politics—one that is difficult to hold in today’s sociopolitical climate—is that we come to our political decisions rationally and deliberately, weighing all options and objectively considering all the facts. The truth is that we come to these decisions with strong biases that distort our reading of the information available to us. We are partisan, and technology is making us even more so.

Tribal Mentality

● Our evaluation of social policies is skewed by the extent to which we consider the authors of those policies “us” or “them.” We value members of our own tribe more than members of a different tribe—and much more than members of a perceived enemy, or threatening tribe. The result of this in-group bias is that we even perceive and process information differently depending on whether we believe it comes from someone like us or from someone quite different.

● That means that well before we become conscious of it, our political or
social affiliations influence what we see, hear, and experience. We can
see this effect in both behavior and brain activity. What’s more, these
affiliations are easily manipulated.

● Studies in the lab suggest that we’re pretty quick to realign ourselves when
alliances shift, even if they’re arbitrary. For example, say a group of people
were just assigned to wear different-colored name tags. Within seconds,
they’ll find themselves perceiving the situation differently now that their
vision of their tribe has shifted.

● That’s because there are many different ways in which we can categorize people: race, socioeconomic status, language, gender, interests, education, marital status, age, fandom, dietary preferences, etc. And maybe you don’t have a clear view of your preferred name tag color, but you probably have a preferred political party, or at least an ideology. And while it might be easy to nudge you to think of people wearing your same-colored name tag as part of your tribe, it is very difficult to overcome a strong political or other identification that already is entrenched.

● That’s because our brains have adapted to a complicated world. There’s a compelling case to be made that one of the main factors driving the evolution of our ancestors’ brains during the time in which there was an explosive growth in brain size was social interaction.

● Robin Dunbar calls this the social brain hypothesis. We are motivated to
seek connections with others—to belong to a tribe.

● In order to become part of a tribe, we need to distinguish our people from other people, and we do that automatically and very quickly. Conscious deliberation is far too slow. We tend to get gut feelings about someone, and these feelings are essentially a manifestation of the shortcuts our brains have evolved to help us be good tribe members.

● These intuitions are part of what Daniel Kahneman terms System 1 thinking.

Categories of Cognition

● Kahneman divides cognition into 2 categories: System 1 and System 2. System 1 is fast; System 2 is slow.

● Our fast System 1 quickly appraises the world and makes suggestions to
System 2 concerning what we should do next. It’s automatic in that it
doesn’t require conscious deliberation or searching. In fact, much of what it
does is not available to us consciously.

● We often think that most of our actions are driven by our slow System 2,
but that’s generally not the case, because, as Kahneman points out, System
2 is lazy. It will choose the easy solution after giving it a quick once-over.

● This distinction between these 2 modes of thinking helps explain why debating someone whose political views are antithetical to your own can be so frustrating. Not only is it highly unlikely that you’ll change the other person’s mind, but it’s actually more likely that both of you will become even more convinced that your own view is correct.

● There have been many studies that have demonstrated this effect, showing that when presented with the same information, people with opposing beliefs, regardless of whether those beliefs tend toward liberalism or conservatism, will each come away feeling as though the evidence was in line with their beliefs. It’s the confirmation bias: the fact that we often look for and remember evidence that confirms or supports our beliefs rather than evidence that would disconfirm them.

● While we can’t blame the internet for our tendency to categorize people
into in-groups and out-groups or for succumbing to the confirmation bias,
many features of the internet exploit these human tendencies and make it
harder for us to remain objective and rational.

● One of the big changes that the internet has brought is a shift from a
relative scarcity of information—a problem that the internet was explicitly
designed to solve—to information overload. What happens when we’re
bombarded with information? We need to make choices about what we pay
attention to, and those choices are often dictated by System 1.

● System 1 can be thought of as emotional, in contrast to our rational System 2. This 2-tiered model of our thinking can help explain why we might vote for a candidate who is part of our preferred political party but whose policies or actions are incongruent with party values. This behavior can be baffling if we think that our choices are entirely rational, but it’s understandable given what we know about how these 2 systems interact and affect our behavior.

● We can see Systems 1 and 2 in action if we scan the brain of someone presented with this conflicting information. In a study published in 2006 and conducted at Emory University, for example, participants were shown evidence that their preferred political candidate made statements that were inconsistent with their party’s values.

● Under those circumstances, the experimenters observed activation in parts of the brain that are involved in emotional processing, feeling pain and other negative emotions, and detecting errors: the medial orbital prefrontal cortex, the anterior cingulate cortex, and the insula, which is a region important for our subjective awareness of our feelings and for the emotion of disgust.

● But when the participants were asked to reflect on what they just saw,
they showed activation of a region called the ventral striatum, which is
part of our reward and appraisal pathway. From other studies, we know
that what’s likely happening, then, is that the participants are taking the
opportunity to explain away or justify their beliefs that this is still their
preferred candidate.

Threatening Democracy

● The success of tech giants like Facebook, Twitter, Instagram, YouTube, and other social media companies rests on their ability to capture our attention and keep us scrolling. So they design their platforms to capitalize on the fact that our reward pathway is intricately linked to our emotions. If they can make us feel—whether those feelings are positive or negative—we’ll keep looking.

● And more importantly, we’ll behave in ways that benefit their advertisers
and, by proxy, their share price. They want us to click on things and like
things, so they tweak their content to maximize these behaviors.

● If you look at the history of tweaks that tech giants have implemented in
their home pages and apps, you’ll notice an increase in photographic and
video content as they realized the power of the image over words to engage
us. You’ll notice the increased prominence of numbers of likes, retweets,
shares, and so on, to show us that we’re part of a larger crowd—and if 10.6
million people liked this content, won’t you?

● Such changes have made these platforms a constant companion to many of us, notifying us and interrupting us dozens, if not hundreds, of times a day. It’s not that we get pleasure out of it; it’s more nefarious than that. It’s that we crave the little dopamine burst we get when we do come across some rewarding message or content, so we keep looking for it, like a rat who once got a food pellet for pressing a lever on what is the most addictive reinforcement schedule: intermittent.*

* As any casino manager knows, people will keep playing the slots if the rewards are randomized and unpredictable.

● We check our phones repeatedly because, once in a while, we get rewarded for it. And when we use our phones, or any other mobile-based technology, the companies that made the device and the apps that run on it collect some of our data.
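
● A toy simulation makes the pull of that intermittent schedule concrete. This is only an illustrative sketch—the 10% reward probability and the number of checks are invented—but it shows why the gaps between rewards are unpredictable, leaving no obviously “safe” moment to stop checking:

    import random

    def simulate_checks(p_reward=0.10, n_checks=400, seed=1):
        # Each phone check pays off with fixed probability p_reward:
        # a variable-ratio (intermittent) reinforcement schedule.
        rng = random.Random(seed)
        hits = [i for i in range(n_checks) if rng.random() < p_reward]
        # The gaps between rewarding checks are wildly uneven -- that
        # unpredictability is what sustains the checking behavior.
        gaps = [b - a for a, b in zip(hits, hits[1:])]
        return len(hits), gaps

    n_hits, gaps = simulate_checks()
    print(n_hits, "rewarding checks out of 400; first gaps:", gaps[:10])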

● And as Brittany Kaiser, a former business development director at Cambridge Analytica—the company accused of manipulating voters in elections around the world through targeted ads—has said, data has become the most valuable commodity in the world, surpassing oil. Whether the work of Cambridge Analytica actually had a measurable effect on the elections they were involved in remains an open question. But their methods reveal a threat to democracy that is very real.

● Cambridge Analytica is accused of collecting data from profiles of Facebook users and lumping them into voter categories, the most important of which they called persuadables. These were people who the company considered vulnerable to being nudged to vote for their preferred candidate in the case of the US election or their referendum position in the case of Brexit.

● How might this work, given the neuroscience of politics? Threats and
the perception of danger tend to be associated with a greater desire for
order and a resistance to change, which pushes people toward a more
conservative candidate. Bombarding a so-called persuadable with fear-
mongering messaging might trigger his or her alarm response, activating
the amygdala and inciting feelings of anger and hate.

Mark Zuckerberg has said that his entire mission in life is to connect people. But some of the practices that social media companies engage in to keep eyeballs on apps are leading to greater division and tribalism.

● Conservatives and liberals differ in the way that their System 1 appraises the same information—and the internet enhances this effect, leading to more and more division and tribalism. And while technology didn’t create differences between liberals and conservatives, it can accentuate them, and with our natural pull toward tribalism and our need to belong, the internet is a powerful tool to nudge people into behaving at the behest of their emotionally driven System 1.

Conservatives report being generally happier and more satisfied with their lives than liberals.


● But what’s most nefarious is that we’re often not privy to all the factors that
influence our decision-making. That’s why large social media companies
like Facebook and YouTube need to be watched carefully: Changes in their
algorithms can create outsize effects, pushing society in directions that
leave us more polarized and insulated—and ultimately less democratic.

There are now dozens of studies that contrast the structured and persistent judgment and decision-making observed in conservatives with the higher tolerance for ambiguity and uncertainty in liberals. Conservatives score higher on measures of the need for personal order, structure, and closure, while liberals tend to show more openness to new experiences.

Questions to Consider

1 What are some ways by which technology can shape politics?

2 Is technology a threat to democracy?

3 How does technology feed our innate tribalism?

Lesson 18 The Arts in the Digital Era


Successful social media companies know the value of the crowd: Twitter, Facebook, and Instagram would all be lost without their counts of likes, shares, retweets, and other evidence of social approval. We watch movies in the theater and we go to live shows because the crowd is a powerful enhancer of our enjoyment.* When it comes to social proof—situations in which the opinion of the crowd has a large influence on how we rate something—how confident we are in our own taste, or how ambiguous the object’s value is, matters. The less confident we are, or the more similar the choices, the more easily we are swayed by the crowd.

The Digitization of Art

● Technological innovations have made physical versions of movies, music, and even some visual art obsolete. Digitization—whether it’s of visual art, poetry, music, or film—is revolutionizing these industries by making it much easier for people around the world to discover and enjoy artists’ creations. Streaming and online art repositories are often thought to be equalizers, helping unknown artists build audiences without the gatekeeping functions of radio stations, movie houses, and so on. But are they?

● With access comes excess. Whereas there was a real physical limitation to
how many albums you could purchase and store, thereby limiting the set of
available options when you wanted to listen to music, Spotify and other cloud-
storage options have brought the world’s musical catalog to our fingertips.

* Laugh tracks, a common feature of TV sitcoms, make us think that what we’re
watching is funnier than it is, even when we find them annoying and know
they’re fake.

● But have you ever felt overwhelmed by all the choices? And if you have,
what influenced your decision on what to play or watch or experience
next? Probably some kind of recommendation. So here we go again
with the gatekeepers, except this time, we’re letting the companies that
provide us with access to the digital materials decide how they’ll make
the recommendations, whether it’s crowdsourcing or an algorithm or, increasingly, the highest bidder. And with a few gentle (or
not-so-gentle) nudges, companies like Facebook that can access hundreds
of millions of users have the power to make or break artists and their art.

● But, you might be thinking, surely the cream rises to the top, especially when the playing field is so level, as the internet was supposed to be the great equalizer, providing access to global audiences virtually for free. And to a certain extent, you’d be right: The best artists generally do well under any circumstances. That’s because they’re pretty easy to spot. The music of great artists—Freddie Mercury, Elvis Presley, Yo-Yo Ma, Luciano Pavarotti, Madonna, Beyoncé—stands out, even buried among millions of choices.

● In 2006, Matthew Salganik, Peter Dodds, and Duncan Watts built an online music player to test the hypothesis that a listener’s knowledge of a song’s popularity influenced how the listener rated it. They found 48 songs by undiscovered artists and created 8 versions of the player, each with its own chart indicating how popular the songs were. All of the tracks started with zero plays, and then as participants were recruited, the charts slowly ticked up, showing the downloads.

● Eventually, more than 14,000 listeners participated in the experiment, and each was assigned to one of the 8 versions—or a control condition, in which the songs appeared in random order so that the users couldn’t tell how often tracks were downloaded or played, but the experimenters knew.

● The exceptionally good tracks slowly rose in the charts in all of the
versions, and the exceptionally bad ones didn’t. But it’s what happened to
the ones in the middle that’s really interesting.

● The listeners were much more likely to download a track if they knew that
other people had listened to it and liked it. And the effect snowballed. This
same effect explains why authors are so keen on having their books reach
the New York Times best-seller lists early. By some counts, being on the list
can boost sales for first-time authors by more than 50%.

● To test just how powerful social acceptance can be, Salganik and his
colleagues conducted a follow-up experiment in which they flipped the
charts upside down: The hits became the worst ranked and the bombs were
at the top. This time, the previously top-performing tracks didn’t do as well,
but they still were more frequently downloaded than the ones that were now
at the top. And the bad tracks actually caused the listeners to give up on the
players entirely, dropping out of the study. So good press and social scores
can enhance good content, but they can’t save the bad stuff from failure.
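
● The snowball dynamic itself is easy to reproduce in a toy cumulative-advantage simulation. To be clear, this is not the researchers’ actual model—the appeal values and the weighting rule are invented for illustration—but it shows how visible download counts let early random leads compound:

    import random

    def simulate_market(n_songs=48, n_listeners=14000, social=True, seed=7):
        # Each song gets a fixed intrinsic appeal; each listener downloads
        # one song. With social influence on, a song's visible download
        # count multiplies its chance of being picked.
        rng = random.Random(seed)
        appeal = [rng.uniform(0.1, 1.0) for _ in range(n_songs)]
        downloads = [0] * n_songs
        for _ in range(n_listeners):
            weights = [a * (1 + d) if social else a
                       for a, d in zip(appeal, downloads)]
            pick = rng.choices(range(n_songs), weights=weights)[0]
            downloads[pick] += 1
        return downloads

    for social in (False, True):
        d = simulate_market(social=social)
        label = "with social proof" if social else "independent listeners"
        print(label, "-> top song's share of downloads:", round(max(d) / sum(d), 2))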

● Of course, there’s a lot of subjectivity involved in how we categorize good and bad when it comes to art. That’s why we can generally recognize the truly exceptional—good or bad—but ranking the stuff in between is what’s hard. Tech innovations, including digitization and platforms that allow us to access ever larger amounts of content, make this problem even harder, as the vast majority of the stuff is in the middle.

A Shift in the Value of Art

● The controversial and provocative Banksy, an anonymous artist who got his start making street art with subversive messages,* conducted an experiment that tells us a lot about the influence of social acceptance on our notion of value.

● In 2013, Banksy set up a stall in New York’s Central Park, alongside others
laden with touristy wares, and hired a nondescript 60-year-old man to
sell authentic, signed, original canvases for $60 each. His total sales for
the day were $420—not bad for an unknown artist or someone selling
knockoffs, like most of the other stalls. But the artwork he sold for $420
was estimated at the time to be worth $225,000.

● Banksy’s art is known for taking on the art industry’s practices in particular and societal values in general. He wants people to examine the perception of art as a commodity, even when it questions the value of his own work. In 2018, just as his famous piece Girl with Balloon was sold at auction by Sotheby’s for $1.4 million, a device embedded into the frame immediately began shredding it while viewers watched.

● Banksy’s oeuvre demonstrates that social proof is a powerful mediator of value in the ambiguous world of art. And technology has only added to its power, as Banksy’s art has benefited tremendously from the viral attention that it has garnered. If the whole world couldn’t watch the video of a self-shredding million-dollar painting, would it be as valuable?

● By providing access to the world, digitization of art also saturates markets with unprecedented volume, and it has become next to impossible to browse through the material and discover new artists without the help of some kind of curator or algorithm.

* An example is his piece The Mild Mild West, which shows a cute teddy bear
throwing a Molotov cocktail at police in riot gear.


● This saturation can also make it daunting for young artists to take the time
to hone their craft. Why spend 10 years learning to play the violin when
you can listen to Itzhak Perlman (or any other great violinist) whenever
you want—or better yet, play a digital violin just as beautifully yourself in
a virtual world? And the chances of your standing out from the crowd and
being able to make a living as a musician in the real world are dwindling.

● That’s because streaming services pay their artists a pittance. And the
public is becoming accustomed to being able to enjoy art—music, film,
and so on—without paying for it, or paying a small monthly subscription
fee that gives them access to massive catalogs of artwork.

● While streaming music services are often touted as solving some of the access
problems that the record label monopolies created, such services are not artist-
friendly. And the shift from owning music—such as buying a track, digital
or analog—to renting music—paying for access to a catalog as one does on
Spotify—is a big deal. It’s changing how we consume and value music.

● For one thing, we’re much more likely to pay more to own rather than rent,
naturally. Illustrating this point, Sam Rosenthal, who founded the label
Projekt, the first to withdraw its catalog from Spotify, wrote that 5000 plays
of music from his label yielded $6.50 in royalties from Spotify. If those same
tracks had been downloaded via iTunes, his income would have been $3,487.

● Furthermore, to earn a monthly minimum wage, musicians would have to have their music played almost 900,000 times a month. Given that the number of tracks available for listeners continues to grow at a rapid pace, these numbers are unsustainable.
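
● A quick back-of-envelope check of those figures makes the gap vivid. (The monthly minimum wage used here is an assumed US figure of roughly $1,160—about $7.25 an hour for 160 hours—introduced only for the arithmetic.)

    # Per-unit rates implied by the Projekt figures above.
    spotify_per_play = 6.50 / 5000       # ~$0.0013 per stream
    itunes_per_download = 3487 / 5000    # ~$0.70 per download
    print(round(itunes_per_download / spotify_per_play))  # ~536x more per unit

    # Streams needed per month to earn the assumed minimum wage:
    print(round(1160 / spotify_per_play))  # ~892,000 plays -- "almost 900,000"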

● Some labels blame Spotify explicitly for the cratering of physical album
sales in countries where the app is available. In the last few years, though,
Spotify has continued to dominate the music industry, and some labels
that initially withdrew have come back, hat in hand. Labels representing
very popular artists have more leverage, of course, to negotiate better
royalties. But bands that hope to build an audience and make a living
doing so are out of luck.

● It’s not just musicians who suffer. All kinds of artists, writers, and small
business owners are feeling the pinch.

● Almost outside of our awareness, how we value everything that can be
accessed online is changing. This shift in the value of art leaves little room
for amateurs and community theaters, smaller arts organizations, and places
where amateurs can train and perform. With the limited time we have, why
waste any of it on mediocre art when we can experience the greats?

The Benefits of Learning Something Hard

● Aldous Huxley wrote:

In the days before machinery men and women who wanted to amuse themselves were compelled, in their humble way, to be artists. Now they sit still and permit professionals to entertain them by the aid of machinery. It is difficult to believe that general artistic culture can flourish in this atmosphere of passivity.

● The move from active participant to passive consumer is particularly troubling when it comes to music. That’s because the Mozart effect—the idea that music can make us smarter—is only true if you actually play Mozart (or any other composer). We don’t benefit, beyond simply being less bored, from just listening to music while we work or study. But actively learning to play a musical instrument, or sing or compose music, can be beneficial in other areas of life, too, from learning how to read and regulate our emotions to mentally rotating abstract shapes.

● In order to reap the benefits of musical training—or any learning, for that
matter—we need to be willing to step outside our comfort zone, to try and
fail, and to take risks and get frustrated. Technological innovations often
make our lives easier, and we become less able to tolerate the challenges
inherent in learning something hard.

● Struggling only makes us better when it comes to learning. And learning takes time and direct experience, which our technological innovations are often robbing us of.


● Making things easy is not good for learning. The easier something is, the
less likely it is to leave a permanent trace in your brain. When learning
feels easy, we get illusions of competence. When it’s hard, we induce lasting
neuroplastic changes in our brains.

When we get something right on the first try, we think, Wow, I’ve
learned that well! But the truth is that performance in the moment
is not a good measure of long-term retention.

● Perhaps G. K. Chesterton, who famously coined the phrase “if a thing is worth doing, it is worth doing badly,” had it right when he wrote:

A man must love a thing very much if he not only practises it without any hope of fame or money, but even practises it without any hope of doing it well. Such a man must love the toils of the work more than any other man can love the rewards of it.

● When you love something so much that you’re willing to toil at it without
pay, then it will certainly change you. And like so many amateurs
throughout history, you might even end up changing it. But this won’t
happen if we allow technology to lure us into a world where we no longer
pay for art or entertainment and there’s no more incentive for amateurs to
toil away, as their flawed performances are perceived to have no value.

Questions to Consider

1 What are the consequences of digitizing music and other art?

2 Why should we try to save the amateur musician/painter/poet?

3 How does musical training shape the brain, and does listening to
music have the same effect?

Lesson 19 How AI Can Enhance Creativity


By the end of 1985, Garry Kasparov became the youngest world chess champion in history, at the age of 22. Earlier that year, he had beaten 32 of the top computer programs during an epic simultaneous chess exhibition in which he was the lone human player. It was a media hit, assuring the public that humans still outpaced machines in terms of sophisticated thinking. But just 12 years later, in 1997, Deep Blue, IBM’s $100 million supercomputer, finally beat the grandmaster. Two decades later, you can download a chess app that can play as well as a grandmaster for free on your smartphone. In 2018, Elon Musk reiterated his warning that artificial intelligence (AI) is among the most dangerous threats to humanity—“far more dangerous than nukes,” he said.

Chess Grandmasters and Go Experts

● For almost as long as psychologists have studied intelligence, they’ve used chess masters as exemplars of intellectual expertise. Seminal experiments in the 1970s demonstrated that chess experts don’t just play better; they play differently. There are qualitative changes that come with expertise, and they trickle down all the way to how experts see or perceive the board.

● One such change is called chunking. It’s a way of binning individual elements—say, chess pieces—into a meaningful whole, or chunk, that frees up memory. If you’re looking at a chessboard in the middle of a game and you’re not an expert, you’ll see what looks like a random assortment of pieces on squares. But if you’re an expert, you will know that the pieces are arranged in a meaningful way (here the king is stalemated; there a pawn becomes a queen). Those are memorable arrangements, and ones that are easy for a chess expert to re-create on a sample board.

● Novices need to memorize each piece and its location one by one, and
their working memory, which enables them to keep this information in
mind and to manipulate it if necessary, is quickly taxed. An expert sees the
same board and assigns a meaningful chunk (oh, it’s the classic Budapest
Gambit!). The expert needs only to keep this one piece of information in
mind to re-create the entire board.

● Simultaneously considering many possible moves and outcomes is a difficult task for a human. This process of recognizing patterns and assigning meaning to chunks is developed through extensive training, and eventually it becomes almost automatic.

● Early chess-playing computers were not very good at recognizing patterns and planning moves in advance using different strategies. Even to this day, while machines are strong in some areas where humans are weak, human toddlers can still best supercomputers at things we find easy, such as seeing and walking. But machines are catching up.

● When Deep Blue beat Kasparov in 1997, there was still a feeling that AI
could never really mimic human intelligence in any meaningful way.
After all, even Deep Blue used brute force rather than pattern recognition,
chunked knowledge, or other “human” methods, such as reading body
language, overcoming exhaustion, and avoiding mistakes. It just so
happens that chess can be won by brute force. But there’s another game
that cannot.

● The Chinese game Go, which is much more complex than chess, is played with black and white stones on the 361 intersections of a 19-by-19 grid. It’s too big a matrix to allow brute-force methods like exhaustive search to thrive and, as Kasparov writes, “too subtle to be decided by the tactical blunders that define human losses to computers at chess.” Instead, Go* is revered as a uniquely human game, requiring creativity, intuition, and perseverance. It’s a game of 9 simple rules but many complex strategies. The number of possible positions on the board exceeds the number of atoms in the observable universe.

* Go is thought to be humanity’s oldest continuously played game.

● That’s why it was so shocking to Musk and many others tracking the
ascent of AI when, in 2016, DeepMind’s AlphaGo beat one of the world’s
best Go players: 33-year-old Lee Sedol, who had 18 world championships
under his belt at the time. AI expert and Taiwanese venture capitalist Kai-
Fu Lee calls it China’s Sputnik moment, as it launched a frenzy of research
and work in AI in the East.

● There was a time when computer experts thought that AI would never
beat a human at Go. Sedol himself confidently predicted a 5-to-0 sweep
against the machine. He lost 4 games and only won 1. But what is most
remarkable about AlphaGo is not so much that it beat arguably the best
human player but that it learned to do so on its own (sort of).

● The engineers who programmed AlphaGo trained it to play using a database of 160,000 games that top Go players had played. Then, it began to play against versions of itself, using a technique called reinforcement learning. This is the key difference: Programmers can’t predict the moves that the AI will make; they emerge from the extensive training.
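
● To make “reinforcement learning” concrete, here is a minimal sketch of the core loop—act, observe a reward, update value estimates—on a toy 5-square world. This is only the bare mechanism (tabular Q-learning); AlphaGo’s actual system pairs deep neural networks with tree search, and every number here is illustrative:

    import random

    # Tiny world: squares 0..4; reaching square 4 pays a reward of 1.
    ACTIONS = (-1, +1)
    Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1
    rng = random.Random(0)

    for episode in range(300):
        s = 0
        while s != 4:
            if rng.random() < epsilon:      # occasionally explore
                a = rng.choice(ACTIONS)
            else:                           # otherwise exploit what's learned
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), 4)
            r = 1.0 if s2 == 4 else 0.0
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    # The policy emerges from experience rather than hand-written rules:
    print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(4)])  # [1, 1, 1, 1]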

● As Kasparov noted, Deep Blue was the end of human mastery of chess; AlphaGo is the beginning of truly intelligent AI. Deep Blue represents a form of AI that capitalizes on the strength of machines to crank through permutations—what we’ve been calling brute force. The reason your free chess app can best a grandmaster is that in any chess position there is a finite number of legal moves, so the machine can search through the possibilities far deeper than any human and find the lines that lead toward checkmate. In a sense, the chess app isn’t thinking; it’s just searching.

● But Go is more complicated—by orders of magnitude. And thus AlphaGo is a more humanlike AI. It learns from its mistakes. And that’s how it became creative. It wasn’t just replicating what humans had done in previous games. It chose moves that humans hadn’t seen before and, in doing so, encouraged the humans playing against it to behave more creatively as well.

● What’s fascinating about this matchup is how the computers, whether it’s
Deep Blue in chess or AlphaGo in Go, affected their human counterparts.
After losing to Deep Blue, Kasparov became obsessed with computer chess,
and it had a profound influence on how he’s spent the rest of his career.

● Fan Hui, one of the first expert Go players to be defeated by AlphaGo, continued to play against the AI, and his world ranking improved dramatically as a result. Both he and Sedol have remarked on how the machine made them better, more creative players, compared with similar training but against humans rather than machines.

Music and Art Created by AI

● Examples of AI composing music that is indistinguishable from what humans have produced have rocked the music and art communities. But just like Deep Blue and AlphaGo, once you can get past the idea that machines can mimic much of what we consider to be deeply human, they also push us past our own boundaries. They make us better.


Once we find out that a piece of art was created by a computer, it loses some of its value. And people get angry when they learn that what they thought was at the very core of what it means to be human—to be expressively creative—might be at stake.

● At Yale, Donya Quick has a computer program that can emulate Bach
and fool even the most sophisticated experts. But it can go a step further,
creating completely novel sounds and compositions. The program, called
Kulitta, can expertly meld styles, incorporating rules from classical and
jazz genres or composing music that doesn’t adhere to any of those rules.
Listening to Kulitta’s work makes you wonder what Bach might have
composed had he lived through the golden age of jazz.

● There are now robot reporters, chefs, painters, poets, and DJs. Granted,
many of them aren’t fooling the experts yet, though Forbes.com has even
outsourced the writing of corporate earnings previews to a company
called Narrative Science that generates them using an AI. They’re hard to
distinguish from what a human would write.

● Even if these artistic robots aren’t quite at the level of creative humans,
there are plenty of ways we can learn from them, both in terms of the
content they produce, which can lead us to think outside the box, and how
they learn to do it.

● Neural networks, reinforcement learning, and other methods of creating AI can make it difficult to figure out why a machine made a particular decision. Because their choices emerge from the learning algorithms on which they are trained, it’s hard to pinpoint the cause of a decision.

● But that’s true, too, of the vast majority of the decisions that we make,
including those of judges, mortgage brokers, and doctors. The difference is
that we can force makers of AI to find out the cause of the decision—but
we can’t force it out of other humans.

● But in order for the AI to explain a decision, it needs to know itself. In a sense, it needs to be made conscious of its own thinking—or have metacognition. For example, when classifying a face as male or female, the machine would have to explain how it came to make that choice: Was it a particular feature, a relationship between features, a probabilistic map of features? And in this, we’re bound to gain a deeper understanding of our own metacognition and ultimately even our own consciousness.

Using AI as a Partner in Creativity

● Perhaps the best use of AI, and the least scary, is as a partner. Since
computers are much better than we are at brute-force calculations and
search, we can outsource those tasks and benefit greatly.

● Say you want to build a new car. There are many different parts and
options to consider, each with its own pros and cons. By building a digital
replica of your design, you can simultaneously test many different versions
using digital simulations.
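
● As a toy illustration of that workflow, a digital parameter sweep might look like the sketch below. The range model and every figure in it are invented stand-ins for a real physics simulation:

    from itertools import product

    def range_km(battery_kwh, mass_kg, drag):
        # Hypothetical range model -- a stand-in for a real simulation
        # run against the car's digital replica.
        return battery_kwh * 1000 / (0.05 * mass_kg + 900 * drag)

    # Evaluate every combination of design options in moments, instead
    # of building and track-testing each physical prototype.
    options = product([60, 80, 100],    # battery sizes (kWh)
                      [1500, 1800],     # curb masses (kg)
                      [0.22, 0.30])     # drag coefficients
    best = max(options, key=lambda combo: range_km(*combo))
    print("best design:", best, "->", round(range_km(*best)), "km")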

● Before AI, engineers had to do much of this work themselves, testing and
retesting prototypes. But now, they can outsource it to machines and get
answers much more quickly. This means that they can spend the extra
time on the more creative tasks—innovating and connecting—which is
what we humans do better (so far) than our computer counterparts.

● This brings us to the question of what it means to be creative. The naïve view is that creativity involves some spark that comes essentially from nowhere, or perhaps from the deep recesses of the unconscious mind, and represents a completely novel and useful idea. But the creative process involves some proportion of preparation, incubation, insight (that’s the spark), evaluation or editing, and elaboration or iteration.

● So which part of the process, exactly, is the most human—the least amenable
to outsourcing to a computer? Probably the spark. But even this stage doesn’t
come out of a vacuum. And there are many ways that AI and other types of
computer programs can help us with each of the other stages.

● Interactions between different cultures in humans also seem to have a pronounced enhancing effect on creativity. Jackson Lu at Columbia University has documented this effect by looking at a variety of data sets, from carefully controlled lab experiments to surveys of more than 2000 professionals from 96 countries. He found that the more people interact with those from other countries and cultures, the more likely they are to show creativity in the workplace or become entrepreneurs.

● What if that foreign friend or partner was, in fact, an AI? Kasparov, Sedol,
and others who have interacted closely with AI report that their creative
work has benefitted from the interactions. Diversity helps us think outside
the box, and what’s more diverse than a machine that solves complex
human problems in a distinctly inhuman way?

● We prize creativity as one of humanity’s most precious gifts. Instead of watching in horror as machines begin to encroach on this revered part of our humanity, what if we harness their strengths to enhance our own? That’s thinking outside the box.

Questions to Consider

1 What did Garry Kasparov and Lee Sedol learn from being bested by
computers?

2 What are the differences between brute force and deep-learning algorithms?

3 How might we harness technology to make us more creative?

Lesson 20 Do We Trust Algorithms over Humans?


Because humans can be biased, we outsource tasks that require cold objectivity to computers. We believe that the computer, devoid of human weaknesses like prejudice or nepotism, will make rational and fair decisions. But it turns out that machine learning algorithms are just as biased as the data we feed them during training.

Machine Learning

● Consider mortgage lending decisions. Who gets a mortgage loan, and for
how much, should be a computational decision in which the risks and
potential revenues are weighed and assessed. A mortgage broker should
not be swayed by the desperate pleas of a single mother or the flashy car a
client shows up in. Mortgage brokers should be able to plug in the relevant
numbers, such as measures of employment security and income potential,
assets, and liabilities, and come up with a loan amount that the applicant
can afford.

● This kind of calculation is something that computers were built for—and in which they easily best humans. So when mortgage brokers began using algorithms to help them decide who should get a loan, they thought that they were removing bias and subjectivity from the equation and coming up with the best financial decisions.

● But the algorithms began redlining districts to resemble what mortgage distributions looked like during segregation, discriminating against African American communities. They were just as biased as, if not more biased than, the humans they were designed to replace or enhance.

● What exactly is machine learning?

● Machine learning can be roughly divided into 2 approaches. The expert systems approach is rule-based: Computers are taught to think by applying a series of logical rules that the programmers code in. If an applicant has more assets than liabilities, then a mortgage might be a good fit.
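
● In code, the expert systems approach might look like the hypothetical sketch below—every rule and threshold is invented here, and all of them are explicit and written in advance by programmers, which is exactly the approach’s limitation:

    def approve_mortgage(assets, liabilities, monthly_income, monthly_payment):
        # Hand-coded rules in the expert systems style: the machine can
        # only be as wise as the rules it is given.
        if assets <= liabilities:
            return False
        if monthly_payment > 0.35 * monthly_income:  # illustrative threshold
            return False
        return True

    print(approve_mortgage(assets=50_000, liabilities=20_000,
                           monthly_income=6_000, monthly_payment=1_800))  # True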

● But quickly things get complicated. Are those assets in danger of depreciating? Does the person have a job that can sustain a monthly payment? Is the industry he or she works in stable or volatile? Simple rules can’t always answer each of these questions. So the coders would turn to experts and write their wisdom into the algorithms.

● But then you still just get computers that are as smart as, but not smarter
than, humans. What if you could teach the algorithm to teach itself rather
than telling it what to do?

● Computer scientists who rejected the expert systems approach had the
goal of making the algorithm smarter than them. So they turned to the
most intelligent system they could find: the human brain. This strategy is
called the neural networks approach because it more or less mimics certain
aspects of human brain function. Layers of artificial neurons are built on
top of each other, and they send and receive information much like our
own brain cells.

● Instead of explicitly teaching these neural networks a set of rules, programmers feed them training data, just like we do to developing baby brains. A baby isn’t taught grammar, for example, but is spoken to for years before he or she learns to use the rules appropriately. The baby learns by experience.*

● Babies are incredibly effective learners, even though they aren’t given
explicit instructions. So why not try the same approach to train an AI?

● Neural networks do just that. They extract regularities in the data that
they are fed and find the patterns and write their own rules. And they’re
not new. AI pioneers in the 1950s and 1960s were already experimenting
with neural networks. But they found them to be fairly limited, so they
were relegated to the fringes of computer science.

* According to a study in which the scientists recorded all the sounds that
babies were exposed to over the course of many days, babies hear an average
of about 12,000 adult words a day. So by the time they’re 3 and speaking for the
most part in full sentences, they’ve heard some 15 million words.


Between 1956 and 2015, there was a trillionfold increase in floating-point operations per second, a measure of the processing power of a computer processing unit. To put this in perspective, the computer that guided the Apollo Moon landing was about as fast and powerful as 2 Nintendo entertainment consoles. It had 4 kilobytes of RAM, and nowadays we complain when a laptop only has 8 gigabytes of RAM.

● By the early 2000s, the availability of computing power and data had made
neural networks much more powerful. But there was still the problem of how
well they were trained, or what algorithms they used to make decisions.

● Almost overnight, in 2012, neural networks, now programmed using what’s called deep learning, began to show explosive improvements in growth and results. Suddenly, they became mainstream. Every year now, AI is getting much smarter.

Deep Learning

● One of the earliest applications of deep-learning AI systems was in the fields of insurance and loans. With a clear goal—to minimize default rates—AI can churn through data like credit scores, income, and current debt to learn to categorize borrowers as low- or high-risk. But eventually, deep learning will enable technological advances like self-driving cars and medical diagnoses to become a regular part of life.

● What makes deep learning so effective?

● We don’t teach babies how to recognize faces, yet almost as soon as their
visual acuity is sufficiently precise to distinguish facial features, they begin
to stare intently at faces and can distinguish their caregivers from, say, the
dog. This sounds trivial, but anyone who has tried to program a computer
to do the same thing will tell you that it’s hard.

● Real faces are hard, but they’re not as hard as pictures of faces or other objects, which are 2-dimensional. A baby can recognize its primary caregiver’s face from photographs. Once babies learn a concept like dog, they can apply it to many different types of dogs, even ones they’ve never seen before. Think about how hard that is. How do you decide what’s a dog and what isn’t?

● How we learn concepts like dog has puzzled psychologists for decades.
We still don’t really know. If we don’t know how we do it, how can we
program a computer to do it?

● That’s where deep learning comes in. We just give the program the same
experience and the same (or a similar) neural architecture, and it figures
it out!

● In traditional machine learning, you feed a bunch of input into the program (say, a bunch of photos of dogs), teach it to identify certain features (wagging tails, 4 legs, snout, etc.), and it spits out a decision: dog or not-dog.

● In deep learning, you feed the program a bunch of input—again, maybe a set of dog photos—and then you let it figure out what the defining features of the dogs versus the not-dogs are. That’s why you need so much data. But then it becomes smarter than a psychologist because it chooses which features are salient, ones that we might not even be aware that we’re using, such as the ratio between the diameter of the eyes and the length of the nose.
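
● As a rough illustration of the difference, here is a single artificial neuron that learns its own weights from labeled examples instead of being handed rules. The 4-“pixel” images and all the numbers are invented for the sketch, and real deep learning stacks many such units into layers:

    import random

    rng = random.Random(0)

    def make_example():
        # Invented 4-"pixel" images: one noisy pattern for dog, one for
        # not-dog. Nobody tells the model which pixels matter.
        dog = rng.random() < 0.5
        base = [0.9, 0.1, 0.8, 0.2] if dog else [0.2, 0.8, 0.1, 0.9]
        return [v + rng.gauss(0, 0.05) for v in base], int(dog)

    w, b = [0.0] * 4, 0.0
    for _ in range(2000):
        x, y = make_example()
        guess = int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
        w = [wi + 0.1 * (y - guess) * xi for wi, xi in zip(w, x)]  # learn
        b += 0.1 * (y - guess)

    tests = [make_example() for _ in range(1000)]
    correct = sum(int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == y
                  for x, y in tests)
    print("learned weights:", [round(wi, 2) for wi in w], "accuracy:", correct / 1000)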

● Whatever it’s doing, with such a huge amount of data, deep-learning AI is astonishingly good at extracting relevant features. In the last decade, we’ve gone from very crude facial recognition software to phones that can use your face as an unlocking mechanism.

The Automation Bias

● So what’s the problem?

● Most people will accept decisions made by algorithms more readily than
those made by fellow, flawed humans. The belief that automated decisions
are better than human-generated ones, even when presented with evidence
that the decisions are incorrect, is called the automation bias. It’s been
observed in many different situations, including in ones in which the
consequences of a bad decision are truly catastrophic.

● It might seem strange to think that a program, created by humans, can become more authoritative than the humans themselves. But our brains are lazy: If a solution to a problem seems right, we tend to latch onto it more quickly than we should, and we trust that computers don’t make stupid mistakes like we do.

● We’re cognitive misers. The resources we use to think are limited, so we tend to scrimp and save, cutting corners where we can. Behavioral economists and psychologists have demonstrated the many ways in which we take cognitive shortcuts—heuristics that can bias our decision-making, such as the fact that we tend to overestimate the frequency of an event that we just heard or read about or that pops easily into mind.

● People tend to think that flying is more dangerous than driving because
when there is a plane crash, the media coverage is extensive. It’s easy to
forget that car crashes are much more common because we don’t think
about them very often.

● In addition to the availability heuristic, social loafing and the diffusion of responsibility are 2 other human tendencies that contribute to the automation bias. Social loafing describes the observation that people tend to make less of an effort if they are working in a group rather than alone. The diffusion of responsibility is a phenomenon in which people are less likely to take action in the presence of others, as they assume that someone else will step in or is responsible.

● Of course, there are situations in which the opposite is true—that being observed by others makes you step up and perform better. But when the “other” is a computer, it seems that we are a bit too willing to delegate.

● It’s pretty easy to imagine how an error of omission might occur in the
presence of an automated aid; you assume that the computer is paying
attention so that you don’t have to. For example, in 1972, the crew of an
Eastern Airlines plane set the autopilot to keep the plane at an altitude of
2000 feet while they tried to figure out why the landing gear indicator
light hadn’t turned on. But they failed to monitor their altitude, and
when air traffic control warned them that their landing gear had not, in
fact, dropped, they were only 30 feet above ground, and it was too late to
prevent the crash.

● But what about errors of commission? These are errors made when a person
follows the lead of an automated aid—even though the choice being made
is incorrect and even goes against many hours of training.

● For example, in 1998, a group of researchers set up an experiment in which 25 trained pilots flew simulated flights in an airplane with which they were familiar. There were 4 opportunities for the pilots to make errors of omission, which they did 55% of the time.


● But there was also one opportunity to make an error of commission. The
pilots received a message mid-flight that one of their engines had caught
fire. This message was contradicted by normal engine parameters and the
absence of 5 other indicators that normally would be present if there were
indeed a fire. They were even reminded of these indicators during the
simulation training.

● But 100% of them chose to shut down the engine in response to the false
fire message. In a post-experimental questionnaire, though, they said that
the one message would not be sufficient to diagnose a fire in the absence
of other indicators and that in that situation, it would be safer not to shut
down the engine.*

● So the automation bias is real and has real consequences. As AI becomes more intelligent and useful, this bias has the potential to become even more problematic.

● Are African Americans more likely to miss a mortgage payment? If so, do we want to live in a society in which that assumption, even if it’s based on accurate historical data, is carried into the future? Don’t we want to build a society in which that’s not the case?

● What’s the solution? We can work on tweaking algorithms so that their criteria are more stringent when it comes to evaluating a borrower’s risk of default, for example, if he or she has been historically disadvantaged. And that seems to be the step that’s missing when we use algorithms to predict things that have a history of bias but that we hope will not be brought into the future we are building.

When it comes to decisions like whether to award bail or what kind of a sentence to give a convicted criminal, defendants often say that they’d rather a human judge be the final arbiter as opposed to an algorithm. But judges are also biased and fallible.

* Additionally, 67% of them reported a false memory that there was another
indicator present during their simulation.

● In many aspects of our society, there is bias baked into our history, so if we
use what happened before to predict what will happen, bias will come with
it. But if we can systematically program algorithms to become less biased
over time, could we use them to better society and eliminate bias?

● The AI Now Institute, created to fight bias in AI, recommends that core public agencies responsible for things like criminal justice, welfare, education, or health should not use deep-learning AI, whose decision-making processes are not fully transparent. Whereas reinforcement learning and neural networks can be more effective at human tasks like facial recognition than brute-force algorithms, they are just as susceptible to bias as the data on which they are trained. And if you don’t know why a decision was made by the AI, you can’t check its objectivity.

● While we’re still learning how they work, it’s important to have regulatory
bodies checking algorithms for unintended consequences and looking
under the hood whenever possible.

Questions to Consider

1 What is machine learning, and how does it work?

2 What can machine learning tell us about how we learn?

3 How is machine learning affecting decision-making and opportunities?

Lesson 21 Could Blockchain Revolutionize Society?

Blockchain is a social technology that has the power to reknit the fabric of society by changing our approach to currency and exchange. And it’s already affecting many different aspects of our lives, without our even knowing it.

What Is Blockchain Technology?

● The internet changed the way data and knowledge flow between people
and organizations. This information has become vast, but it’s also easily
changed and easily lost, which means it’s not particularly reliable or trustworthy.

● If we send an email to someone and it ends up somewhere else or gets changed somehow along the way, it’s not the end of the world. We can send another one without really losing anything. Exchanging information on the internet is really about exchanging copies of originals, and the originals aren’t any more valuable than the copies.

● But if we send money, it does matter if it ends up in the wrong hands or if it is lost along the way. The entire point is to get the money to the only person whom we wish to pay, intact and unchanged. A copy of the bill is not the same as the original.

● So when we exchange things of value on the internet—money or other goods—we use third parties that can assure us that the transaction is secure, such as banks or marketplaces like eBay.

● But banks and other marketplaces aren’t immune to security breaches by hackers or dishonest people. They also centralize power; we become dependent on them. What if instead we had a system that did not rely on any intermediary, was completely secure, and was capable of exchanging items of value between people with exceptional accuracy and safety?

● That’s what blockchain was meant to do. You might call it the internet
of value, as Don and Alex Tapscott do in their book Blockchain Revolution.


● When Jack Ma’s company Alibaba, the Amazon of China, went public, it raised $25 billion, making it the largest tech initial public offering of all time. It’s been a huge success.

Joichi Ito, director of the MIT Media Lab, said, “The Blockchain is to trust as the Internet is to information.”

● When Ma spoke of why he thought his company was so successful, he kept using the word trust. That’s because the company’s escrow system, called Alipay, holds a buyer’s money until the buyer receives the product that he or she purchased. That way, the seller is assured that the buyer will pay, and the buyer is assured that the seller will send the desired item.

● It’s been immensely successful because customers and sellers can interact
through the site and still be reasonably sure that they won’t be defrauded
because there’s this added layer of security through Alipay.

● Engaging with the internet requires a leap of faith, since it also affords some anonymity, which Ma realized as he founded Alibaba. We rely on intermediaries like Alipay, PayPal, and other services to keep us all honest and happy. In this way, the internet is shaping whom and how we trust.

● Blockchain is another step—actually, a giant leap—in that direction.

How Blockchain Works

● In response to the global economic crash of 2008, someone or some people named Satoshi Nakamoto—their identity is as cryptic as the currency they invented—came up with the idea behind bitcoin, a peer-to-peer electronic system of exchanging money.

● Nakamoto posted a paper in which they described a protocol for creating a system in which value is exchanged securely but there’s no intermediary, such as a bank or credit card company or PayPal. Instead, the system relies on social consensus, a lot of computing, and human nature.

● When we exchange value, we want to make sure that we don’t duplicate
the transaction. If you give someone $100 for a bike, you can’t give that
same $100 to someone else. And you don’t want the seller to sell that bike
to someone else at the same time, either. You both want the originals.

● But how do you know that the seller hasn’t sold the bike to anyone
else? And how does the seller know that you haven’t given that $100 to
anyone else?

● We could write it down in a ledger, which anyone can access—but it’s the
21st century, so let’s digitize it. That’s essentially what the blockchain is: a
digital diary of value exchanges.

● But even banks, with their expensive security systems, can get hacked.
Can’t someone just change the ledger to benefit themselves?

● That’s the core genius of the blockchain. It’s really, really hard—some use
the word impossible—to fake or hack.

● Each entry in the diary is tagged with something called a hash, basically a string of letters and numbers. But each new entry’s hash also incorporates all the previous entries, so it’s like a hash plus a record of all the other transactions. So if you want to change an entry, you have to change all the entries that came after it, too.
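
A minimal sketch of that chaining, assuming single-transaction entries (real blockchains hash whole blocks of transactions, but the principle is the same):

```python
import hashlib

def entry_hash(data: str, prev_hash: str) -> str:
    """Hash the new entry together with the previous entry's hash, so
    each entry implicitly commits to the entire history before it."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

ledger = []
prev = "0" * 64  # the first entry has no predecessor
for data in ["Alice pays Bob $100", "Bob sells Alice a bike"]:
    prev = entry_hash(data, prev)
    ledger.append((data, prev))

# Tampering with an early entry changes its hash, which invalidates the
# hash recorded with every entry after it.
forged = entry_hash("Alice pays Bob $1", "0" * 64)
print(forged == ledger[0][1])  # False: the chain exposes the edit
```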

● But that still sounds hackable. After all, computer algorithms can batch-
change things.

● So Nakamoto proposed another layer of security: Every entry also comes with a little puzzle to solve—one that requires a lot of work. The solution is called a nonce, a number or code that can be used only once. Once it’s been used, it’s no longer valid. And it has to satisfy a certain set of conditions, depending on the specific blockchain. For example, maybe it has to have a particular series of numbers in it or a value that’s greater than or less than some other value.

● In any case, the right nonce is hard to find, and finding it is proof that a lot
of work was done. That means that changing it, and all the other nonces,
would be even more work—so much so that it can’t be done very quickly.
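
A toy version of the puzzle, assuming the condition is a required run of leading zeros in the hash (one simple form such a condition can take):

```python
import hashlib
from itertools import count

def find_nonce(entry: str, difficulty: int = 4) -> int:
    """Try nonces until hashing (entry + nonce) yields a digest starting
    with `difficulty` zeros: cheap to verify, expensive to discover."""
    for nonce in count():
        digest = hashlib.sha256(f"{entry}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

nonce = find_nonce("Alice pays Bob $100")
print(nonce)  # finding this took many hashes; checking it takes one
```

Raising the difficulty by one leading zero multiplies the average search effort by 16, which is why redoing the work for a long chain of entries is impractical.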


● Now if the ledger were only stored in one location, presumably you
could take the necessary time and computing power to figure out each
puzzle and change every nonce. But what if it’s simultaneously stored in
its entirety on 5000 computers, distributed across the globe? Now this is
getting harder to hack.

● Additionally, every time there’s a new entry, the set of computers, or nodes, that each hold a copy of the original record has to approve it, validating the entry by checking it against the version of the blockchain that each node has. If a majority of these nodes approves the entry, it gets added to the chain.
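
A deliberately simplified sketch of that majority check (chain contents invented; real networks use richer consensus protocols):

```python
def node_approves(entry: dict, node_chain: list) -> bool:
    """A node approves an entry only if it extends the chain the node holds."""
    return entry["prev_hash"] == node_chain[-1]["hash"]

def network_accepts(entry: dict, nodes: list) -> bool:
    """The entry is added only when more than half of the nodes approve."""
    votes = sum(node_approves(entry, chain) for chain in nodes)
    return votes > len(nodes) / 2

honest_chain = [{"hash": "abc123"}]
tampered_chain = [{"hash": "zzz999"}]  # one node's altered copy
nodes = [honest_chain] * 4 + [tampered_chain]

print(network_accepts({"prev_hash": "abc123"}, nodes))  # True: the lone altered copy is outvoted
```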

● The blocks in the blockchain are essentially groups of entries (say, one spreadsheet in a ledger). They are chained together because each block also refers to the block before it and, through it, to all the previous blocks—hence the chain. The ledger is spread over many computers, and the entries are constantly checked by people who earn or have value embedded in the blockchain. Finally, it’s time-stamped and updated regularly (in bitcoin’s case, about every 10 minutes).

● It’s the Fort Knox of ledgers. Note that there’s no centralized node; it’s
completely decentralized and works on consensus. That’s why it has the
power to change the world.

Problems with Blockchain

● But blockchain technology is not immune to problems. Bitcoin—the currency that blockchain was first used to create—has experienced falling values. And in 2014, the largest bitcoin exchange at the time, Mt. Gox, handling 70% of bitcoin traffic, went bankrupt.

● When you buy bitcoin, you need to be able to identify yourself—with a name that is used in the ledger or, in the case of bitcoin, a private personal key. Mt. Gox’s problem was that these identities were stored in a relatively insecure file, which was hacked. About 850,000 bitcoins, amounting to more than $450 million, were lost or stolen.

● Mt. Gox now serves as a cautionary tale, and individuals and exchanges are much more careful about protecting their private keys. The episode also illustrates that the hacking of blockchains happens at their edges—the entry and exit points—rather than in the ledger itself.

● What’s more, there are plenty of companies that use blockchain, more or
less successfully. Just as a company is responsible for its own IT, which can
be good or bad (secure internet connectivity or an easily hackable Wi-Fi
network), different uses of the basic idea behind blockchain have different
strengths and vulnerabilities.

● There is also the problem of how the anonymity that blockchain technology provides lends itself to supporting shady activities. For example, the Silk Road* was a marketplace on the dark web, the part of the internet that is only accessible if you have specific kinds of software that allow users to remain anonymous. Because of this anonymity, users can engage in criminal behavior, such as buying or selling drugs. All the transactions on Silk Road were conducted using bitcoins, and the FBI estimated that about $1.2 billion in revenue had been collected over the 2.5 years that the site had been active.

* The Silk Road marketplace was founded and run by Ross Ulbricht, who called himself Dread Pirate Roberts, or DPR, online.

A Truly Decentralized Economy

● People talk about how the gig economy, fueled by tech companies like
Airbnb and Uber, has disrupted the workforce by decimating the value of
taxi medallions and hotel rooms. Financial industries have been relatively
unaffected in comparison, but blockchain has the power to change that.

● These companies promised to build a “sharing economy,” but that’s not what happened. Instead, we have companies that have ballooned in size because they’ve aggregated goods and services. Airbnb has aggregated empty rooms and apartments and built a big company on top of them, siphoning off a service fee on each transaction. Of course, there are other benefits to entice a host to list with Airbnb, including insurance and some vetting of guests. And guests get to access a large inventory of rooms as well as reviews from previous guests.

● But a blockchain Airbnb could still give users access to inventory and
reviews. And just as Uber destroyed the jobs of many taxi drivers, a ride-
sharing blockchain could destroy Uber’s headquarters. Blockchain presents
an opportunity for the creation of a truly decentralized economy.

● Because it is so easy to lie online, we don’t really trust what we read and
whom we encounter on the internet. But with blockchain technology,
we could become much more confident in the truthfulness of what the
internet provides. Imagine a kind of ledger of honest acts or integrity: A
person’s reputation could follow him or her to a new destination, opening
doors more easily. A person from a developing country in Africa could
prove his or her trustworthiness with a digital wallet and begin transacting
business in a new place immediately.

● Any contracts, licenses, or other documents would become much more portable; birth certificates, diplomas, and medical records would all be much easier to access and verify. Voting could become tamper-proof.

A New Model of Identity

● In the developed world, we have to protect our identity, and our actions
online and off are surveilled by any number of organizations, from tech
giants like Facebook to the government. But if you’re living in a developing
country, you might have the opposite problem. Even in the US, 7% of
people don’t have any identification, making it impossible for them to vote
or open a bank account.

● This lack of documentation can have devastating costs. According to a UNESCO study of children in Thailand who belong to hill tribes, the single greatest factor putting them at risk for human trafficking is the absence of an ID or citizenship status.*

● Though these things are traditionally the responsibility of the state, blockchain technology has the potential to put the establishment of identity back into the hands of the citizens. Individuals could prove who they are via the accumulation of their own actions and assets, not through government-issued documentation. This is the idea behind a decentralized, digital, self-sovereign model of identity.

A digital ID system in Estonia is a step in the right direction. Although it’s still a physical card issued by the government, it contains a chip that allows the user to share information with private and public services, from health care to voting, which now happens on personal smartphones and computers.

But this voting system, because it is centralized, remains vulnerable to hacking. When governments digitize our identities and store them centrally, we are setting ourselves up for potential disaster. A blockchain version would be much more secure.

* Even refugee camps, set up to help the poorest of the poor and the
disenfranchised, are exploited by traffickers who recognize this vulnerability.


● Blockchain could also change how we define our identity. The internet has
allowed us to craft our digital personas to some extent, and now we could
extend that control over our identities offline, too.

Changing the World

● Blockchain might soon make economic activity a direct peer-to-peer transaction, eliminating the need for centralized banking or any kind of intermediary.

● Blockchain could disrupt our economy in ways that could help eliminate
poverty and inequality. It could bring the billions of poor people without
access to a bank into the world’s exchanges.

● What happens when we work together to create a trusting society, with checks and balances, but no central authority? How does that kind of setup change not only how we behave but also how we think and relate?

● Michael Casey, coauthor of The Truth Machine: The Blockchain and the
Future of Everything, thinks that this technology has the potential to make
the world “flatter”: to take the power away from the major value holders on
the internet, such as Google and Apple, and hand it back to the people.

● The core idea is that if you incentivize people to take care of a ledger by giving them a piece of the value of the ledger and then you make it really hard to cheat, you’re building trust on the backs of many, many individuals rather than on an institution. Perhaps in the future the very nature of how and whom we trust will be fundamentally changed by this new technology.

Questions to Consider

1 What is blockchain, and what is it best used for?

2 What are the ways in which blockchain can help poor people?

3 What are the benefits and costs of putting our identity into digital
formats?

Lesson 22: Effects of Technological Metaphors on Science

Metaphors are indispensable. They spark our imagination, help us hypothesize and refine theories, interpret data, and share ideas with other people. But as much as metaphors help us develop understanding, they can also constrain our thinking and lead us down the wrong path. They can impede scientific progress by encouraging scientists to ignore parts of a problem that don’t easily map onto the dominant metaphor, and they can perpetuate injustices and reinforce public misunderstandings. Since technological innovations can also inspire new ideas, they have an oversize influence on the types of metaphors that scientists use. This is especially true when it comes to metaphors about the human brain.

The Computer Metaphor

● From about the late 1940s up until the last decade, the computer has been
the dominant metaphor used by scientists to understand the brain.

● In the 1960s, with the development of computers that were capable of complex reasoning, the similarity to the brain became more apparent. Both received input from the environment, performed some computations, and then executed actions accordingly.

Metaphors are indispensable—both when we’re trying to understand something ourselves and when we’re trying to describe something to another person. After all, what better way is there to gain an understanding of something new than by comparing it to something we already know?

● Back then, as Paul Cisek describes in an influential paper published in 1999 in the Journal of Consciousness Studies, the computer metaphor provided mechanisms to address 4 unanswered questions puzzling neuroscientists.

— Cognition—how perceptions are converted into actions—could behave like a computer program by creating internal representations that are then manipulated by a set of rules.

— Internal states like memories or thoughts could have a physical basis, something that was hard to imagine before the digital storage of information.

— Brain anatomy and function could be compared to hardware and software, respectively, suggesting that software (the psychological phenomenon, or the mind) runs on hardware (the biology of the brain).

— The study of psychological phenomena could be formalized with equations, logic, symbols, and information theory.

● This computer metaphor was and remains so compelling that it’s hard
to talk about the brain without invoking it. We talk about processes,
representations, wiring, programming, encoding, storage, etc.

● But it’s in part because metaphors are so influential that they can also limit understanding. After all, they are not wholly correct. And they can influence our thinking in ways that we’re not fully aware of. Research has shown that metaphors can nudge our thinking in one direction or another and can lead to confusion when they don’t quite map onto the concept that we’re trying to understand—and that we’re not very good at noticing their influence.

● Some scientists have argued that the computer metaphor has become so pervasive when talking about the brain that we almost can’t describe cognition or brain function without referring to it. We talk about cognitive functions like working memory, decision-making, and analogical reasoning as distinguishable components with specific hardware (that is, neuroanatomical) underpinnings.

● The problem with thinking about the brain as containing hardware or being hardwired—the idea that circuits are set in place—is that we then tend to think of machine parts that don’t change. A computer’s hardware is by definition not malleable. But the brain is a biological organ. It’s not infinitely plastic, because it must contain a record of our experiences and learning, but it’s also by no means static.

● Arguably, a fundamental task for the brain is to learn. When computers learn, they don’t change their wiring or hardware. But brains do. Learning is reflected in brain changes ranging from large-scale network modifications to the tiniest alterations on cell membranes. Because learning is such a dynamic process, it’s also prone to failure or errors, something that can be hard to incorporate into the computer metaphor.

● Brains also evolve, following the rules of evolution, just like any other
organ or organism. Computers don’t. They are engineered for specific
environments; they don’t adapt, for the most part. There are 2 main
corollaries of this fact that are often ignored because they are not part of
the computer metaphor.

● First, Mother Nature is a tinkerer, which means that brains reuse, recycle,
and refine existing functions or traits, and studying their building blocks
in other animals can provide many insights into how they work in humans.
Consciousness is a great example. We tend to think of consciousness as an
emergent property of the human brain, often like an on-off switch. But
consciousness is by no means binary. It appears in more primitive forms
in animals who feel pain or other emotion analogs, or who can remember
both what and where something happened, or who behave as though they
understand justice.

● The second problem with ignoring the evolutionary trajectory of the brain is
that while computers are designed, hopefully optimally, to perform certain
tasks, brains are not. So even if we can do those tasks, we probably don’t do
them as effectively or efficiently as we could if they had been engineered.

● Memory is a great example of this problem. It’s hard not to think of memory
as a 3-stage process that involves encoding some information, then storing it,
and finally retrieving it, with little to no alteration between steps.

● But that’s far from how human memory actually works. We can encode and store information that we are not even aware of, and it can influence our behavior outside of our explicit knowledge. We also alter memories whenever we retrieve them; instead of pulling out a file and reading it as it was written, we rewrite it as we retrieve it! Memory is constructive, rather than veridical or objective.

● In fact, memory isn’t even about the past at all; instead, it’s a tool we use
to help us predict the future. And this is not at all what memory is in
a computer. So when we compare our fallible memories with what we
can store in bits on a silicon chip, we can’t help but feel inadequate. But
if we consider memory a way of harnessing the past to imagine future
consequences, then we realize that what we’re capable of exceeds any
computer ever built (so far).

● These examples demonstrate that metaphors go beyond simply being tools we use to describe things. They can shape how we think. When considering abstract concepts, such as justice or consciousness, we rely on our experiences, previous knowledge, and physical objects to understand them. Justice is a scale, balancing right and wrong, or a blind lady holding a sword and shield; consciousness is a switch, or a stream, or a level.

● And that’s the problem with how powerful the computer metaphor has
become: It’s influenced scientific thinking to the extent that it’s now
starting to hold us back.

Anatomy and Function

● Thinking about hardware and software encourages us to think of brain anatomy and function as separate. Anatomy dictates function: If you lose the part of your brain where language resides, you’ll have trouble speaking.

● But unlike in computers, function also dictates anatomy in the brain. With
training, you can repurpose another part of your brain to take on speech
function or even regain some function in the part that was damaged.

● That’s a pretty simple part of the metaphor to think beyond, though. What’s more difficult is overcoming the temptation to think of some part of the brain “reading” or “using” the software. Computers have users; brains are users.


● Some cognitive scientists have argued that the mind and brain are so
distinct conceptually that studying the brain tells us nothing about the
mind. There’s some truth to that: Many neuroimaging studies have been
overinterpreted by either their own authors or the media.

● Just because listening to music activates your reward system, that doesn’t
mean it’s just like an addictive drug, which also activates overlapping
circuits. If your amygdala lights up when you see an ad for a political party
whose policies you disagree with, it doesn’t necessarily mean that you’re
afraid, even though we know that the amygdala plays a role in learned fears
and is active when we are exposed to something we’ve learned to fear. But
it does much more than that.

● So brain data aren’t informative in and of themselves; we have to consider them in the context of the behavior that accompanies them.

● If you feel scared when you see a rival political ad and your amygdala lights
up, then we might be able to say that the amygdala activation underlies the
fear you feel. But so what? We know you’re scared because you said so or
because we saw other types of behavioral changes indicative of fear, such
as sweaty palms or nervous fidgeting. What does the amygdala activation
information add?

● If we handed a bunch of engineers from another planet an iPhone and gave them all the tools we have to reverse-engineer it, would they be able to tell what it’s for and how it works? Probably not.

● Cognitive scientists have even set out to prove this particular point. One
group took a microchip and threw a number of cutting-edge neuroscience
tools at it. They mapped its connectome, tracking all the connections
between different parts. They lesioned parts of it at a time to observe the
effects of damage to different parts. They measured local field potentials,
the electrical activity of the circuit. They learned a lot. But they didn’t get
anywhere near understanding what it was actually for—or what it does
and how it works. It turns out that it was a chip made by Atari with which
people could play video games.

Memory

● Even if we mapped out every cell, every connection, and every chemical
and electrical change in a massively complicated human brain, would we
understand consciousness? The argument here is no.

● But here again is where the computer metaphor fails and even leads us
astray. Take memory as an example. Computer memory fails only when
the hardware fails. But if everything is in working order, you’ll get the
same answer to a query every time.

● Human memory is dynamic. We know that from measuring memories in humans, even those with all parts in working order. Expose 5 people to an event and you’ll get 5 different stories of what happened. Ask each of them to tell the story 5 times and you’ll get 25 different versions. Why?

● The answer is in how memories are constructed and reconstructed in the brain. When you’re experiencing the event, a set of neurons is active. Then, you sleep and those neurons replay the day’s activity. We can track these activation patterns. Then, when you wake up and remember, the activity is regenerated, but it’s been culled—what the brain thought were unimportant details are no longer part of the trace. The more you remember, the more ingrained that pattern becomes. Damage that part of the brain and you lose the memories that are still reliant on the activity of those cells.

● But over time, some memories are so ingrained that the pattern of
activation that represents them is not the same as when they were first
encoded. So you can damage the part of the brain where they were initially
laid down and still extract information about what happened because there
are intact parts of the brain where the information now resides.

● This type of memory formation is very different from how a computer would solve the problem. If you want to mimic a human brain function using artificial intelligence, it can actually be more efficient to mimic the way the brain would solve the problem—by feeding it examples and letting it discover the rules or patterns itself.
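
As a toy illustration of that approach, here is a minimal sketch, assuming we want a model to discover the logical AND rule from labeled examples rather than being programmed with it (the data and learning rate are invented):

```python
# A single artificial neuron learns the AND rule purely from examples,
# adjusting its connection weights after each error.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):  # a few passes over the examples
    for (x1, x2), target in examples:
        output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - output
        # Strengthen or weaken connections in proportion to the error,
        # the way synapses change with experience.
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        bias += rate * error

print(w, bias)  # weights now encode a rule no one wrote down
print([1 if w[0] * a + w[1] * b + bias > 0 else 0 for (a, b), _ in examples])
```

The final weights were never written by a programmer; they emerged from the examples, which is the sense in which such systems learn the way brains do.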

● Our inability to imagine how memory emerges from patterns of activation stems from our inability to imagine the brain as anything but a computer.


● We can’t talk about memory without referring to terms like encoding (the
process by which information or experience gets into the brain), storage
(how it’s represented, whether in a pattern of neural firing or the strength
of connections between neurons or some molecular changes), and retrieval
(how the activity of the brain yields the experience of pulling up the
memory or information).

The fact is that, unlike a computer, when we turn off our brains, the knowledge we accumulated ceases to exist. And as technological innovations begin to be influenced by the biology of the brain and its close-knit relationship with experience—as in the neural network deep-learning model for developing artificial intelligence—we might find ourselves learning more about our brains by simulating them rather than by trying to reverse-engineer them.

Questions to Consider

1 How have technological innovations influenced scientific thinking?

2 Why is the computer metaphor for the brain so compelling?

3 Why does this metaphor hold us back?

Lesson 23: Robots and the Future of Work

During the information age, we designed computers—
machines that can do things for us. Initially that meant
things like counting and monotonous computations, but
now they can play chess, compose music, and learn. In
fact, we’ve built machines that are so good at humanlike
tasks that they are putting many humans out of work.
Now people are talking about the coming of the second
machine age, this time with machines that are smarter
than we are. What will the future human workforce look
like, and how should we prepare ourselves and the next
generation for the robot revolution?

The trucking industry is often referred to as the next big group of human workers soon to become obsolete. But the robots are coming for many white-collar jobs, too, and accountants and doctors are also possibly on the chopping block.

Smart AI

● Superintelligent robots—the kind that populate science fiction books—are likely still at least a few decades away. But algorithms are already shaping our choices, our minds, and our society. It’s not walking, talking robots that are the real threat; it’s an intelligent program with an internet connection whose goals are misaligned with ours.

● But a machine can’t have goals, right? Tell that to a heat-seeking missile or
even a Roomba. Conscious, evil robots are not what artificial intelligence (AI)
researchers have nightmares about, but they do worry about the unintended
consequences of smart AI and that someday AI might become so smart that it
prevents us from tweaking its settings or changing its programming.


● If we define intelligence as the ability to accomplish complex goals, as MIT cosmologist Max Tegmark does, then it doesn’t matter much whether the thing that is intelligent is animate or inanimate. And there’s no question that AI today can accomplish complex goals. For the most part, though, those goals are aligned with what the programmers want the machine to do.

● But programmers being human, and humans being unable to consider all
the myriad consequences of complex actions, there is plenty of room for
error and misalignment of goals.

Digitization across the Board

● Tesla and Uber want to build self-driving trucks, which will save the trucking industry money and presumably be easier on the environment, as they can be powered by electricity. In order to even be allowed on the roads, they’ll have to be safer than human-driven trucks, which kill 4000 people a year in the United States alone. And they will address the shortage of drivers in the trucking industry, a shortfall estimated to reach 175,000 jobs by 2024.

● Sure, some truck drivers might be out a job, but other jobs will be created,
industry insiders say. And the self-driving truck innovation illustrates a
scenario that many techno-optimists hope for across all industries: We
leave the things that humans find challenging or boring to the computers
and free up time to do the things we really love to do. Would a trucker
prefer to have to focus on the road for long stretches of highway or take
that time to FaceTime with his or her kids or even exercise in the back of
the truck?

● Maybe an AI-enhanced future will lead to greater expansion of the human mind, enabling us to solve the very real problems our species and our planet face. That digital utopia depends on continuing growth in salaries and improvements in quality of life across income levels, like what happened in the US between the 1940s and 1970s. In general, as the economy grew, so did the average salary, no matter the income level.

● But after the 1970s, things began to change, in the US especially. Lower-income households have stagnated, while wealth and salary increases have preferentially benefitted the top 1%. People with graduate degrees have seen a 25% bump in salary; those without a high school diploma have had to endure a 30% drop. And the difference in wages between those with and without college degrees has continued to grow.

● Since 2000, the situation has gotten worse for workers, as owners have
been taking home more of the corporate profit, leaving employees with less
and less. This trend will not be improved with automation. In fact, unless
something changes, those who own the machines will get an even bigger
proportion of profits than the workers.

● That’s in part because digitization of work products—tax-preparation tools, diagnostic algorithms, films, music, books, articles—means that additional copies can be made without the need to hire workers to make them and that they cost very little, if anything. Tech giants like Google and Apple and streaming media producers like Netflix can afford to hire fewer employees and still be among the world’s most valuable companies.

● Digitization in the tech economy also increases the income differential between average or even very good workers or content creators and the stars. J. K. Rowling became a billionaire not because her work was exponentially better than that of many other fantasy writers but because her books were turned into films, video games, and other content that is easily spread around the world. But for every J. K. Rowling, there is a growing number of writers who can’t make a living anymore because the royalties they receive from digital versions of their work are laughable.

Session work, in which musicians are hired to record background music for ads, films, and other media, is threatened by automation, as digital orchestras are becoming much more convincingly human-sounding.

Preparing for the Robot Revolution

● Given the growth of the tech industry, it would make sense for parents to
encourage their kids to learn to code and become proficient at computing.
And having a general understanding of programming and other ways of
utilizing digital tools is likely essential for most workers moving forward.

● But among AI experts and futurists like Kai-Fu Lee and Max Tegmark,
there’s an interesting trend: They are not advising their kids to follow
in their footsteps, instead telling them to find jobs in industries that
are not predicted to be in imminent danger of being fully automated or
revolutionized by AI.

● Lee suggests that nursing and other caregiving professions will grow and that jobs in accounting will shrink. Tegmark advises kids to ask 3 questions about potential careers:

— Does the job require interacting with people and using social intelligence?

— Does it involve creativity and coming up with clever solutions?

— Does it require working in an unpredictable environment?

● These questions are telling in that they capture what Tegmark and others
see as job characteristics that will be hard for computers to master.

● Computers are good at computing and automated tasks. They will be better than most humans at complex math. And that includes computer programming.

● More and more, computer programming itself is becoming automated. Why write lines of code for simple functions when you can just copy and paste a script that does what you need? And why write the script if you can get the machine to do it?

● That’s why teaching coding to everyone might not be the best way to
prepare the majority of students for the robot revolution. You don’t need to
be an expert in French literature to speak French.

● Instead, we need to prepare kids to be better humans. It’s harder to automate the bedside manner of a great nurse than the keen diagnostic skills of a doctor. That means that we need to change how we think about education and career development. Acquiring knowledge might not be the best path forward, because rewarding students for regurgitating the things they read or were told is antithetical to developing creative cognition.

● Flexibility in thinking will become increasingly valuable, while crystallized intelligence—remembering lists of facts—can already be fairly well handled by Google.

● Daron Acemoglu and David Autor at MIT suggest dividing work into
a 2-by-2 matrix: On one side are tasks that are cognitive versus manual,
and on the other are tasks that are routine versus nonroutine. Routine
manual tasks are ones that were most adversely affected by the industrial
revolution. They also won’t fare well in the coming robot revolution.

● But the same goes for routine cognitive jobs like tax preparation
and dressmaking. Nonroutine tasks—whether they are manual, like
hairdressing, or cognitive, like science—will be most likely to weather the
coming changes, and even grow.
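
Filling in the matrix with the examples from the preceding bullets (the routine-manual cell is described in the text rather than named, so that label is inferred):

                Routine                           Nonroutine
   Manual       assembly-style factory work       hairdressing
                (hit hardest by the industrial
                revolution)
   Cognitive    tax preparation, dressmaking      science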


● According to a report by McKinsey & Company in 2017, humans remain better than machines at the following things: new pattern recognition, logical reasoning, creativity, coordination, natural language understanding, social and emotional skills, and moving around unpredictable or varied environments.

● What machines are good at—and will only get better at—includes
computation and programming. Soon, robots and other AI will become
too complex for the majority of us to understand, even with extensive
training and education. So teaching all kids to code as a way of preparing
them for the future might be misguided.

● What every kid will most likely have to do is work alongside machines in
some capacity. That’s why a little coding is important; they need to be able
to run and maybe fix their coworking robots.

● Instead of asking kids to learn how to write code to program or improve an algorithm, what about helping them develop an understanding of how that algorithm might shape their choices, behavior, or environment? We already have a deeper knowledge of how robots work than how we work.

● By automating the routine, repetitive, and predictable tasks that we’re not built to optimize—because we get bored, tired, and frustrated and make mistakes—we’re giving ourselves the opportunity to spend more time doing things that our brains are optimized for: social interaction, imagination, pattern recognition, and innovation.

Attitudes toward Robots and AI

● A Pew Research Center report surveying more than 4000 Americans in 2017 found that many adults have beliefs and attitudes about robots and AI that are misaligned and that reflect a misunderstanding of how their fellow humans behave.

● According to the report, 75% of respondents agree that self-driving cars will enable older adults and those who have disabilities to live more independently. And 39% think that fewer people will lose their lives in car accidents if driverless vehicles become common. But 30% of people think that the roads will be less safe. And 56% of respondents said that they would not ride in a self-driving vehicle. Add a human supervisor and the majority of people in the survey say they would feel better about it.

● The question, then, is what does the human supervisor add? When it
comes to road safety, humans are generally poor decision-makers, choosing
to drive under the influence or without seat belts or at illegally high speeds.
AI wouldn’t do any of these things.

● Most people who say that they wouldn’t want to ride in a driverless car
express not wanting to cede control to a machine that might be making life-
and-death decisions. But we cede that control to a fellow human every time
we ride with one and to many humans when we join them on busy roads.

● Thinking that a machine would make a less ethical choice than a human
is a great example of how poorly we understand ourselves. After all, there’s
plenty of evidence that humans make poor choices behind the wheel, ones
that are self-serving or simply reckless, such as texting while driving. The
self-driving car in the future will never make that mistake.

● There are 6 million car accidents in the US every year, and 90 people die
every day, despite the fact that we’ve had many decades to perfect this
particular skill. Clearly, we’re just not that good at driving.

Most Americans express distrust that algorithms could be used effectively to hire workers. About 75% say they wouldn’t apply for a job in which a computer made the hiring decision. They think that the hiring process would be too impersonal and that important human traits would be overlooked—that you can’t tell how someone interacts without a human interaction.

But, as numerous studies have shown, human-led hiring decisions are often imperfect. There are gender, age, race, and other biases that influence decisions. Interview performance is a notoriously bad predictor of job fit. And we’ve had to introduce checks and balances like blind auditions for orchestra positions to get around human bias.


● Ultimately, what this and other research shows is that our understanding of what computers are good and bad at, and of our own strengths and weaknesses as humans, is flawed. And we’ll continue to be surprised by what we can and cannot outsource to AI and robots.

● But what’s clear is that the second machine age—the robot revolution or
the automation of the workforce—is already disrupting not only what
we do for work but how we should educate our kids. And while we need
to heed Elon Musk’s call to be thoughtful about the consequences of
continuing to develop ever smarter AI, we also need to recognize that AI is
already shaping us and our opportunities.

Questions to Consider

1 What can we learn about being human from studying robots?

2 What are humans better at than robots?

3 How should we prepare our kids for an automated future?

Lesson 24: Redefining What It Means to Be Human

The exercise of creating and thinking about robots helps us see how we humans are different from them and what we consider to be most human.

AI Affecting How We Treat Each Other

● Ayanna Howard, an engineer who builds intelligent machines that interact with humans, is pretty clear on what differentiates us from artificial intelligence (AI): We make mistakes. She describes a robot built to lead people out of a burning hospital and notes that when the robot behaves perfectly, humans don’t trust it with their lives. It’s just a machine, after all. But if the robot makes a “mistake” and then corrects itself and even apologizes, the humans will follow it anywhere—because it seems more human.

● The fact that we aren’t perfect is, of course, not the only thing we learn
from studying robots, AI, and other technological innovations.

● In one experiment, Nicholas Christakis, who runs the Human Nature Lab
at Yale, and his colleagues had participants work on tablets alongside a cute
humanoid robot to lay railroad tracks in a virtual world. In one condition,
the robot made fairly bland comments, ones that we expect robots to
make. In the other condition, the robot made mistakes and owned up to
them. “Sorry, guys,” it would say in a perky voice. “I know it might be hard
to believe but robots make mistakes too.”

● It turns out that the human participants not only liked that robot more,
but they also worked better together: The confessional robot, as Christakis
calls it, helped the humans communicate with each other and collaborate
more effectively.

● In other words, a robot made humans better at humanlike things like connecting with each other.

● Dystopic views of the robot revolution—how AI or even technology in general might change us—often include the isolating effect that these technologies can have on us, disconnecting us from each other as we connect to our devices instead. But what Christakis and others have shown is that technology can both nudge us toward more prosocial behavior and away from it.

● In another of his studies, Christakis and his collaborators set up a virtual social cooperation game, in which players were assigned a bunch of money and then could choose to either hoard it or share it with their neighbors over several rounds. The experimenters promised to match any donations made to neighbors, thereby doubling their contribution.

● Early in the game, human players acted generously about 2/3 of the time,
expecting that if they made the donation in one round, their altruism
might be reciprocated in subsequent rounds. But when the experimenters
added a few bots that posed as humans but behaved selfishly, keeping all
the money themselves round after round, the human players eventually
stopped cooperating. The actions of these bots altered the behavior
of thousands of humans, changing them from, in Christakis’s words,
“generous people to selfish jerks.”

A now-famous study of 5.7 million Twitter users in the months before the 2016 US election found that 40,000 users retweeted politically charged Russian trolls (human users who intentionally provoke others) 80,000 times. Of 28,274 users who interacted with and retweeted Russian trolls the most, 892 were liberals and 27,382 were conservatives.

Among all the Twitter users included in the study, there was about an equal number of liberal- and conservative-leaning bots (basically nonhuman trolls). The study found that even a small number of bots—about 6% of users studied—can influence the national conversation if we let it, at least as it’s measured by tweets.

● In better news, Kevin Munger at New York University ran a study to see whether a gentle reminder from a compassionate bot might be able to make Twitter users nicer. His study showed that under the right circumstances, some humans can behave better after being chastened by a bot, as long as they respect or relate to the type of human being that the bot is imitating.

● There’s little doubt that as long as we don’t know it’s not human, AI in its
various forms can affect even our most humanlike behavior: how we treat
each other.

AI Changing the Definition of Being Human

● Is the definition of being human changing as technology becomes a larger presence in our lives?

● Most AI experts agree that we’re not looking at a future in which humanoid robots walk among us and we don’t know who is human and who is not. Instead, we will be working alongside machines, capitalizing on their strengths and ours. So what are those strengths? The answer is not quite as obvious as it once seemed, and our technological advances are already shaping what we think of as most human.

● What does it mean to be human—specifically, a living human? A pulse? Not anymore. As cardiologists have made incredible advances over the past few decades, patients with heart failure can live for another 10 years or more. Some are outfitted with a pump that hums rather than pulses, so if you check their wrist, you might not feel a thing. Listening to their heart, you hear the hum and not a beat. But surely they are no less human than they were before their hearts gave out.

● It must be the brain, then. But here, too, there is controversy as to what is
considered alive or dead. Every year or so there’s some story in the news
about family members who disagree with doctors or each other as to
whether their comatose loved one is still technically alive or if, with limited
brain function, the loved one is past the point of no return.

● In 1968, a committee of physicians at Harvard Medical School came up with an influential definition of brain death: unresponsiveness, lack of receptivity, absence of movement and breathing (presumably without help), and absence of brain stem reflexes.

● This definition harkens back to what the nervous system in any animal is for:
to sense the environment and act accordingly. Without the ability to sense, or
to react, the animal, human or otherwise, is no longer considered alive.

● If it were possible to keep the brain sensing and reacting with the aid of
implants or medications—what cardiology has done for the heart—would
that brain still be human?

● One way to think about this problem is to ask how much of a biological
brain we need to have in order to still be human. If we replace a few cells
with a chip, then it seems obvious that we’re still human. But what about
replacing an entire region? Or an entire hemisphere? What about 80% of
cells? Or 90%? What is the cutoff?

● If that seems to be an unanswerable question, it’s in part because consciousness is so difficult to define.

The Turing Test and Consciousness

● Alan Turing’s famous test of whether an AI is thinking involves interaction with a human, or at least convincing imitation. If a human interrogator cannot tell which of 2 responders is the human and which is the machine, the test has been passed.

● Turing himself did not use the word consciousness. He was only concerned
with whether the machine was capable of exhibiting intelligent behavior.
His view was that in order to assess whether a computer can think,
we need to define thinking and that we can only objectively do so by
evaluating actions. And there have since been machines that have passed
the Turing test by brute force and following rules, rather than thinking on
their own.

● Perhaps the Turing test is no longer sufficient or needs an update. But what
if we had a biomarker for consciousness—an objective signal that we could
measure? We do have something like this for human consciousness in the
form of brain imaging.


● Using either functional magnetic resonance imaging (fMRI) or electroencephalograms (EEG), neuroscientists have shown that you can see a specific pattern of activity when a patient, one who perhaps has locked-in syndrome and thus cannot move at all, can follow instructions regarding what to think about.

● For example, if you’re asked to imagine playing tennis when a neuroscientist says “go,” parts of your brain involved in voluntary motor control will become active, regardless of whether you actually move your muscles. Then, when the neuroscientist says “stop,” those parts of your brain stop being as active. Brain activity during “go” and “stop” commands can be compared, and if it’s different in the right ways, one could argue that you’re conscious because you’re able to direct your thoughts.
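
In sketch form, the comparison amounts to something like the following (the signal values are invented stand-ins for activity in motor areas; real analyses fit statistical models to whole-brain fMRI time series):

```python
# Hypothetical sketch: compare average motor-area activity during "go"
# blocks vs. "stop" blocks. All numbers are invented for illustration.
from statistics import mean

go_blocks   = [2.1, 1.8, 2.4, 2.0]  # imagined-tennis periods
stop_blocks = [0.3, 0.5, 0.2, 0.4]  # rest periods

difference = mean(go_blocks) - mean(stop_blocks)
print(f"go - stop = {difference:.2f}")

# A reliable, instruction-locked difference is the evidence that the
# patient can direct his or her thoughts on command.
if difference > 1.0:  # threshold chosen arbitrarily for the sketch
    print("activity tracks the instructions -> evidence of awareness")
```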

● Of course, it’s trivial to program an AI to respond differently to different commands. The difference is that presumably the conscious human mind can also direct its own thoughts independently, when it wants to, without having to respond to a command.

● What the neuroimaging findings, along with a wealth of other neuroscientific evidence, tell us is that there is a physical signature of conscious thought in the brain but that it’s distributed across regions and manifestations. If the sensory relay station in the middle of the brain, the thalamus, is damaged, then consciousness can be disrupted. But obliterating large swaths of the brain can have no influence on a person’s conscious awareness.

● Patients who have undergone a corpus callosotomy, a surgical procedure that cuts the connections between hemispheres to temper seizure activity, have right and left hemispheres that can’t exchange information. But they don’t experience a split consciousness to go with their split brains. Instead, they can find themselves baffled by some action that their right hemisphere initiated, as it seems that the integrated sense of self-awareness is dominated by the left hemisphere.

● What these patients demonstrate is that consciousness is only one of the steps—or perhaps even the last step—in what it feels like to be human. After all, the vast majority of our “thinking” happens outside of our awareness. We’re not conscious of the fact that the cells in our retina and along our visual processing stream first pull apart the visual world and then put it back together again, seemingly instantaneously and in parallel.

● There’s another problem with laying out the neural correlates of consciousness:
Different subjective experiences have different neural signatures. Your
brain activity will look one way when you’re imagining playing tennis and
another way when you’re contemplating what it means to be human. So far,
neuroscientists have failed to find the exact signature of subjective awareness or
any universally agreed-upon definition of consciousness.

● MIT physicist and AI futurist Max Tegmark argues that a conscious system needs to have the following qualities:

— It needs substantial information storage and processing capacity, though as he points out, even patients with severe amnesia, who are unable to store new information for more than a few seconds, still experience consciousness, so the storage doesn’t need to last very long.

— The system needs to be fairly independent from the rest of the world so that it can have a sense of its subjective awareness as being different from what surrounds it, though there is still room for embodiment, as long as the body is separable from the environment.

— The parts of the system need to be integrated; otherwise, it’s a series of separate conscious entities, not one whole.

● We’re a long way off from building an AI that has all of these
characteristics, but what Tegmark is arguing is that we don’t need the
biology of the brain to support consciousness. So the human with 99%
of brain matter replaced by computer chips remains conscious as long as
that human meets these conditions. This gradual-replacement thought
experiment has been used in debates between AI experts and among
neuroscientists, philosophers, and theoreticians of consciousness without a
compelling solution.

● Tegmark also argues that much of what feels to us like subjective awareness,
free will, or consciousness is actually the result of activity in the brain—
call it computations, processing, or whatever new metaphor the next
technological innovation will hand us—that occurs outside of our awareness.

● When you decide whether to buy a house, your decision is not a simple
analysis of pros and cons, the product of considered, rational, conscious,
deliberate thinking. It’s the end result of a whole series of prior events,
including emotional reactions and fast, automatic, intuitive thinking
influencing your deliberate thinking. You aren’t aware of many of these
influences. You come to a decision and then you have the subjective
experience of having gotten there entirely of your own free will.

● Sure, it’s your brain doing the computations (or whatever you want to call them), so in that sense, you do own the decision. But it’s not entirely free. In the most sophisticated AI systems, we don’t know the outcome until we let the AI system run through the computations. In Tegmark’s argument, the computation is the decision, and subjective experience is how the computing feels from the inside. So perhaps AI already knows what it’s like to be an AI.

By virtue of comparing ourselves with artificial intelligence and other technological tools, we learn both how we’re the same and how we’re different—what’s uniquely human (for now) and what is not. But perhaps even more importantly, this comparison gives us a chance to figure out who we want to be in addition to who we are: how we want to spend our time, if we have the choice, and what kind of future we hope to build for our species.

Questions to Consider

1 As we merge with computers, where do we begin and end?

2 How much of the brain could you replace with machines and still be considered human?

3 If we could upload our consciousness, how would that change how we think about the soul, mortality, time, and identity?

Bibliography

Recommended Books

Botsman, Rachel. Who Can You Trust? How Technology Brought Us Together
and Why It Might Drive Us Apart. PublicAffairs, 2017.

Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. W. W.
Norton, 2010.

Fry, Hannah. Hello World: Being Human in the Age of Algorithms. W. W. Norton, 2018.

Greenfield, Susan. Mind Change: How Digital Technologies Are Leaving Their
Mark on Our Brains. Random House, 2015.

Kasparov, Garry. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. PublicAffairs, 2017.

Lee, Kai-Fu. AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt, 2018.

McCulloch, Gretchen. Because Internet: Understanding How Language Is Changing. Harvill Secker/Vintage, 2019.

Newport, Cal. Deep Work: Rules for Focused Success in a Distracted World.
Grand Central, 2016.

Tapscott, Don, and Alex Tapscott. Blockchain Revolution: How the Technology
behind Bitcoin Is Changing Money, Business, and the World. Reprint ed.
Portfolio, 2018.

Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence.
Vintage, 2017.

Wolf, Maryanne. Proust and the Squid: The Story and Science of the Reading Brain. Reprint ed. Harper Perennial, 2008.

———. Reader, Come Home: The Reading Brain in a Digital World.
Harper, 2018.

Wu, Tim. The Attention Merchants: The Epic Scramble to Get Inside Our
Heads. Knopf, 2016.

Recommended Articles

Allcott, H., and M. Gentzkow. “Social Media and Fake News in the 2016
Election.” Journal of Economic Perspectives 31, no. 2 (2017): 211–236.

American Psychological Association. “Resolution on Violence in Video Games
and Interactive Media.” August 2005, https://www.apa.org/about/policy/
interactive-media.pdf.

Amodio, D. M., J. T. Jost, S. L. Master, and C. M. Yee. “Neurocognitive
Correlates of Liberalism and Conservatism.” Nature Neuroscience, 2007,
https://doi.org/10.1038/nn1979.

Anderson, C. A., B. J. Bushman, B. D. Bartholow, J. Cantor, D. Christakis, S.
M. Coyne, et al. “Screen Violence and Youth Behavior.” Pediatrics 140, no.
2 (2017). https://doi.org/10.1542/peds.2016-1758F.

Anderson, D. R., and K. Subrahmanyam. “Digital Screen Media and
Cognitive Development.” Pediatrics 140, no. 2 (2017). https://doi.
org/10.1542/peds.2016-1758C.

Angwin, J., J. Larson, S. Mattu, and L. Kirchner. “Machine Bias.” ProPublica,
May 23, 2016, https://www.propublica.org/article/machine-bias-risk-
assessments-in-criminal-sentencing.

APA Task Force on Violent Media. Technical Report on the Review of the
Violent Video Game Literature. August 2015. https://www.apa.org/pi/
families/review-video-games.pdf.

Bandura, A., D. Ross, and S. A. Ross. “Transmission of Aggression through
Imitation of Aggressive Models.” Journal of Abnormal and Social Psychology
63, no. 3 (1961): 575–582.

Barasch, A., G. Zauberman, and K. Diehl. “How the Intention to Share
Can Undermine Enjoyment: Photo-Taking Goals and Evaluation of
Experiences.” Journal of Consumer Research 44, no. 6 (2018): 1220–1237.

Berkowitz, L. “A Cognitive-Neoassociation Theory of Aggression.” The
Handbook of Theories of Social Psychology (2011): 99–117.

Botella, C., B. Serrano, R. M. Banos, and A. Garcia-Palacios. “Virtual
Reality Exposure-Based Therapy for the Treatment of Post-Traumatic
Stress Disorder: A Review of Its Efficacy, the Adequacy of the Treatment
Protocol, and Its Acceptability.” Neuropsychiatric Disease and Treatment 11
(2015): 2533–2545. https://doi.org/10.2147/NDT.S89542.

Bushman, B. J., and C. A. Anderson. “Violent Video Games and Hostile
Expectations: A Test of the General Aggression Model.” Personality and
Social Psychology Bulletin 28, no. 12 (2002): 1679–1686.

Bushman, B. J., and R. Huesmann. “Short-Term and Long-Term Effects of
Violent Media on Aggression in Children and Adults.” Archives of Pediatrics
& Adolescent Medicine 160, no. 4 (2006): 348–352.

Campolo, A., M. Sanfilippo, M. Whittaker, and K. Crawford. “AI Now 2017
Report.” AI Now Institute, 2017, https://ainowinstitute.org/AI_Now_2017_
Report.pdf.

Cisek, P. “Beyond the Computer Metaphor: Behavior as Interaction.” Journal
of Consciousness Studies 6, no. 11–12 (1999): 125–142.

Cukier, K., and V. Mayer-Schoenberger. “The Rise of Big Data: How It’s
Changing the Way We Think about the World.” Foreign Affairs 92, no. 3
(2013): 28–40.

Doring, N. M. “The Internet’s Impact on Sexuality: A Critical Review of 15
Years of Research.” Computers in Human Behavior 25 (2009): 1089–1101.

Dunbar, R. I. M. “Do Online Social Media Cut through the Constraints
That Limit the Size of Offline Social Networks?” Royal Society Open Science
3 (2016): 150292.

Ekeland, A. G., A. Bowes, and S. Flottorp. “Effectiveness of Telemedicine: A
Systematic Review of Reviews.” International Journal of Medical Informatics
79, no. 11 (2010): 736–771.

Elpidorou, A. “The Bright Side of Boredom.” Frontiers in Psychology,
November 3, 2014, https://doi.org/10.3389/fpsyg.2014.01245.

Falk, E. B., S. A. Morelli, B. L. Welborn, K. Dambacher, and M. D.
Lieberman. “Creating Buzz: The Neural Correlates of Effective Message
Propagation.” Psychological Science 24, no. 7 (2013): 1234–1242.

Finkel, E. J., P. W. Eastwick, B. R. Karney, H. T. Reis, and S. Sprecher.
“Online Dating: A Critical Analysis from the Perspective of Psychological
Science.” Psychological Science in the Public Interest 13, no. 1 (2012): 3–66.
https://doi.org/10.1177/1529100612436522.

Garavan, H., J. Pankiewicz, A. Bloom, J. K. Cho, L. Sperry, T. J. Ross, et al.
“Cue-Induced Cocaine Craving: Neuroanatomical Specificity for Drug
Users and Drug Stimuli.” American Journal of Psychiatry 157, no. 11 (2000):
1789–1798.

Gentile, B., J. M. Twenge, E. C. Freeman, and W. K. Campbell. “The Effect
of Social Networking Websites on Positive Self-Views: An Experimental
Investigation.” Computers in Human Behavior 28, no. 5 (2012): 1929–1933.

Gentile, D. A., C. A. Anderson, S. Yukawa, N. Ihori, M. Saleem, L. K. Ming,
et al. “The Effects of Prosocial Video Games on Prosocial Behaviors:
International Evidence from Correlational, Longitudinal, and Experimental
Studies.” Personality and Social Psychology Bulletin 35, no. 6 (2009):
752–763.

Gentile, D. A., K. Bailey, D. Bavelier, J. Funk Brockmyer, H. Cash, S. M.
Coyne, et al. “Internet Gaming Disorder in Children and Adolescents.”
Pediatrics 140, no. 2 (2017). https://doi.org/10.1542/peds.2016-1758H.

Gilkerson, J., J. A. Richards, S. F. Warren, J. K. Montgomery, C. R.
Greenwood, D. Kimbrough Oller, et al. “Mapping the Early Language
Environment Using All-Day Recordings and Automated Analysis.”
American Journal of Speech-Language Pathology 26, no. 2 (2017): 248–265.

Greitemeyer, T., and D. O. Mügge. “Video Games Do Affect Social
Outcomes: A Meta-Analytic Review of the Effects of Violent and Prosocial
Video Game Play.” Personality and Social Psychology Bulletin 40, no. 5
(2014): 578–589. https://doi.org/10.1177/0146167213520459.

Guadagno, R. E., B. M. Okdie, and S. A. Kruse. “Dating Deception: Gender,
Online Dating, and Exaggerated Self-Presentation.” Computers in Human
Behavior 28, no. 2 (2012): 642–647.

Hald, G. M., N. M. Malamuth, and C. Yuen. “Pornography and Attitudes
Supporting Violence against Women: Revisiting the Relationship in Non-
Experimental Studies.” Aggressive Behavior 36, no. 1 (2010): 14–20.

Hawley Turner, K., T. Jolls, M. S. Hagerman, W. O’Byrne, T. Hicks, B.
Eisenstock, and K. E. Pytash. “Developing Digital and Media Literacies
in Children and Adolescents.” Pediatrics 140, no. 2 (2017). https://doi.
org/10.1542/peds.2016-1758P.

Henkel, L. A. “Point-and-Shoot Memories: The Influence of Taking Photos
on Memory for a Museum Tour.” Psychological Science 25, no. 2 (2014):
396–402.

Hesse, B. W. “The Patient, the Physician, and Dr. Google.” AMA Journal of
Ethics 14, no. 5 (2012): 398–402.

Hilton, D. L. “Pornography Addiction: A Supranormal Stimulus Considered
in the Context of Neuroplasticity.” Socioaffective Neuroscience & Psychology
3, no. 1 (2013): 20767. https://doi.org/10.3402/snp.v3i0.20767.

Hoffman, H. G. “Virtual-Reality Therapy.” Scientific American, August 2004,
https://www.scientificamerican.com/article/virtual-reality-therapy/.

Hoge, E., D. Bickham, and J. Cantor. “Digital Media, Anxiety, and
Depression in Children.” Pediatrics 140, no. 2 (2017). https://doi.
org/10.1542/peds.2016-1758G.

Hsee, C. K., and F. Leclerc. “Will Products Look More Attractive When
Presented Separately or Together?” Journal of Consumer Research 25, no. 2
(1998): 175–186.

Hyler, S. E., D. P. Gangure, and S. T. Batchelder. “Can Telepsychiatry
Replace In-Person Psychiatric Assessments? A Review and Meta-Analysis
of Comparison Studies.” CNS Spectrums 10, no. 5 (2005): 403–413.

Iacoboni, M., J. Freedman, and J. Kaplan. “This Is Your Brain on Politics.”
The New York Times, 2007, https://www.nytimes.com/2007/11/11/
opinion/11freedman.html.

James, C., K. Davis, L. Charmaraman, S. Konrath, P. Slovak, E. Weinstein,
and L. Yarosh. “Digital Life and Youth Well-Being, Social Connectedness,
Empathy, and Narcissism.” Pediatrics 140, no. 2 (2017). https://doi.
org/10.1542/peds.2016-1758F.

Jonas, E., and K. P. Kording. “Could a Neuroscientist Understand a
Microprocessor?” PLOS Computational Biology, January 12, 2017,
https://doi.org/10.1371/journal.pcbi.1005268.

Jost, J. T., H. H. Nam, D. M. Amodio, and J. J. Van Bavel. “Political
Neuroscience: The Beginning of a Beautiful Friendship.” Advances in
Political Psychology 35, no. 1 (2014). https://doi.org/10.1111/pops.12162.

Kanai, R., T. Feilden, C. Firth, and G. Rees. “Political Orientations Are
Correlated with Brain Structure in Young Adults.” Current Biology 21, no.
8 (2011). https://doi.org/10.1016/j.cub.2011.03.017.

Kleim, B., J. Wysokowsky, N. Schmid, E. Seifritz, and B. Rasch. “Effects of
Sleep after Experimental Trauma on Intrusive Emotional Memories.” Sleep
39, no. 12 (2016): 2125–2132.

Konrath, S. H., E. H. O’Brien, and C. Hsing. “Changes in Dispositional
Empathy in American College Students over Time: A Meta-Analysis.”
Personality and Social Psychology Review, August 5, 2010, https://doi.
org/10.1177/1088868310377395.

Kraut, R., S. Kiesler, B. Boneva, J. Cummings, V. Helgeson, and A.
Crawford. “Internet Paradox Revisited.” Journal of Social Issues 58, no. 1
(2002): 49–74.

Kross, E., P. Verduyn, E. Demiralp, J. Park, D. Seungjae Lee, N. Lin,
et al. “Facebook Use Predicts Declines in Subjective Well-Being in
Young Adults.” PLOS ONE 8, no. 8 (2013). https://doi.org/10.1371/journal.
pone.0069841.

Kühn, S., and J. Gallinat. “Brain Structure and Functional Connectivity
Associated with Pornography Consumption: The Brain on Porn.” JAMA
Psychiatry 71, no. 7 (2014): 827–834.

Lafon, B., S. Henin, Y. Huang, D. Friedman, L. Melloni, T. Thesen, et al.
“Low Frequency Transcranial Electrical Stimulation Does Not Entrain
Sleep Rhythms Measured by Human Intracranial Recordings.” Nature
Communications 8 (2017): 1199. https://doi.org/10.1038/s41467-017-
01045-x.

Landripet, I., and A. Štulhofer. “Is Pornography Use Associated with Sexual
Difficulties and Dysfunctions among Younger Heterosexual Men?” The
Journal of Sexual Medicine 12 (2015): 1136–1139.

Lapierre, M. A., F. Fleming-Milici, E. Rozendaal, A. R. McAlister, and J.
Castonguay. “The Effect of Advertising on Children and Adolescents.”
Pediatrics 140, no. 2 (2017). https://doi.org/10.1542/peds.2016-1758V.

Laumann, E. O., A. Paik, and R. C. Rosen. “Sexual Dysfunction in the
United States: Prevalence and Predictors.” JAMA 281 (1999): 537–544.

LeBourgeois, M. K., L. Hale, A. M. Chang, L. D. Akacem, H. E.
Montgomery-Downs, and O. M. Buxton. “Digital Media and Sleep in
Childhood and Adolescence.” Pediatrics 140, no. 2 (2017). https://doi.
org/10.1542/peds.2016-1758J.

Lu, J. K., A. C. Hafenbrack, P. W. Eastwick, D. J. Wang, W. W. Maddux,
and A. D. Galinsky. “‘Going Out’ of the Box: Close Intercultural
Friendships and Romantic Relationships Spark Creativity, Workplace
Innovation, and Entrepreneurship.” Journal of Applied Psychology 102, no.
7 (2017): 1091–1108.

Manyika, J., M. Chui, M. Miremadi, J. Bughin, K. George, P. Willmott,
and M. Dewhurst. “A Future That Works: Automation, Employment, and
Productivity.” McKinsey & Company, January 2017, https://www.mckinsey.
com/~/media/McKinsey/Featured%20Insights/Digital%20Disruption/
Harnessing%20automation%20for%20a%20future%20that%20works/
MGI-A-future-that-works_Full-report.ashx.

Mar, R. A., K. Oatley, and J. B. Peterson. “Exploring the Link between
Reading Fiction and Empathy: Ruling Out Individual Differences and
Examining Outcomes.” Communications 34 (2009): 407–428. https://doi.
org/10.1515/COMM.2009.025.

McGlashan, H. L., C. C. V. Blanchard, N. J. Sycamore, R. Lee, B. French,
and N. P. Holmes. “Improvement in Children’s Fine Motor Skills following
a Computerized Typing Intervention.” Human Movement Science 56 (2017):
29–36.

Middaugh, E., L. Schofield Clark, and P. J. Ballard. “Digital Media,
Participatory Politics, and Positive Youth Development.” Pediatrics 140, no.
2 (2017). https://doi.org/10.1542/peds.2016-1758Q.

Negash, S., N. Van Ness Sheppard, N. M. Lambert, and F. D. Fincham.
“Trading Later Rewards for Current Pleasure: Pornography Consumption
and Delay Discounting.” The Journal of Sex Research 53, no. 6 (2016):
698–700.

Nestler, E. J., M. Barrot, and D. W. Self. “DeltaFosB: A Sustained Molecular
Switch for Addiction.” Proceedings of the National Academy of Sciences of the
United States of America 98, no. 20 (2001): 11042–11046.

O’Sullivan, L. F., L. A. Brotto, E. S. Byers, J. A. Majerovich, and J. A. Wuest.
“Prevalence and Characteristics of Sexual Functioning among Sexually
Experienced Middle to Late Adolescents.” The Journal of Sexual Medicine 11
(2014): 630–641.

Ophir, E., C. Nass, and A. D. Wagner. “Cognitive Control in Media
Multitaskers.” Proceedings of the National Academy of Sciences of the United
States of America 106, no. 37 (2009): 15583–15587.

Parsons, T. D., G. Riva, S. Parsons, F. Mantovani, N. Newbutt, L. Lin, et
al. “Virtual Reality in Pediatric Psychology.” Pediatrics 140, no. 2 (2017).
https://doi.org/10.1542/peds.2016-1758I.

Kraut, R., M. Patterson, V. Lundmark, S. Kiesler, T. Mukopadhyay, and
W. Scherlis. “Internet Paradox: A Social Technology That Reduces Social
Involvement and Psychological Well-Being?” American Psychologist 53, no. 9
(1998): 1017–1031.

Payne, S. J., G. B. Duggan, and H. Neth. “Discretionary Task Interleaving:
Heuristics for Time Allocation in Cognitive Foraging.” Journal of
Experimental Psychology: General 136, no. 3 (2007): 370–388.

Pew Research Center. “Mobile Fact Sheet.” https://www.pewresearch.org/
internet/fact-sheet/mobile/.

Pitchers, K. K., K. S. Frohmader, V. Vialou, E. Mouzon, E. J. Nestler, M.
N. Lehman, and L. M. Coolen. “DeltaFosB in the Nucleus Accumbens Is
Critical for Reinforcing Effects of Sexual Reward.” Genes, Brain and
Behavior 9, no. 7 (2010): 831–840.

Polman, H., B. Orobio de Castro, and M. A. G. Van Aken. “Experimental
Study of the Differential Effects of Playing versus Watching Violent Video
Games on Children’s Aggressive Behavior.” Aggressive Behavior 34, no. 3
(2007). https://doi.org/10.1002/ab.20245.

Primack, B. A., A. Shensa, C. G. Escobar-Viera, E. Barrett, J. E. Sidani, J.
Colditz, and A. E. James. “Use of Multiple Social Media Platforms and
Symptoms of Depression and Anxiety: A Nationally-Representative Study
among U.S. Young Adults.” Computers in Human Behavior 69 (2017): 1–9.
https://doi.org/10.1016/j.chb.2016.11.013.

Przybylski, A. K., and N. Weinstein. “Violent Video Game Engagement Is
Not Associated with Adolescents’ Aggressive Behaviour: Evidence from a
Registered Report.” Royal Society Open Science, February 13, 2019, https://
doi.org/10.1098/rsos.171474.

Ralph, B. C., D. R. Thomson, J. A. Cheyne, and D. Smilek. “Media
Multitasking and Failures of Attention in Everyday Life.” Psychological
Research 78, no. 5 (2014): 661–669.

Ralph, B. C., D. R. Thomson, P. Seli, J. S. Carriere, and D. Smilek. “Media
Multitasking and Behavioral Measures of Sustained Attention.” Attention,
Perception, & Psychophysics 77, no. 2 (2015): 390–401.

Rideout, V. “The Common Sense Census: Media Use by Kids Age Zero to
Eight.” Common Sense Media, 2017, https://www.commonsensemedia.
org/research/the-common-sense-census-media-use-by-kids-age-zero-to-
eight-2017.

Robinson, T. N., J. A. Banda, L. Hale, A. Shirong Lu, F. Fleming-Milici, S. L.
Calvert, and E. Wartella. “Screen Media Exposure and Obesity in Children
and Adolescents.” Pediatrics 140, no. 2 (2017). https://doi.org/10.1542/
peds.2016-1758K.

Romer, D., and M. Moreno. “Digital Media and Risks for Adolescent
Substance Abuse and Problematic Gambling.” Pediatrics 140, no. 2 (2017).
https://doi.org/10.1542/peds.2016-1758L.

Roose, K. “The Making of a YouTube Radical.” The New York Times, 2019,
https://www.nytimes.com/interactive/2019/06/08/technology/youtube-
radical.html.

Rothman, J. “Are We Already Living in Virtual Reality?” The New Yorker,
April 2, 2018, https://www.newyorker.com/magazine/2018/04/02/are-we-
already-living-in-virtual-reality.

Saleem, M., C. A. Anderson, and D. A. Gentile. “Effects of Prosocial,
Neutral, and Violent Video Games on College Students’ Affect.” Aggressive
Behavior 38 (2012): 263–271.

Scharf, C. “In Defense of Metaphors in Science Writing.” Scientific American,
July 9, 2013, https://blogs.scientificamerican.com/life-unbounded/in-
defense-of-metaphors-in-science-writing/.

Semigran, H. L., J. A. Linder, C. Gidengil, and A. Mehrotra. “Evaluation of
Symptom Checkers for Self Diagnosis and Triage: Audit Study.” BMJ 351
(2015). https://doi.org/10.1136/bmj.h3480.

Skitka, L. J., K. L. Mosier, and M. Burdick. “Does Automation Bias Decision
Making?” International Journal of Human-Computer Studies 51 (1999):
991–1006.

Small, G. W., T. D. Moody, P. Siddarth, and S. Y. Bookheimer. “Your Brain
on Google: Patterns of Cerebral Activation during Internet Searching.”
American Journal of Geriatric Psychiatry 17, no. 2 (2008): 116–126.

Smith, A., and M. Anderson. “Automation in Everyday Life.” Pew Research
Center, October 4, 2017, https://www.pewresearch.org/internet/2017/10/04/
automation-in-everyday-life/.

Soares, J. S., and B. Storm. “Forget in a Flash: A Further Investigation of the
Photo-Taking-Impairment Effect.” Journal of Applied Research in Memory
and Cognition 7, no. 1 (2018): 154–160.

Storm, B. C., and S. M. Stone. “Saving-Enhanced Memory: The Benefits
of Saving on the Learning and Remembering of New Information.”
Psychological Science, 2015, https://doi.org/10.1177/0956797614559285.

Tang, H. Y., S. M. McCurry, B. Riegel, K. C. Pike, and M. V. Vitiello.
“Open-Loop Audiovisual Stimulation Induces Delta EEG Activity in Older
Adults with Osteoarthritis Pain and Insomnia.” Biological Research for
Nursing 21, no. 3 (2019): 307–317.

Taylor, C., and B. M. Dewsbury. “On the Problem and Promise of Metaphor
Use in Science and Science Communication.” Journal of Microbiology and
Biology Education 19, no. 1 (2018). https://doi.org/10.1128/jmbe.v19i1.1538.

Teachout, T. “Almanac: Aldous Huxley on TV and the Death of
the Amateur.” Arts Journal, 2018, https://www.artsjournal.com/
aboutlastnight/2018/07/almanac-aldous-huxley-on-tv-and-the-death-of-the-
amateur.html.

Uhls, Y. T., N. B. Ellison, and K. Subrahmanyam. “Benefits and Costs of
Social Media in Adolescence.” Pediatrics 140, no. 2 (2017). https://doi.
org/10.1542/peds.2016-1758E.

Uncapher, M. R., L. Lin, L. D. Rosen, H. L. Kirkorian, N. S. Baron, K.
Bailey, et al. “Media Multitasking and Cognitive, Psychological, Neural,
and Learning Differences.” Pediatrics 140, no. 2 (2017). https://doi.
org/10.1542/peds.2016-1758D.

Warraich, H. “Dr. Google Is a Liar.” The New York Times, 2018, https://www.
nytimes.com/2018/12/16/opinion/statin-side-effects-cancer.html.

Wastlund, E., T. Norlander, and T. Archer. “Internet Blues Revisited:
Replication and Extension of an Internet Paradox Study.” Cyberpsychology
and Behavior 4, no. 3 (2001): 385–391.

Westen, D., P. S. Blagov, K. Harenski, C. Kilts, and S. Hamann. “Neural
Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints
on Partisan Political Judgment in the 2004 U.S. Presidential Election.”
Journal of Cognitive Neuroscience 18, no. 11 (2006): 1947–1958.

Willoughby, T., P. J. C. Adachi, and M. Good. “A Longitudinal Study of
the Association between Violent Video Game Play and Aggression among
Adolescents.” Developmental Psychology 48, no. 4 (2012): 1044–1057.

Image Credits

3: SB/iStock/Getty Images; 8: Hayri Er/E+/Getty Images; 14:
metamorworks/iStock/Getty Images; 21: shapecharge/iStock/Getty Images;
29: Erikona/iStock/Getty Images; 42: simonapilolla/iStock/Getty Images;
51: shironosov/iStock/Getty Images; 58: andresr/E+/Getty Images; 65:
Milan_Jovic/E+/Getty Images; 78: amenic181/iStock/Getty Images Plus;
85: SDI Productions/E+/Getty Images; 93: monkeybusinessimages/iStock/
Getty Images; 102: skynesher/E+/Getty Images; 114: Ridofranz/iStock/
Getty Images; 118: monzenmachi/E+/Getty Images; 127: Tero Vesalainen/
iStock/Getty Images; 141: EmirMemedovski/E+/Getty Images; 146:
Grinvalds/iStock/Getty Images Plus; 153: recep-bg/E+/Getty Images;
161: goc/iStock/Getty Images; 171: FatCamera/E+/Getty Images; 181:
domoyega/E+/Getty Images; 192: Srdjanns74/iStock/Getty Images; 19:
PhonlamaiPhoto/iStock/Getty Images; 209: yodiyim/iStock/Getty Images
