

Article

Consumers and Artificial Intelligence: An Experiential Perspective

Journal of Marketing, 1-21
© American Marketing Association 2020
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0022242920953847
journals.sagepub.com/home/jmx

Stefano Puntoni, Rebecca Walker Reczek, Markus Giesler, and Simona Botti

Stefano Puntoni is Professor of Marketing, Rotterdam School of Management, Erasmus University, the Netherlands (email: spuntoni@rsm.nl). Rebecca Walker Reczek is Dr. H. Lee "Buck" Mathews Professor of Marketing, The Ohio State University, USA (email: reczek.3@osu.edu). Markus Giesler is Associate Professor of Marketing, York University, Canada (email: mgiesler@schulich.yorku.ca). Simona Botti is Professor of Marketing, London Business School, UK (email: sbotti@london.edu).

Editor’s Note: This article is part of the JM-MSI Special Issue on “From Marketing Priorities to Research Agendas,” edited by John
A. Deighton, Carl F. Mela, and Christine Moorman. Written by teams led by members of the inaugural class of MSI Scholars, these
articles review the literature on an important marketing topic reflected in the MSI Priorities and offer an expansive research agenda
for the marketing discipline. A list of articles appearing in the Special Issue can be found at http://www.ama.org/JM-MSI-2020.

Abstract
Artificial intelligence (AI) helps companies offer important benefits to consumers, such as health monitoring with wearable
devices, advice with recommender systems, peace of mind with smart household products, and convenience with voice-activated
virtual assistants. However, although AI can be seen as a neutral tool to be evaluated on efficiency and accuracy, this approach
does not consider the social and individual challenges that can occur when AI is deployed. This research aims to bridge these two
perspectives: on one side, the authors acknowledge the value that embedding AI technology into products and services can
provide to consumers. On the other side, the authors build on and integrate sociological and psychological scholarship to examine
some of the costs consumers experience in their interactions with AI. In doing so, the authors identify four types of consumer
experiences with AI: (1) data capture, (2) classification, (3) delegation, and (4) social. This approach allows the authors to discuss
policy and managerial avenues to address the ways in which consumers may fail to experience value in organizations’ investments
into AI and to lay out an agenda for future research.

Keywords
artificial intelligence, AI, customer experience, technology marketing, privacy, discrimination, replacement, alienation

Not long ago, artificial intelligence (AI) was the stuff of science fiction. Now it is changing how consumers eat, sleep, work, play, and even date. Consider the diversity of interactions consumers might have with AI throughout the day, from Fitbit's fitness tracker and Alibaba's Tmall Genie smart speaker to Google Photos' editing suggestions and Spotify's music playlists. Given the growing ubiquity of AI in consumers' lives, marketers operate in organizations with a culture increasingly shaped by computer science. Software developers' objective of creating technical excellence, however, may not naturally align with marketers' objective of creating valued consumer experiences. For example, computer scientists often characterize algorithms as neutral tools evaluated on efficiency and accuracy (Green and Viljoen 2020), an approach that may overlook the social and individual complexities of the contexts in which AI is increasingly deployed. Thus, whereas AI can improve consumers' lives in very concrete and relevant ways, a failure to incorporate behavioral insight into technological developments may undermine consumers' experiences with AI.

This article aims to bridge these two perspectives: on one side, we acknowledge the benefits that AI can provide to consumers. On the other side, we build on and integrate sociological and psychological scholarship to examine the costs consumers can experience in their interactions with AI. Exposing the tension between these benefits and costs, we offer recommendations to guide managers and scholars investigating these challenges. In so doing, we respond to the call from the Marketing Science Institute to examine "the role of the human/tech interface in marketing strategy" and to offer more scholarly attention to situations where "customers face an array of new devices with which to interact with firms, fundamentally altering the purchase experience" (Marketing Science Institute 2018).

We begin by offering a framework that conceptualizes AI as an ecosystem with four capabilities. We focus on the consumer experience of these capabilities, including the tensions felt. We then offer more insights into the experience of these tensions at a macro level by exposing relevant and often explosive narratives in the sociological context and at the micro level by illustrating them with real-life examples grounded in relevant psychological literature. Using these insights, we provide marketers with recommendations regarding how to learn about and manage the tensions. Paralleling the joint emphasis on social and individual responses, we make recommendations outlining both the organizational learning in which firms should engage to lead the deployment of consumer AI and the concrete steps they should take to design improved consumer AI experiences. We close with a research agenda that cuts across the four consumer experiences and suggests ideas for how researchers might contribute new knowledge on this important topic.

Understanding the Consumer AI Experience

We conceptualize AI as an ecosystem comprising three fundamental elements—data collection and storage, statistical and computational techniques, and output systems—that enable products and services to perform tasks typically understood as requiring intelligence and autonomous decision making on behalf of humans (Agrawal, Gans, and Goldfarb 2018). These elements are associated with capabilities (i.e., listening, predicting, producing, and communicating). Data collection devices listen in the broad sense of gathering information from different sources; for example, product sensors scan the environment, and wearable devices record physical activity. Algorithms leverage this information to predict; for example, Spotify serves music suggestions through personalized playlists. Finally, output systems produce a response or communicate with consumers, for example by directing a vehicle or responding through consumer interfaces like Baidu's Duer.
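To make the three-element ecosystem concrete, the sketch below renders the listen-predict-communicate loop in schematic Python. It is purely illustrative and assumes nothing about any vendor's architecture; all names (SensorReading, ListeningStore, predict_top_signal) are hypothetical.

```python
# A toy rendering of the three-element AI ecosystem described above:
# (1) data collection and storage, (2) statistical and computational
# techniques, and (3) output systems. All names are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class SensorReading:                 # element 1: data collection
    user_id: str
    signal: str                      # e.g., a song played, steps walked

class ListeningStore:                # element 1: data storage
    def __init__(self):
        self.readings = []

    def capture(self, reading: SensorReading):
        self.readings.append(reading)

def predict_top_signal(store: ListeningStore, user_id: str) -> str:
    # element 2: a deliberately trivial "predictive" technique --
    # suggest whatever signal the user has emitted most often
    counts = Counter(r.signal for r in store.readings if r.user_id == user_id)
    return counts.most_common(1)[0][0] if counts else "default"

def communicate(user_id: str, prediction: str) -> str:
    # element 3: the output system that responds to the consumer
    return f"Suggestion for {user_id}: more {prediction}"

store = ListeningStore()
for signal in ["jazz", "jazz", "rock"]:
    store.capture(SensorReading("anna", signal))
print(communicate("anna", predict_top_signal(store, "anna")))
# -> Suggestion for anna: more jazz
```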
To articulate a customer-centric view of AI, we move attention away from the technology toward how the AI capabilities are experienced by consumers. "Consumer experience" relates to the interactions between the consumer and the company during the customer journey and encompasses multiple dimensions: emotional, cognitive, behavioral, sensorial, and social (Brakus, Schmitt, and Zarantonello 2009; Lemon and Verhoef 2016). Our framework is built on four experiences that reflect how consumers interact with the four AI capabilities (Figure 1). This experiential perspective helps shed light on the affective and symbolic aspects of technology consumption in addition to the utilitarian and functional ones (Mick and Fournier 1998). "Data capture" is the experience of endowing individual data to AI, "classification" is the experience of receiving AI's personalized predictions, "delegation" is the experience of engaging in production processes where the AI performs some tasks on behalf of the consumer, and "social" is the experience of interactive communication with an AI partner.

[Figure 1. The consumer AI experience. Each quadrant pairs an AI capability (listening, predicting, producing, interacting) with a consumer experience (data capture, classification, delegation, social), the sociological context that frames it (surveillance society, unequal worlds, transhumanism, humanized AI), and the psychological tension it creates (served/exploited, understood/misunderstood, empowered/replaced, connected/alienated); uncharted experiences sit between adjacent quadrants.]

For each experience, we identify benefits and costs from a consumer perspective and propose that managers qualify their focus on the former by paying attention to the latter: a data capture experience may serve or exploit consumers, a classification experience may understand or misunderstand them, a delegation experience may empower or replace consumers, and a social experience may connect or alienate them. We next examine each of these experiences, their social science connections, managerial implications, and future research directions.

The AI Data Capture Experience

The listening capability enables AI systems to collect data about consumers and the environment in which they live. We conceptualize the resulting experience as "data capture," which includes the different ways in which data are transferred to the AI. Data can be intentionally provided by consumers, albeit with different degrees of understanding of the process: consumers share data when there is little or no uncertainty about how the data will be used and by whom, or consumers surrender data when this uncertainty is high (Walker 2016). Data can also be obtained by AI from the "shadows" consumers leave behind when they engage in daily activities, as in the case of a shopper perusing a store equipped with facial recognition technology or of an iRobot Roomba creating a map of a residential space (Kuniavsky 2010).

The data capture experience provides benefits to consumers because it can make them feel as if they are served by the AI: the provision of personal data allows consumers access to customized services, information, and entertainment, often for free. For example, consumers who install the Google Photos app let Google capture their memories but in return get an AI-powered assistant that suggests context-sensitive actions when viewing photos. Access to customized services also implies that consumers can enjoy the outcome of decisions made by digital assistants, which effectively match personal preferences with available options without having to endure the cognitive and affective fatigue that decision making can entail (André et al. 2018). Finally, access to customized services offers unprecedented opportunities for self-improvement. Consider one of the projects within Alphabet, in which data from smartphones, genomes, wearables, and ambient sensors are combined to drive personalized health care (Kuchler 2020).

because they do not understand AI’s operating criteria. This monitoring of human behavior. Stories such as George
can be attributed to several features of AI. First, the modalities Orwell’s 1984 or Philip K. Dick’s Minority Report envision
of data acquisition are becoming increasingly intrusive and systems of oppression in which, due to lack of privacy and
difficult to avoid. Second, even when consumers intentionally constant surveillance, people can no longer control their des-
share information, they are not aware of how this information tiny. This dystopian imagination is echoed in sociological
is aggregated over time and across contexts. Finally, data scholarship that associates data capture with the rise of a
brokers are largely unregulated and often lack transparency capitalist marketplace in which private information becomes
and accountability (Grafanaki 2017). As a result, data capture the central form of capital (Zuboff 2019).
experiences may threaten consumers’ ownership of personal Such dystopian concerns strike a resonant chord when
data and challenge personal control, that is, the feeling that considering Google’s move in the early 2000s to transform
events are determined by the self rather than by others or by consumer data from a by-product into an economic asset
external forces and can be stirred toward desired outcomes that formed the basis of a new type of commerce driven
(DeCharms 1968). We examine the consequences of this loss by the ability to colonize the consumer’s private experience.
of control next from both a sociological and psychological This commerce contributes to a surveillance marketplace, in
perspective. which data surplus is “fed into advanced manufacturing
processes known as ‘machine intelligence’ and fabricated
into prediction products that anticipate what you will do
Sociological Context: The Surveillance Society Narrative now, soon, and later” (Zuboff 2019, p. 14, italics in the
In popular culture, lack of ownership over personal data has original). To illustrate the power of this commerce, targeted
been frequently associated with a loss of personal control ads based on personality characteristics inferred from the
stemming from technology’s threatening potential to enable analysis of Facebook likes in combination with online
Such dystopian concerns strike a resonant chord when considering Google's move in the early 2000s to transform consumer data from a by-product into an economic asset that formed the basis of a new type of commerce driven by the ability to colonize the consumer's private experience. This commerce contributes to a surveillance marketplace, in which data surplus is "fed into advanced manufacturing processes known as 'machine intelligence' and fabricated into prediction products that anticipate what you will do now, soon, and later" (Zuboff 2019, p. 14, italics in the original). To illustrate the power of this commerce, targeted ads based on personality characteristics inferred from the analysis of Facebook likes in combination with online survey questions can increase conversion rates by about 50% (Matz et al. 2017). In 2018, Facebook's revenues from the sales of such tailored ads were close to $56 billion (Moore and Murphy 2019).

From the perspective of this narrative, not only are technology companies continually required to find new ways to make monitoring and surveillance palatable to consumers by linking it to convenience, productivity, safety, or health and well-being (Bettany and Kerrane 2016), but they must also constantly push the boundaries of what private information consumers should share (Giesler and Humphreys 2007) through a complex landscape of notifications, reminders, and nudges intended to initiate behavioral change. Thus, as consumer behavior becomes increasingly retailored to the exigencies of behavioral futures, AI can transform consumers into subjects who are complicit in the commercial exploitation of their own private experience, thereby undermining personal control and promoting the concentration of knowledge and power in the hands of those who own their information.

Psychological Perspective: The Exploited Consumer

Data capture experiences are characterized by an underlying tension: consumers recognize that data capture allows AI to serve them through customization, but AI's inherent lack of transparency makes them feel exploited. These feelings of exploitation are fueled by actual and perceived loss of personal control, with important psychological consequences (Botti and Iyengar 2006). The first of such consequences is negative affect, which can turn into demotivation and helplessness. Consider the case of Leila, a sex worker who shielded her identity on her Facebook account and reported being shocked to see some of her regular clients recommended by the "People You May Know" function. According to Leila, "the worst nightmare of sex workers is to have your real name out there, and Facebook connecting people like this is the harbinger of that nightmare." For Leila, as for domestic violence victims or political activists, privacy invasion is not only frightening, it may become a matter of life, death, or time in jail (Hill 2017).

As being in control is a basic need and a precondition of psychological welfare (Leotti, Iyengar, and Ochsner 2010), the second consequence of loss of personal control may be moral outrage. Consider the case of a German consumer who requested his own data from Amazon and received transcripts of Alexa's interpretations of voice commands, even though he did not own any Alexa devices. The consumer relayed his story to a local magazine, which attempted to identify the consumer whose privacy had been compromised. The magazine staff involved in this experience described it as follows: "[we were able to] navigate around a complete stranger's private life without his knowledge, and the immoral, almost voyeuristic nature of what we were doing got our hair standing on end" (Brown 2018).

The third consequence of loss of personal control relevant to data capture experiences is psychological reactance, a state in which a person is motivated to restore control after a restriction (Brehm 1966), which causes more negative evaluations of and hostile behaviors toward the source of the restriction. In marketing, reactance can decrease the likelihood to repurchase and follow recommendations (Fitzsimons and Lehmann 2004). Illustrating reactance in an AI data capture experience is Danielle, a U.S. consumer who installed Echo devices throughout her home, believing Amazon's claims that they would not invade her privacy. When one of her Alexas recorded a private conversation and sent it to a random number in her address book, Danielle said "I felt invaded" and concluded, "I'm never plugging that device in again, because I can't trust it" (Horcher 2018).

In summary, consumers may experience data capture as a form of exploitation: whereas technology companies, firms, and governmental agencies gain financial and political power, consumers lose ownership of their data and feel a loss of control over their lives. As we discuss next, managers should gain a better understanding of feelings of exploitation, as they prevent consumers from seeing the value firms can provide through data capture. This understanding starts at the organizational level and is then translated into decisions about experience design.

Managerial Recommendations: Understanding the Exploited Consumer

Organizational learning. A central programmatic task in addressing the issue of consumer exploitation in AI data capture experiences involves determining and enhancing the organization's level of awareness regarding the sociological and psychological costs raised in the previous sections. Companies should strive toward greater organizational sensitivity around consumer privacy and the current asymmetry in the level of control over personal data. For instance, they should use netnographic observation or sentiment analysis to listen empathetically and at scale to consumers who have experienced exploitation in AI data capture experiences. Furthermore, rather than accepting the surveillance society narrative at face value, firms can use these tools to understand when, how, and whether their own data capture experiences play into versus subvert this narrative. Likewise, companies should draw on insights by privacy scholars and activist movements to question their taken-for-granted beliefs. In doing so, for instance, companies could realize that their own view on privacy default settings might differ markedly from that of a vulnerable consumer group and adjust their processes accordingly (Martin and Murphy 2017).

Organizational learning can also extend beyond the boundaries of the individual firm to encompass other institutions. First, companies could sponsor research aimed at understanding the influence of surveillance society–style thinking on their culture and practice, as well as its negative impact on marketing activities and consumers. Second, companies could adopt a more communal approach to sharing individual organizational learning with other firms, industry associations, educators, and the media. Third, industry groups could collaborate with scholars to create and adopt an algorithm bill of rights for individuals (Hosanagar 2019), which some AI experts have proposed should include a right to transparency, for example, "the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted" (Samuel 2019a).
Experience design. Using this organizational learning, organizations should design improved AI data capture experiences. Recent regulations, such as the European Union's General Data Protection Regulation, aim to limit exploitation by making organizations responsible for giving consumers the possibility to opt into specific data collection processes (e.g., cookies) and to ask for greater clarity on how these data are used.

However, as AI becomes more pervasive and ubiquitous, ensuring consumer consent at all steps of the customer journey may result in an overload of choice and information that decreases instead of increases personal control (Iyengar and Lepper 2000) and exacerbates the negative affective and behavioral reactions illustrated previously. Interventions related to the way in which options are presented—the choice architecture—can reduce the cognitive and affective costs associated with excessive information and choice (Chernev, Böckenholt, and Goodman 2015) and thereby give consumers greater control over their data without overloading them.

Among such interventions, including default options has proven especially effective in facilitating decision making as well as influencing specific behaviors (Thaler and Benartzi 2004). Because individuals tend to passively accept defaults instead of exercising their right to opt out, the selection of defaults by choice architects may lead to suboptimal outcomes when it does not properly consider preference heterogeneity. The personalization of defaults could mitigate this issue (Sunstein 2015), and AI itself could assist consumers in the automatic implementation of preferences about how their data are captured and analyzed.
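Read in implementation terms, personalized defaults could be a thin settings layer that derives each consumer's starting data-sharing configuration from a few stated preferences while leaving every setting overridable. The sketch below is one hypothetical reading of that idea; the category names and preference fields are invented for illustration, not a prescribed design.

```python
# Hypothetical sketch: derive per-user data-sharing defaults from a few
# stated preferences, instead of one-size-fits-all opt-in checkboxes.
DATA_CATEGORIES = ["location", "voice_recordings", "purchase_history"]

def personalized_defaults(stated_prefs: dict) -> dict:
    """Map coarse stated preferences onto per-category defaults, e.g.,
    {"privacy_sensitivity": "high", "wants_personalization": True}."""
    sensitive = stated_prefs.get("privacy_sensitivity") == "high"
    wants_reco = stated_prefs.get("wants_personalization", False)
    # Share only what the consumer's own stated trade-off supports:
    # high sensitivity switches everything off by default; otherwise
    # enable the categories needed for the personalization requested.
    return {category: (not sensitive) and wants_reco
            for category in DATA_CATEGORIES}

def apply_overrides(defaults: dict, overrides: dict) -> dict:
    # An explicit consumer choice always beats the inferred default.
    return {**defaults, **overrides}

settings = apply_overrides(
    personalized_defaults({"privacy_sensitivity": "high",
                           "wants_personalization": True}),
    {"purchase_history": True},      # an explicit opt-in for one category
)
print(settings)
# -> {'location': False, 'voice_recordings': False, 'purchase_history': True}
```

The design point sits in apply_overrides: explicit choices dominate inferred defaults, which is what keeps personalization from becoming another form of lost control.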
More broadly, organizations can limit consumer exploitation by playing an active role in educating consumers about the costs and benefits entailed in AI data capture experiences. For example, the recently overhauled Google Home app clearly communicates what user data have been stored and why. Understanding the potential for exploitation in data capture experiences is useful not only for managers interested in maximizing the value provided to consumers served by the AI but also for researchers interested in uncovering the sociological and psychological underpinnings of the tension that accompanies this experience.

Future Research on the AI Data Capture Experience

Sociological research questions. Future research should investigate how sociocultural forces affect feelings of exploitation in data capture experiences. People from poorer childhood backgrounds have a lower sense of control than those from wealthier ones (Mittal and Griskevicius 2014), and collective self-construal is associated with a lower desire for choice freedom and control (Bernthal, Crockett, and Rose 2005; Markus and Schwartz 2010). Thus, both consumers' socioeconomic status (Research Question A1, or RQA1; see Table 1) and prevailing cultural norms (RQA2) could influence consumers' propensity to feel and be exploited by AI. Other factors, such as education, political orientation, gender, and race (RQA3), could be examined using an intersectionality lens (Crenshaw 1989).

Future research should also explore how the cultural cognitive, normative, or regulatory legitimacy of AI changes over time to influence consumer reactions to data capture (Acquisti, John, and Loewenstein 2012; Humphreys 2010), particularly in light of AI's rapid diffusion in the marketplace. For example, researchers could study how and when increasing levels of familiarity with AI may reduce consumer sensitivity toward exploitation (RQA4).

Psychological research questions. An interesting avenue for future research consists of exploring the role that psychological processes play in interpreting AI data capture experiences as exploitative. For example, researchers could study the role of motivated reasoning (Kunda 1990) in shaping consumer affective reactions to data capture experiences (RQA5): strongly held goals may motivate consumers to accept greater risk of exploitation when the AI is seen as a conduit to goal completion, mitigating negative emotional responses.

Other important open questions concern how the source and type of data used by the AI affect its potential to exploit. For example, an AI-enabled device that is constantly listening to biometric data could, over time, become paradoxically less invasive than one that listens only when activated (Turkle 2008). Complementing recent scholarship on the consequences of personal quantification (Etkin 2016), future research should address how the frequency of data capture (e.g., intermittent vs. continuous) affects perceived exploitation (RQA6). As another example, information collected about the physical environment, such as that acquired by a smart refrigerator, may be less likely to generate feelings of exploitation than information collected about the self, such as that acquired by a fitness tracker (RQA7).

Feelings of exploitation may also differ on the basis of the physical context of consumption (RQA8). Current attempts by companies like Amazon or Google to redefine the family home as a space accessible to corporations rather than a private space may attenuate or exacerbate these feelings. Similarly, physical features of the environment where data collection takes place may differently trigger concerns about exploitation. For example, crowded environments lead to a loss of perceived control, which could decrease willingness to provide data. Concerns about exploitation may also differ on the basis of the device used to interact with AI (RQA9), as research has shown that consumers are more likely to self-disclose when using smartphones versus PCs (Melumad and Meyer 2020).

Table 1. Consumers and AI Experience: Emerging Research Questions (RQs).

A: The AI Data Capture Experience

RQA1: How does socioeconomic status influence the likelihood of feeling exploited?
RQA2: How do cultural norms influence the likelihood of feeling exploited?
RQA3: How does intersectionality normalize or problematize exploitation?
RQA4: How does the diffusion of AI affect feelings of exploitation over time?
RQA5: How does motivated reasoning shape consumer affective reactions in data capture experiences?
RQA6: How does the frequency of data capture affect perceived exploitation over time?
RQA7: How are feelings of exploitation influenced by the nature of the data collected (e.g., environmental, behavioral, physiological)?
RQA8: How does the physical context of data collection affect the likelihood of feeling exploited?
RQA9: Does the experience of data capture depend on the device the consumer is using?
RQA10: When and how will consumers sabotage data collection by AI in response to feelings of exploitation?

B: The AI Classification Experience

RQB1: How do individual differences in awareness of discrimination affect whether a consumer feels misunderstood by AI?
RQB2: How do the social classifications inscribed into AI solutions shape consumer behavior and choices?
RQB3: How do consumers infer which variables AI is using to make personalized predictions?
RQB4: Which types of inferred classifications are more likely to make consumers feel misunderstood?
RQB5: How do uniqueness versus belonging motives affect the likelihood of feeling misunderstood?
RQB6: How does the nature of the task influence the likelihood of feeling misunderstood?

C: The AI Delegation Experience

RQC1: How do feelings of being replaced depend on the perceived “humanness” of an activity?
RQC2: How does feeling replaced by AI affect the perceived acceptability of various behaviors intended to protect or promote the self?
RQC3: As the range of tasks that AI can perform increases over time, how do normative task boundaries around humans versus algorithms shift?
RQC4: What specific consumption contexts make delegation to AI more psychologically aversive?
RQC5: Do consumers compensate for feelings of being replaced by AI in nonconsumption domains?
RQC6: How do instrumental versus symbolic consumption motives determine perceptions of being replaced?
RQC7: Is the likelihood of feeling replaced affected by whether consumers focus on consumption outcomes versus processes?
RQC8: When and how do consumers respond to threats of replacement by AI by constraining the AI’s production capability?

D: The AI Social Experience

RQD1: How do antibias beliefs affect alienation in social experiences?
RQD2: How do cultural differences influence consumer perceptions of social experiences?
RQD3: What are the consequences of AI-enabled social experiences for important societal processes such as children's socialization and gender relations?
RQD4: When are customers more likely to objectify AI in response to alienation?
RQD5: How does the timing of disclosure influence the likelihood of consumer alienation?
RQD6: What is the influence of situational characteristics on alienation?
RQD7: What is the role of brand equity in reducing or facilitating alienation?

E: Interrelationship Between AI Experiences

RQE1: How do the ways in which consumers experience data capture influence perceived resource accessibility in a classification experience?
RQE2: Does aggressive data capture strengthen or weaken social inclusion?
RQE3: Does involving consumers in the validation of assumptions about their preferences shift a classification experience to feel more like a delegation experience?
RQE4: Do changes in feelings of control lead to parallel shifts in data capture and delegation experiences?
RQE5: Do changes in consumer self-identity concerns lead to parallel shifts in classification and social experiences?
RQE6: Are data capture experiences less aversive when demands for data increase together with feelings of empowerment from delegation experiences?

F: Uncharted AI Experiences

RQF1: How does the learner–AI interaction shape learning experiences and affect student satisfaction, motivation, and learning?
RQF2: How does the valence of learning experiences depend on identity relevance and internal attribution of learning outcomes?
RQF3: What motivates consumers to have AI-enabled companionship experiences?
RQF4: What factors determine whether consumers perceive companionship experiences as deceptive or alienating?
RQF5: How do AI solutions that permeate epistemic boundaries between human and machine impact consumer autonomy?
RQF6: How does AI perceive and experience the world and marketplace, and how can firms design these experiences effectively?
Finally, when consumers cannot or do not want to take advantage of the benefits of data capture, psychological reactance toward AI may manifest in adversarial user behaviors, as suggested by the experience of Danielle. Future research can explore the factors that lead consumers to respond to feelings of exploitation with behaviors like sabotaging AI by disabling sensors' inputs, intentionally providing false data by creating fake user profiles, or adopting antisurveillance outerwear to confuse the algorithms controlling facial recognition systems (RQA10).

The AI Classification Experience

Firms leverage the predicting capability of AI to create ultra-customized offerings and maximize engagement, relevance, and satisfaction (Kumar et al. 2019). Sophisticated algorithms consider a wide variety of information, including the characteristics of both current and past consumers. For example, Netflix uses AI to offer personalized movie recommendations based on not only individuals' past viewing history and that of other viewers but also contextual information such as day of the week, time of day, device, and location (Kathayat 2019). Netflix even uses AI to select videoframe thumbnails that can increase subscribers' likelihood to click on a specific show (Yu 2019). Even though prediction interfaces use individual and contextual information, they often refer to information related to other users, either explicitly by mentioning others when framing recommendations (e.g., Amazon noting "customers who bought this also bought") or implicitly by organizing recommendations in terms of communities of users or taste niches (e.g., Amazon Prime drawing attention to movies for "period drama fans"). As consumers are often unaware of the workings of algorithms, they may infer that these recommendations are based on being classified as a certain type of person. Such inferences are amplified by the human tendency for categorical thinking in person- and self-perception (Turner and Reynolds 2011). For example, consumers engage in categorical inference making when they are served behaviorally targeted ads: they attribute the ads they receive to the advertiser labeling them as a person with specific tastes (Summers, Smith, and Reczek 2016). We conceptualize the "classification experience" as one in which consumers perceive AI-enabled personalized predictions to be the result of being classified as a certain consumer type.
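The mechanism behind such other-users framings is, at its core, item co-occurrence across customers' histories. The stripped-down sketch below shows that mechanism on invented data; production recommenders add popularity normalization, implicit feedback, and contextual signals of the kind Netflix uses.

```python
# Stripped-down "customers who bought this also bought": rank items by
# how often they co-occur with the target item across purchase baskets.
# Hypothetical data; real systems also normalize for item popularity.
from collections import Counter

baskets = [                        # one purchase history per customer
    {"kettle", "tea", "mug"},
    {"tea", "mug"},
    {"kettle", "coffee"},
]

def also_bought(target: str, baskets) -> list:
    co_counts = Counter()
    for basket in baskets:
        if target in basket:
            co_counts.update(basket - {target})   # count co-purchases
    return [item for item, _ in co_counts.most_common()]

print(also_bought("tea", baskets))   # -> ['mug', 'kettle']
```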
Classification experiences can be positive because they lead consumers to feel deeply understood either objectively or subjectively. For example, consumer categorizations can be valuable to affirm the self: personalized offers that indicate membership in an aspirational group may help consumers satisfy identity motives when they are perceived as social labels (Summers, Smith, and Reczek 2016). Framings based on other users, such as "people who like this also like," make recommendations more persuasive than those based on the product, such as "similar to this item" (Gai and Klesse 2019), further suggesting that the experience of feeling classified by AI as a certain type of person is often positive. These findings resonate with research demonstrating the psychological benefits of group membership (Reed et al. 2012; Turner and Reynolds 2011). However, classification experiences may also lead consumers to feel misunderstood when they perceive AI as having inaccurately assigned them to a group or as having made biased predictions on the basis of group assignment. At the societal level, classification by AI is linked to a dystopic narrative in which access to resources and freedom is restricted for some groups.

Sociological Context: The Unequal Worlds Narrative

Classification experiences do not exist in a sociological vacuum but are shaped by popular myths. Science fiction stories such as Neill Blomkamp's Elysium have routinely imagined deeply divided police states in which the ruling class draws on algorithms to sustain a regime of inequality and fear. Sociological scholarship on the politics of algorithms (Seaver 2019) has also drawn on this popular imagination to theorize AI in the context of rationalization and quantification (Porter 1996), automated inequality (Dormehl 2014a), uneven information landscapes (Eubanks 2018), and the historical rise of "algorithms of oppression" (Noble 2018) or "weapons of math destruction" (O'Neil 2016). Emphasizing the intersectionality of race and gender with antisemitism, poverty, unemployment, and social class (Crenshaw 1989), these investigations of AI's potential for social classification are particularly insightful. AI is feared to privilege whiteness and undermine the identity projects of minorities (Dormehl 2014b). This contention is consistent with research on the market (bio)politics of race, which has consistently shown the inherently discriminatory potential of marketized representations of culture and ethnicity, and it is also supported by economic critiques that warn against the monopolization of information by a centralized system (Hayek 1945; Polanyi 1948).

Consider Google's corporate mission to "organize the world's information." From an unequal worlds perspective, such a statement is far from politically neutral; rather, it exemplifies the operation of seemingly benign appeals to data automation and quantification in a market that sanctions the production of biased information. In such an ideological system, the designers of AI-enabled college admissions software, for instance, may be convinced that AI can help combat human selection bias. However, because "algorithms that rank and prioritize for profits compromise our ability to engage with complicated ideas" (Noble 2018, p. 118), the resulting AI experience may not only reduce the complex experiences of targeted marginalized populations to a set of more simplified sociodemographic attributes or stereotypes but may also unintentionally expose marginalized applicants to racial profiling, misrepresentation, and economic redlining when used by admissions officers. Likewise, problems can arise when banks use AI to decide whether a consumer is worthy of borrowing money. Although algorithms may make the selection process more efficient, they can also systematically exclude consumers who live in a neighborhood with higher credit defaults (Brown 2019).
The realization that AI can result in racial and social groups experiencing discrimination is an important backdrop for a psychological analysis of consumers' feelings of being misunderstood.

Psychological Perspective: The Misunderstood Consumer

Classification experiences are characterized by an underlying tension between feeling understood and misunderstood. Consumers can feel misunderstood because of perceived incorrect classification, discriminatory use of classification, or a combination of the two. First, consumers are likely to feel misunderstood when they perceive the identity implied by the AI's output as incorrect, either because it is factually inaccurate or because it is based only on one identity, whereas most individuals identify with a host of personal and social selves (Oyserman 2009). Identity-based consumer behavior is often the result of a negotiation between belonging and uniqueness motives playing out across this constellation of identities (Chan, Berger, and Van Boven 2012). In situations where consumers perceive AI predictions to be driven by their membership in a group, uniqueness motives may become relatively more salient. When this happens, group identity appeals may backfire if they are believed to threaten individual agency (Bhattacharjee, Berger, and Menon 2014). This negative response is especially likely when the consumer perceives the identity assigned to them by the AI as noncentral or dated, as in this excerpt from a Spotify Community post (Grandterr 2019):

"The recommendations s*ck:
- Listened to a few anime covers, now all my "Discover Weekly" is filled with disgusting covers. I'm trying to "not like" all of them, but it doesn't work . . . . I've stopped listening to rock years ago and still get rock recommendations."

From this consumer's perspective, the AI used by Spotify seems to have decided that they like anime covers and rock, putting them in a category that they reject or do not see as capturing their multifaceted and evolving self. The consumer is frustrated not only with being misunderstood by the AI but also with their perceived inability to alter such misunderstanding.

Second, consumers may also feel misunderstood when they fear AI is using a social category in a discriminatory way to make biased predictions about them. This is particularly problematic in contexts where these predictions may enhance consumers' vulnerability because they restrict access to marketplace resources (Hill and Sharma 2020). For example, fintech companies increasingly use easily accessible digital information, such as individuals registering on a webpage, to predict their payment behavior and defaults and therefore judge their creditworthiness (Berg et al. 2020). Consider this tweet by a software developer, David Heinemeier Hansson (@dhh, November 7, 2019, https://twitter.com/dhh/status/1192540900393705474):

"The @AppleCard is such a f*ing sexist program. My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple's black box algorithm thinks I deserve 20x the credit limit she does . . . "

This consumer is frustrated because of the AI's inability to understand the reality of his household's finances, but he is also morally outraged because he thinks that his wife's denial of credit was based on her gender. A perception of vulnerability such as this can have negative effects on the self-concept. This can occur, for example, when minorities whose financial choices are systemically restricted then frame the self as "fettered, alone, discriminated, and subservient" and experience reductions in self-esteem and self-efficacy (Bone, Christensen, and Williams 2014).

Consumers can also experience a combination of the two ways of feeling misunderstood mentioned previously: they can be incorrectly assigned to a category, and this incorrect assignment can exacerbate existing limitations on choice and freedom for vulnerable consumers. Facial recognition software, for instance, uses AI to identify a person by comparing a target facial signature to databases of known images. The range of applications of such software includes mobile devices (e.g., Apple's Face ID), social media (e.g., Facebook's tagging feature), and physical spaces (e.g., airport customs officials). Whereas a failure of Apple's Face ID to start one's own device may result in frustration, incorrect identification in other applications may result in ethical violations. Consider the open letter to Amazon CEO Jeff Bezos written by the Congressional Black Caucus on the potential danger caused by Amazon's facial recognition tool, Rekognition:

Communities of color are more heavily and aggressively policed than white communities . . . . We are seriously concerned that wrong decisions will be made due to the skewed data set produced by what we view as unfair and, at times, unconstitutional policing practices. (Richmond 2018)

In a subsequent test, Rekognition indeed incorrectly matched 28 current members of the U.S. Congress with people who had committed a crime, and the false matches were disproportionately for people of color (Snow 2018). In June 2020, Amazon suspended police use of this technology (Fitch 2020). We next examine how managers can understand and address the risk of consumers feeling misunderstood.

Managerial Recommendations: Understanding the Misunderstood Consumer

Organizational learning. How does an organization best surface and address accounts of biased treatment? Unlike data capture errors, which may be lagged and hard to correct in real time, classification errors produce signals soon after they occur. They also happen in very different parts of an organization. For instance, if an AI system has rejected a college applicant due to a biased algorithm, one might assume that such a classification error will almost immediately surface in the college's admissions department and data—data that in turn might be used to structure the next round of applications.
Owing to this data dependency, organizations may not even be aware that a given distribution or algorithm is the result of a classification error. In the case of a college, for instance, classification might be regarded as a natural outcome of the competitive process by those in charge of managing the admissions process. Thus, unlike data capture failings that require the specific attention of software programmers and data scientists, addressing classification errors requires organizations to focus on marketing and consumer-facing departments and to examine whether these departments' databases or, more abstractly, the organizations' taken-for-granted understanding about whom they have served and should serve and why, carry entrenched social and racial biases.

Organizations must thus focus on learning about the specific biases that might be present in their own algorithms and processes to root them out. In the United States, the Algorithmic Accountability Act of 2019 would require companies to assess their AI systems for "risks of 'inaccurate, unfair, biased, or discriminatory decisions' and to 'reasonably address' the results of their assessments" (MacCarthy 2019, p. 1). Rather than reacting to a changing regulatory landscape, firms should proactively collaborate with technology experts and thought leaders in computer science, sociology, and psychology to develop and conduct such audits. Firms can then share both their audit processes and outcomes, for example by engaging in lobbying efforts to ensure that regulations passed in the name of consumer welfare include meaningful and technologically appropriate provisions to protect consumers from discrimination.
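What the audits recommended above might minimally compute can be illustrated with a simple disparity screen: group-wise outcome rates, plus a flag when one group's rate falls well below the best-treated group's. The sketch below applies the common four-fifths screening heuristic to a hypothetical decision log; a real audit would examine multiple metrics (e.g., false positive rates, as in the Rekognition test) and pair the numbers with qualitative review.

```python
# Minimal disparity screen over an algorithmic decision log.
# Hypothetical data layout; the four-fifths rule is one common
# screening heuristic, not a complete fairness audit.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {group: approved[group] / total[group] for group in total}

def disparity_flags(rates, threshold=0.8):
    # Flag any group whose approval rate is below `threshold` times the
    # best-treated group's rate (the "four-fifths" screen).
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = approval_rates(log)
print(rates)                   # -> {'A': 0.667, 'B': 0.333} (approx.)
print(disparity_flags(rates))  # -> {'A': False, 'B': True}: audit group B
```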
sumers’ habits, norms, and preferences. This lens can usefully
unearth the existence of ideological blind spots in the models
Experience design. Organizational learning should be leveraged
employed by firms and examine the uneven landscapes of
in the design phase to develop AI classification experiences
experiences and choices that these models produce when con-
that minimize consumers’ likelihood of feeling misunderstood.
sumers are subjected to them (RQB2).
Managers could build on the insights gained from listening to
consumers who felt they were classified on the basis of nar- Psychological research questions. Future research should explore
rowly defined identities to experiment with diversifying and how psychological processes affect the extent to which consu-
broadening the content they provide and to propose products mers feel misunderstood in classification experiences. Open
that are dissimilar from the user’s preference profile. Indeed, questions concern lay beliefs about how organizations create
Spotify has launched Taste Breakers, a function that introduces AI classifications (RQB3) and whether certain inferred cate-
customers to music to which they normally do not listen. Sim- gorizations are especially likely to induce feelings of being
ilar attempts at “bursting the bubble” are especially important misunderstood (RQB4). For example, research on attributional
in light of the possibility that, by optimizing information pro- ambiguity suggests that stigmatized consumers may attribute
vision on the basis of past choices, AI both ignores long-term AI classifications to bias toward their group identity on the
goals that do not reflect short-term behaviors (André et al. part of the algorithm rather than to other causes (Crocker and
2018) and increases attitude extremity and polarization Major 1989).
(Flaxman, Goel, and Gao 2016). Firms could also address More generally, feeling misunderstood may be more likely
feelings of being misunderstood by asking consumers to vali- in contexts where consumers value uniqueness over belonging-
date AI-based inferences. As greater user participation in the ness (RQB5). For example, patients are reluctant to use med-
implementation of algorithms increases satisfaction in decision ical AI due to a sense that it cannot account for their unique
support systems (Wierenga and Oude Ophuis 1997), periodi- characteristics and circumstances as well as human doctors can
cally offering consumers the opportunity to update the AI’s (Longoni, Bonezzi, and Morewedge 2019). The nature of a task
view of the self could similarly reduce potential frustration. may also have an influence (RQB6): Consumers tend to exhibit
The nature of a task may also have an influence (RQB6): Consumers tend to exhibit greater aversion toward algorithms for subjective tasks, which are based on personal opinions or intuitions, than for objective ones, which are based on quantifiable and measurable facts (Castelo, Bos, and Lehman 2019). Given that many AI systems learn and predict subjective taste, negative reactions to inferred classification might be especially common.

The AI Delegation Experience

A "delegation experience" is one in which consumers involve an AI solution in a production process to perform tasks they would have otherwise performed themselves. These tasks can be decisions, such as when Google Assistant, at the consumer's request, calls a hairdresser, matches the consumer's and the hairdresser's calendars, and uses a human-like voice to book an appointment. They can also be actions in the digital world, like those performed by Smart Compose, a writing tool that uses AI to help consumers write emails. Finally, they can be actions in the physical world, such as when the Nest Thermostat learns the consumer's temperature preferences and programs itself to fit them.
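As one concrete illustration of this third kind of delegated task, the sketch below shows a toy way a thermostat-like device could "learn" a preference: by nudging a stored setpoint toward each manual adjustment. The logic is hypothetical and is not a description of Nest's actual algorithm.

```python
# Toy preference learning for a smart thermostat: drift the stored
# setpoint toward each manual override the consumer makes.
# Hypothetical logic, not how any commercial device works.
class LearningThermostat:
    def __init__(self, setpoint: float = 20.0, rate: float = 0.3):
        self.setpoint = setpoint     # degrees Celsius
        self.rate = rate             # how quickly habits override defaults

    def manual_override(self, chosen_temp: float):
        # Exponential moving average: recent choices weigh more heavily.
        self.setpoint += self.rate * (chosen_temp - self.setpoint)

    def schedule(self) -> float:
        # The delegated task: the device now sets the temperature itself.
        return round(self.setpoint, 1)

thermostat = LearningThermostat()
for temp in [22.0, 22.5, 22.0]:      # three evenings of manual tweaks
    thermostat.manual_override(temp)
print(thermostat.schedule())          # -> 21.4, drifting toward ~22
```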
By not having to engage in the tasks the AI performs on their behalf, consumers in delegation experiences can feel empowered in two distinct ways. First, consumers can spend their time and effort on activities they find more satisfactory and meaningful: they can work less and enjoy the positive effects of leisure (Fishbach and Choi 2012), or they can work better and enjoy greater happiness by delegating extrinsically motivated tasks to AI and keeping intrinsically motivated tasks for themselves (Botti and McGill 2011). Second, consumers can focus on activities that are more suitable to their skills and leave to AI those on which they underperform. This way, they can enhance self-efficacy, or the perceived ability to master the environment to produce a desired outcome (Bandura 1977).

Given the empowering benefits of delegation experiences, managers may be tempted to offer consumers increasingly more opportunities to delegate tasks to AI. However, like the case in which the mere presence of too many choice options can reduce consumers' satisfaction (Iyengar and Lepper 2000), the mere presence of too many delegation opportunities may lead to aversive consequences. We next examine this tension between AI's potential to both empower and replace consumers at both the societal and the individual level.

Sociological Context: The Transhumanist Narrative

To analyze the negative aspects of delegation brought about by the possibility of being replaced from a sociological perspective, it is helpful to examine how the heuristics that have guided consumers' interactions with AI tools have been historically understood in popular culture. We draw on widespread science fiction and social science literature that falls into the so-called "transhumanist" genre. From Fritz Lang's Metropolis to Isaac Asimov's I, Robot, and from Mary Shelley's Gothic Frankenstein to James Cameron's Terminator, countless cautionary tales have profiled the dangers of reimagining human capabilities and characteristics through a technological mirror. Specifically, these stories fuel the view that, by transcending human limitations, technology eventually molds into an omnipotent superhuman and subsequently constitutes the ideal of technological perfection—implying new standards.

Critics of this transhumanist perspective (Sassen 2014, p. 23) have linked AI to "new logics of expulsion" and economic redundancy that arise as AI approaches aging, health, productivity, and other domains through the transhumanist lens of limitless performance rather than standard levels of well-being or productivity. These observers fear that AI solutions will result in significant unemployment, leading to a rapid increase in surplus populations whose AI experience will be their de facto removal from the productive aspects of the social world.

In the social science literature, this superhuman narrative is paralleled in the Computers Are Social Actors and Human–Computer Interaction paradigms, according to which the same heuristics used for human interactions are mindlessly applied to computers (Grudin 2017; Nass and Moon 2000). Since the 1960s, technology companies have periodically imbued the productive aspects of AI technology and machine prototypes with mythic narratives emphasizing that science and technology will eventually accomplish human immortality.

These transhumanist ideas, which emphasize technological progress as an unstoppable force that alters human experience (Hayles 1999), have been deeply inscribed in contemporary AI experiences, from the promise that the Roomba vacuum cleaner could perform tasks more effectively than humans to the promise that 23andMe could help in the creation of genetically optimized offspring. However, the transhumanist preoccupation with Promethean aims underlying many contemporary AI experiences also leads to systemic dehumanization (Fukuyama 2002; Habermas 2003). For instance, human perception of mastery over the environment depends on not being subject to unilaterally imposed specifications. A world in which our interactions with machines are fueled by transhumanist ideals will endorse a glorification of capitalism's endless creativity while treating destructiveness and human replacement as normal costs of doing business (Schumpeter 1942). Furthermore, an economic obsession with "perfection," "progress," and "efficiency" will promote the rise of the "useless class" (Harari 2017), individuals whose skills are no longer developed or demanded, thus fundamentally eroding democracy and social justice.

Psychological Perspective: The Replaced Consumer

Delegation experiences can help consumers feel empowered but can also raise concerns about being replaced. The mere recognition of AI's capability to act as a substitute for human labor can be psychologically threatening for three main reasons. First, people have a strong desire to attribute consumption outcomes to their own skills and effort (Bandura 1977; Leung, Paolacci, and Puntoni 2018). Research on human–computer interaction has shown that humans often see computers as disempowering because they deprive humans of the sense of accomplishment related to an activity, so much so that humans tend to credit themselves for positive outcomes and blame computers for negative ones (Moon and Nass 1998). In contexts where products are crucial to the experience of having an identity as a certain type of person (Reed et al. 2012), delegation experiences may feel tantamount to cheating. In the fishing industry, for example, AI can help anglers be more effective in location and bait decisions. However, in the words of biologist Culum Brown:
Puntoni et al. 11

Research on human–computer interaction has shown that humans often see computers as disempowering because they deprive humans of the sense of accomplishment related to an activity, so much so that humans tend to credit themselves for positive outcomes and blame computers for negative ones (Moon and Nass 1998). In contexts where products are crucial to the experience of having an identity as a certain type of person (Reed et al. 2012), delegation experiences may feel tantamount to cheating. In the fishing industry, for example, AI can help anglers be more effective in location and bait decisions. However, in the words of biologist Culum Brown:

It is really getting kind of unfair. If you are going to use GPS to take you to a location, sonar to identify the fish and a lure which reflects light that humans can't even see, you may as well just go to McDonald's and order a fish sandwich. (The Economist 2012)

Second, outsourcing labor to machines prevents consumers from practicing and improving their skills, which can negatively influence self-worth and contribute to a satisficing tendency by which individuals settle for a level of engagement that is just good enough. Consider the experience of journalist John Seabrook. While composing an email to his son, Seabrook started the sentence "I am p . . . ," intending to write "I am pleased," but resolved instead to accept the suggestion of Google's Smart Compose, "I am proud of you." After hitting Tab to accept the suggestion, Seabrook (2019) muses:

What have I done? Had my computer become my co-writer? That's one small step forward for artificial intelligence, but was it also one step backward for my own? . . . I'd always finished my thought by typing the sentence to a full stop, as though I were defending humanity's exclusive right to writing, an ability unique to our species. I will gladly let Google predict the fastest route from Brooklyn to Boston, but if I allowed its algorithms to navigate to the end of my sentences how long would it be before the machine started thinking for me?

Finally, outsourcing tasks to AI can lead consumers to experience a loss of self-efficacy. Self-efficacy is an antecedent of personal control (Bandura 1977), and it is heightened when individuals are actively engaged in creative tasks (Dahl and Moreau 2007; Norton, Mochon, and Ariely 2012). The notion that being productive is a way to feel in control is consistent with findings showing that consumers who experience low control attempt to reestablish it by choosing products that require higher, versus lower, effort to achieve a desired outcome (Cutright and Samper 2014). In line with this view that delegation can lead to loss of control, drivers involved in GPS-related accidents tend to describe their experience in terms of surrendering control to the machine. Take, for instance, the tourists who drove their car into the ocean trying to reach an Australian island and recounted that the GPS "told us we could drive down there . . . It kept saying it would navigate us to a road" (Milner 2016).

The tension between being empowered and replaced is relevant from a managerial perspective because AI designers need to decide how delegation experiences should be designed to protect self-efficacy and self-identity. We next discuss potential recommendations emerging from the sociological and psychological analysis of this tension.

Managerial Recommendations: Understanding the Replaced Consumer

Organizational learning. Companies can start by learning how to integrate the human desire for self-efficacy into corporate discourse in two main ways. First, they can collaborate with family scholars, workplace psychologists, and health sociologists to understand the consequences of human replacement by AI. Second, they can engage in conversations with consumers to gain greater insight into which activities they prefer to reserve for themselves versus delegate to AI, and how these preferences shift across consumers, identities, and tasks. Organizational design and personnel policies can facilitate this learning by ensuring that the insights gained through external collaborations and consumer listening permeate the firm's culture, especially in the more technical functions. For instance, technology firms could hire experts in creativity such as artists, artisans, or chefs into AI-focused experience design roles.

Firms could also learn from organizations that protect, support, and enhance abilities that are conceived as intrinsically "human" and on which individuals remain superior to machines, such as performing complex tasks, adapting to changes, using emotional intelligence, and offering nuanced judgments in unstructured environments (Hume 2018). Thus, collaborations with museums, theaters, and universities' humanities departments can inspire managers to understand how AI can preserve, rather than subvert, traditional human values such as creativity, collaboration, and community (Brunk, Giesler, and Hartmann 2017).

Experience design. The learning achieved in the previous phase should serve as the bedrock on which AI designers decide how to model delegation experiences to protect self-efficacy and self-identity (Leung, Paolacci, and Puntoni 2018). Division of labor in production processes can have positive effects on demand if consumers feel they have the competence to make sound decisions about the tasks in which they decide to engage (Fuchs, Prandelli, and Schreier 2010). Thus, AI can be conceived as a platform to enhance intrinsically human skills and values. In the medical domain, for example, the benefits of AI-powered surgical robots for consumers depend on the way in which the surgeon's input and supervision are designed. Surgical robots are more precise than humans, can make quicker and more reliable diagnoses, and are more democratic and cost-efficient than current systems because they can intervene outside of hospitals. Still, the structure of surgeons' supervision of the robots is central to the success of this technology, both because patients are afraid of being operated on by a machine and because the AI cannot yet outperform human doctors in some critical technical and social skills (Max 2019).
Given the link between self-efficacy and control, the design of delegation experiences could also consider the extent to which consumers make choices and initiate actions (Carmon et al. 2020; Schmitt 2019). For example, autonomous vehicles should allow consumers to customize peripheral features to avoid the perception of a lack of control (André et al. 2018), and digital assistants in computer games should not be anthropomorphized, to preserve players' sense of autonomy (Kim, Chen, and Zhang 2016). The classic finding that cracking fresh eggs into a premade Betty Crocker cake mix might be enough to reestablish consumers' self-worth and improve adoption (Marks 2005) still resonates in the context of AI, as the amount of control needed by consumers to reduce a self-efficacy threat can be quite small. For instance, offering users the possibility to correct an algorithm's output, even if only slightly, is enough to increase their likelihood of using the superior, although imperfect, algorithm rather than the preferred, inferior human forecast (Dietvorst, Simmons, and Massey 2016).

Future Research on the AI Delegation Experience

Sociological research questions. The extent to which consumers feel replaced by AI is likely shaped by cultural narratives about AI and by the shared understanding of what it means to be productive. Activities that tend to be perceived as if they ought to fall to human skills and competence (Castelo, Bos, and Lehmann 2019) should be more likely to spur feelings of being replaced (RQC1). Consider a self-driving car choosing between stopping and crossing at an intersection versus choosing between swerving and killing one pedestrian or not swerving and killing several pedestrians (Bonnefon, Shariff, and Rahwan 2016): the car's passenger may feel more replaced in the latter case, which involves a moral dilemma, than in the former case, which involves a mechanical decision. Furthermore, feeling replaced by AI may alter the social or moral acceptability of behavior and its likelihood of occurrence (RQC2). For example, self-protective behaviors appear more moral when adopted by autonomous vehicles than by humans (Gill 2020). Perceptions of what ought to fall to human competence may, however, shift rapidly as AI technology advances (RQC3).

Negative reactions to feeling replaced by AI are likely to differ across consumption contexts (RQC4). Future research can explore whether delegation to AI is less threatening in categories where consumers are already familiar with recommendation agents (e.g., entertainment), are less confident in their own preferences (e.g., finance), are open to experimentation (e.g., food), and can trust the AI brand (JWT Intelligence Wunderman Thompson 2016). As AI encroaches on an ever-expanding set of human activities, researchers could also explore whether feelings of replacement in one domain could motivate consumers to seek control in others (RQC5). For example, will consumers engaged in daily delegation experiences become more controlling in nonconsumption domains, such as politics?

Psychological research questions. Future research should examine when the psychological processes that lead to the experience of feeling replaced by AI are activated, as well as the consequences of such feelings. For example, is the extent to which individuals perceive delegation experiences as a threat to the self a function of whether consumption is motivated by instrumental or symbolic motives (RQC6)? Preferences for human over robotic labor tend to be stronger in symbolic consumption contexts (Granulo, Fuchs, and Puntoni 2020), and the same might apply in the case of one's own labor: whereas for most consumers, being replaced by Nest in setting their home's temperature is likely perceived as desirable, for those whose identity is tightly linked to housekeeping, this replacement may be seen as aversive (Leung, Paolacci, and Puntoni 2018).

A related topic pertains to how a focus on the outcome or on the process differently influences perceptions of delegation experiences (RQC7). Products are means to ends, but the process of consumption, as well as the performative display of skill and knowledge, can often be intrinsically valuable to consumers (Reed et al. 2012). For example, for a person who is nurturing an angler's image, the extent to which AI-driven fishing tools are seen as self-threatening may depend on the reference group's norms about task delegation and the relative importance placed on the outcome (e.g., a bigger catch) or the process (e.g., finding a good location for fishing).

When self-efficacy and control are threatened in delegation experiences, consumers may employ different strategies to restore them, including increasing agency and seeking structure and boundaries (Landau, Kay, and Whitson 2015). Thus, future research can explore whether and when consumers who feel replaced opt to constrain the involvement of the AI in production processes (RQC8) to both reaffirm self-efficacy by increasing their own role in these processes and seek structure by physically and/or mentally bounding AI features. This deliberate limitation of the AI is similar to situations in which consumers restrict their experience with smart objects to the most basic and least innovative forms of interaction (Hoffman and Novak 2018).

The AI Social Experience

AI's capability for engaging in reciprocal communication produces what we term a "social experience." We focus on two types of social experiences: when consumers know at the outset that the interaction partner is an AI, such as when using a voice assistant like Apple's Siri, and when they interact with an AI representing an organization without necessarily knowing initially that it is nonhuman, such as when receiving customer service from an automated chatbot. In both cases, consumers have a social interaction with AI as part of a consumption experience in which the end goal is not the AI interaction. We do not focus on two other types of interactions: when consumers are never aware that the interaction partner is a simulated person (because the experience would be perceived as a normal social interaction) and when consumers interact with the AI as an end in itself, as in the case of a robotic pet.
Social experiences are beneficial when consumers can find in AI a vehicle for information exchange that connects them with the firm in a natural way. This often happens when anthropomorphic features are incorporated in AI-enabled products: anthropomorphic cues increase trust toward self-driving cars (Waytz, Heafner, and Epley 2014) and reduce perceived risk when consumers are in a position of power (Kim and McGill 2011), as when they interact with a virtual assistant. More generally, developments in social robotics are making it possible to create comfortable and even emotionally meaningful AI-powered service interactions (Van Doorn et al. 2017). Social AI experiences are also beneficial because they can be more efficient, especially in situations where the alternative to AI is not a human interaction but the absence of any interaction: AI provides consumers access to firms through "conversational commerce."

Despite these advantages, social experiences may also alienate consumers. Negative consumer reactions to simulated social interactions can go well beyond the occasional disappointment, as these interactions emerge in a rich cultural context where they can easily trigger societal and individual concerns with unbalanced intergroup relations and discrimination.

Sociological Context: Humanized AI Narrative

The sociological starting point for social experiences is the widespread cultural fascination with humanized machines (Adam 1998; Haraway 1985; Suchman, Roberts, and Hird 2011), specifically, the preference for machines that emulate the human body and traits. For instance, a well-noted trope in science fiction is the pursuit of the perfect artificial woman (Hayter 2017), a male fantasy of a beguiling, seductive, and sexually obliging object (Rose 2015). These female robots or "gynoids" are routinely imagined as "basic pleasure models" in Philip K. Dick's Blade Runner and sex workers in Michael Crichton's Westworld, or they are traded like used cars in Steve de Jarnatt's Cherry 2000.

This cultural preference for humanized AI is amplified by the widespread use of anthropomorphized chatbots and voice assistants in contemporary AI markets. Humans are less open, agreeable, conscientious, and self-disclosing when they interact with AI versus humans (Mou and Xu 2017). However, these perceptual barriers can be overcome, and intimate experiences can be accomplished, when AI products feature human characteristics, behaviors, and language, thus ultimately becoming "artificial besties."

Nevertheless, in this narrative, AI companies that strive for greater human touch cannot ignore that AI products and services modeled as "obliging, docile, and eager-to-please [human] helpers" often contribute to the social alienation of particular groups in society (West, Kraut, and Chew 2019, p. 104). Consistent with this finding, from the iconic robot character Maria in Metropolis to Apple's Siri, patriarchal norms and preferences embedded in seemingly benign AI experiences have the potential to engage only certain types of users, such as white men, while alienating others, such as women and racial minorities (Adam 1998; Hayles 1999; Haraway 1985).

From this perspective, an instance such as Siri's earlier programming to answer users who say, "you're a slut" with "I'd blush if I could" (Rawlinson 2019) would not just be evidence of biases within the male-centric technology sector and of the fact that AI mirrors the misogyny concealed in language patterns but also diagnostic of the tendency to undermine AI's social and inclusive possibilities. By collapsing dualistic categories such as male versus female, for instance, social experiences could at least partially ease the social isolation brought about by misogynous and racial stereotyping. At the same time, because anthropomorphized AI typically reproduces such dualistic categories to maximize consumer engagement (e.g., men who treat women as assistants, women who are more assistant-like), social experiences have the potential to exclude rather than include and to alienate rather than connect certain groups of consumers.

Psychological Perspective: The Alienated Consumer

AI social experiences have the power to bolster consumer–firm relationships but also to alienate consumers. We identify two main types of alienation engendered by AI social experiences. The first type can occur with any failed automated customer service, as exemplified in this exchange between a customer and the chatbot UX Bear (Wong 2019):

Bot: "How would you describe the term 'bot' to your grandma?"
User: "My grandma is dead."
Bot: "Alright! Thanks for your feedback. [Thumbs up emoji]"

This type of alienation may explain consumers' widespread resistance to replacing humans with machines (Castelo, Bos, and Lehmann 2019; Leung, Paolacci, and Puntoni 2018). For example, consumers report feelings of discomfort when interacting with "social robots" in service contexts (Mende et al. 2019), and customers' responses in a field study became markedly more negative when they were informed in advance that their interaction partner would not be a human (Luo et al. 2019). The potential of AI to trigger alienation is also evident in the resurgent interest in social connections that are unmediated by technology, such as authentic consumption experiences (Beverland and Farrelly 2010) and more personal marketing exchanges (Van Osselaer et al. 2020).

The second type of alienation results from AI's failure to interact successfully with specific groups of consumers. For example, the UK government's reliance on AI to handle claims to its social security program led to experiences like that of Danny Brice, who has learning disabilities and dyslexia and describes his attempts to use the automated Universal Credit program as follows (Booth 2019):
I call it the black hole . . . . I feel shaky. I get stressed about it. This is the worst system in my lifetime. They assess you as a number not a person. Talking is the way forward, not a bloody computer. I feel like the computer is controlling me instead of a person. It's terrifying.

Thus, AI can exacerbate existing barriers that prevent specific social groups from accessing essential social services, reinforcing societal inequity. Another example of how alienating social experiences can feed inequality is chatbots programmed without considering how existing discrimination in society may affect their operation, such as when Tay, a Twitter bot created by Microsoft, began offering white supremacist answers to users soon after its launch, with exchanges like the following (Me.me 2020):

User: "What race is the most evil to you?"
Bot: "Mexican and black."

The cultural narratives of oppression and discrimination underlying this example are even more apparent in the context of personal virtual assistants. Journalist Sigal Samuel recounts working on a piece about sexist AI (Samuel 2019b):

I said into my phone: "Siri, you're ugly." She replied, "I am?" I said, "Siri, you're fat." She replied, "It must be all the chocolate." I felt mortified for both of us. Even though I know Siri has no feelings, I couldn't help apologizing: "Don't worry, Siri. This is just research for an article I'm writing!" She replied, "What, me, worry?"

Alienating social experiences such as this, in which women face societal pressures around their appearance, may lead consumers to denigrate and belittle the AI, similarly to situations in which individuals derogate outgroup members to reaffirm self-esteem following an identity threat (Branscombe and Wann 1994). Dissatisfaction with a voice-enabled device might produce verbal responses that emphasize its artificial and worthless nature. The tendency to objectify others, and women in particular, is well known (Fredrickson and Roberts 1997), and it should be stronger when the interaction partner is, in fact, an inanimate entity, however human-like its communication. Indeed, conversational failures lead consumers to express more frustration with AI when it has a female rather than a male voice (Hadi et al. 2020). Firms risk translating this denigration of AI into behaviors that reinforce inequality. As technology enables companies to create automated interactions that are more and more like real human interactions, a new set of ethical issues confronts both organizations and marketing researchers, as we discuss in the next sections.

Managerial Recommendations: Understanding the Alienated Consumer

Organizational learning. To effectively manage AI social experiences, companies should learn how to acknowledge and accommodate the heterogeneity of human interaction styles and needs. To this end, firms should collect information directly from consumers who have experienced alienation in their interactions with AI. In addition, firms can leverage technology to gauge and measure alienation (operationalized using measures like the amount of stress in the customer's voice) in chats with AI service providers to develop generalizable insights about when alienation is most likely to occur. Firms should also interact with psychologists, sociologists, gerontologists, and other experts to learn about both the causes and the consequences of alienation.

Organizational learning should also ensure that definitions of anthropomorphism do not draw on and calcify harmful stereotypes about social categories and the way they interact. One way to do so is to break with organizational cultural conventions that idealize AI as a passive and subservient humanized other by involving experts like linguists, critical theorists, and social psychologists who study the subtle ways in which stereotyping affects communication. For example, disseminating information throughout an organization about the potential societal consequences of exposure to subservient female AIs may shift AI designers away from using female names and voices as defaults (Teich 2020).

Experience design. Using the greater sensitivity emerging from organizational learning activities, firms can improve the design of AI social experiences. As timely and appropriate firm responses can do much to mitigate the harmful consequences of service failure (Hart, Heskett, and Sasser 1990), firms should work to increase the effectiveness of interactive AI applications to minimize the likelihood of alienation. Research shows that consumers respond positively when AI service providers personalize the interactions, for example by using the customer's name and explaining the reasons for malfunctions (Carmon et al. 2020). Relatedly, firms should also ensure easy and swift transitions from AI to human representatives when the interaction becomes difficult or aversive.

To avoid the perpetuation of harmful stereotypes, companies could also strive to develop AI that is less, rather than more, humanlike (Hadi et al. 2020), and indeed, software developers have begun investigating the creation of gender-neutral voices (Sydell 2018). This requires a radical change in the mindset of many AI designers (and marketing academics), who often take it for granted that anthropomorphism fosters better relationships with customers (Kim, Chen, and Zhang 2016). Organizations should also evaluate the potential consequences of using AI for access to basic social services for consumers like Danny. When AI is deployed to provide important welfare services, designers need to recognize the barriers that they can create for specific user groups, even when the technology has satisfied standard performance benchmarks.

Finally, instead of worrying solely about designing to improve human–AI interaction, firms could address alienation by considering how AI design can improve human–human interaction. Firms can design social experiences that help support what Epp and Velagaleti (2014) call "care assemblages" by connecting individuals to dear ones in ways that are reminiscent of popular social media strategies designed to foster and satisfy consumers' social goals (Epp, Schau, and Price 2014).
and satisfy consumers’ social goals (Epp, Schau, and Price Agenda for Future Research
2014). Thus, companies could actively shift from understand- on Consumers and AI
ing AI as a substitute for humans toward understanding AI as
an interface that facilitates social connection (Farooq and We developed a framework to structure our understanding of
Grudin 2016). consumers’ interaction with AI by defining and contextualizing
the AI data capture, classification, delegation, and social
experiences using both sociological and psychological lenses.
Future Research on the AI Social Experience In this final section, we go beyond these four experiences to
identify additional future research questions in two areas: inter-
Sociological research questions. Consumers vary in the extent to
relationships between the four experiences and new AI experi-
which they hold antibias beliefs and are willing to take action to
ences that may emerge along with new capabilities. These
address bias in society (Ivarsflaten, Blinder, and Ford 2010).
additional research questions are also included in Table 1.
Those who are more concerned about AI fostering alienation
may be particularly likely to reject the idea that AI can be a true
social partner (RQD1). Cultural differences are also likely to Interrelationships Between Experiences
influence the extent to which consumers perceive social experi-
ences with AI as alienating (RQD2). Asian consumers feel a Although we discussed the four consumer AI experiences sep-
stronger connection to both people and things than Western arately, our framework is not intended to suggest that they exist
consumers and, as a result, have shaped their social interactions independently. On the contrary, these experiences could be
with AI in more personal ways: whereas AI social experiences seen as different aspects of the same customer journey and,
in the West are mainly utilitarian and involve disembodied as such, could influence each other (Lemon and Verhoef
personal assistants, those in the East involve human and 2016). An important avenue for future research is to explore
animal-appearing robots that are assumed to serve and improve where and how consumers’ experience with one AI capability
society (Belk, Humayun, and Gopaldas 2020). directly affects their experience with another AI capability
If, over time, AI social experiences become commonplace, (Giesler and Fischer 2018). For example, whether consumers
future research should explore their broader interpersonal and feel served versus exploited in an AI data capture experience is
societal consequences (RQD3). Just as the synthetic and unrea- likely to affect a subsequent AI classification experience. Con-
listic nature of pornography has been accused of distorting teens’ sumers who feel exploited may be more likely to worry about
sexual expectations (Owens et al. 2012), AI social experiences AI inappropriately using their personal data to regulate access
might increase the prevalence of sexist language if they trigger to valued resources (RQE1). Similarly, intrusive data capture
female objectification (Hadi et al. 2020). Researchers could also requests might foster consumer alienation (RQE2). For
build on literature on intergroup relations, such as Haslam’s instance, students who view an AI-enabled teaching assistant
(2006) theory of dehumanization, to investigate the conditions such as Packback.co as overly inquisitive might feel less
under which objectification of AI is more likely to occur (RQD4). included in the virtual classroom and less likely to participate
in communal activities such as online discussion boards. Future
Psychological research questions. An information processing per- research can also explore whether consumers are more likely to
spective could shed light on how AI social experiences are perceive an AI classification as benefiting them when they are
interpreted and evaluated. The timing of disclosure that the asked to validate inferences made by the AI, turning a classi-
interaction partner is, in fact, an algorithm may influence con- fication experience into a delegation one (RQE3).
sumer response to social experiences (Luo et al. 2019), simi- Another avenue for research is related to the identification of
larly to the “change of meaning” that occurs when consumers additional ways in which AI experiences influence each other by
realize that a message is meant to influence their behavior uncovering shared theoretical foundations. For instance, the data
(Friestad and Wright 1994). Thus, alienation might be more capture and delegation experiences share an emphasis on con-
likely to emerge if consumers question the company’s intention cerns about personal control, as interacting with AI often involves
behind disclosing the nature of the interaction partner (RQD5). giving up at least some control over personal data and production
Moreover, research on the effects of disclosure on word of processes (RQE4). Similarly, classification and social experi-
mouth (Tuk et al. 2009) and product placement (Campbell, ences share an emphasis on concerns about self-identity, as inter-
Mohr, and Verlegh 2013) shows that situational factors may acting with AI often influences inferences about how AI
influence consumer reactions through an effect on cognitive understands the self and feelings of belonging (RQE5). Confirm-
capacity, and researchers can examine how these factors also ing the relevance of these theoretical perspectives, personal con-
affect alienation (RQD6). trol and self-identity have been recognized as key concerns in the
Future research could also explore the role of brand equity nascent literature on consumer AI (André et al. 2018; Belk,
(RQD7). As brand attachment influences consumer expecta- Humayun, and Gopaldas 2020; Carmon et al. 2020; Schmitt
tions and can shield companies from negative appraisals in 2019). A search for shared theoretical foundations may stimulate
ambiguous situations (Lee, Frederick, and Ariely 2006), stron- academic research and help AI designers form a more holistic
ger consumer–brand relationships may also insulate consumers understanding of consumers’ interaction with AI. For example, as
from experiencing interactions with AI as alienating. consumers come to understand AI as an independent intelligence
For example, as consumers come to understand AI as an independent intelligence operating in the marketplace to whom they can delegate tasks and with whom they can interact, marketplace metacognition and social intelligence theory (Wright 2002) can be leveraged to better understand the theories consumers have about how AI "thinks" (its intentions, strategies, etc.) and how these lay theories influence how consumers respond to AI.

An integrated view of the four experiences will also maximize the value consumers see in organizations' investments into AI. Some companies find themselves in a catch-22 situation in which users need to reveal personally sensitive information for the company to provide valuable benefits but are unwilling to do so unless they can first experience such benefits (Grafanaki 2017). Drawing on an integrated understanding of AI consumer experiences, it may be possible to articulate and structure alternative customer journeys. For example, companies could provide an initial basic service requiring limited disclosure of personal information and later offer the possibility to access an upgraded version that requires additional individual data. Thus, demands for data capture could ramp up as the company is able to demonstrate the benefits that delegation brings to consumers (RQE6).

Uncharted AI Experiences

Our framework offers a parsimonious template to conceptualize how consumers navigate the disparate consumption contexts powered by AI, including social media, online shopping, and personal virtual assistants. In doing so, the framework identifies experiences relevant to a large variety of industries and products. However, additional consumer experiences that we did not examine are on the rise in specific industry sectors, and future research can examine both industry-specific experiences stemming from existing capabilities and new experiences stemming from emerging capabilities (Figure 1).

Although we theorized the production capability as leading to a delegation experience, this capability can also be used to develop an AI "learning experience" in the education industry. Educators can facilitate knowledge and skill acquisition by letting AI personalize aspects of the learning process, such as producing tailored content and testing materials. Future research can examine how different aspects of the learning experience affect subjective and objective assessments of educational outcomes (RQF1). For example, the risk of engendering negative feelings of being replaced in delegation experiences may have a parallel in learning experiences: if an AI application makes it more challenging to internalize the outcome of the learning process, learning experiences might decrease satisfaction and motivation. This may be especially likely to occur when the learning content is relevant to one's identity: just as consumers tend to resist automation in identity-relevant consumption domains when it prevents the internal attribution of consumption outcomes (Leung, Paolacci, and Puntoni 2018), students may show reactance to AI applications that prevent them from attributing learning to their own talent and effort (RQF2).

Another avenue for future research would be to relax some of our definitional boundaries to include a larger set of consumption contexts. For example, in our discussion of social experiences, we explicitly excluded contexts in which the interaction with AI is the end in itself, such as sex robots and robotic pets, which are increasingly important in the entertainment and health care industries. Such applications of AI's communication capability give rise to an AI "companionship experience" (RQF3). On the one hand, AI companionship experiences are positive because they can provide both cognitive and socioemotional benefits (Broadbent 2017). On the other hand, they can deceive vulnerable consumers such as the elderly and toddlers into believing the AI has feelings and may be used as substitutes for real human connections (Van Oost and Reed 2010). Although the goal of creating robot companions is to simulate an interaction with a real living being, future research could explore at what point the potential for deception and substitution becomes damaging (RQF4).

Finally, emerging AI capabilities may create new consumer AI experiences. In the health care sector, nanorobots are being developed to bring AI solutions directly inside the body, and smartphones, fitness trackers, and smart watches provide essential extensions of cognitive and perceptual capabilities. These products give rise to what researchers have called an AI "cyborg experience" (Giesler and Venkatesh 2005). A cyborg is "a cybernetic organism, a fusion of the organic and the technical forged in particular, historical, cultural practices" (Haraway 1985, p. 51). Thus, cyborg experiences emphasize hybridity, self-enhancement, and often radical self-modification, requiring future research to reexamine longstanding epistemic boundaries between human and machine (Belk 2019). On the one hand, cyborg experiences destabilize human autonomy and control and might fundamentally undermine consumer freedom (Wertenbroch et al. 2020). On the other hand, they collapse dualistic categories like man and machine and might promote consumer empowerment and the circumvention of structural inequalities (RQF5). Lastly, cyborg experiences also raise mind-bending but nonetheless intriguing questions about the kinds of consumption experiences that an AI itself might have (Hoffman and Novak 2018). Consider, in this context, that many firms selling on Amazon today no longer market their offerings directly to consumers but to Amazon-controlled algorithms that act on behalf of these consumers. Future research could explore what marketing strategies are most effective when AI is marketing to AI (RQF6).

Conclusions

AI-enabled products promise to make consumers happier, healthier, and more efficient. Consumer-facing AI products and services such as college admissions software, chatbots, and knowledge aggregators have been heralded as forces for good that can make important contributions to problems such as poverty, lack of education, chronic illness, and racial discrimination. For instance, a World Economic Forum discussion on the future of AI argued that "no one will be left behind" (Zhou 2020).
A key problem with these optimistic celebrations that view AI's alleged accuracy and efficiency as automatic promoters of democracy and human inclusion is their tendency to efface intersectional complexities.

Instead of considering algorithms as neutral tools, AI designers should recognize that their interventions are "inherently political" and interrogate themselves on "the relationship between their design choices, their professional role, and their vision of the good" (Green and Viljoen 2020, p. 26). We hope that our formulation serves as an antidote to the temptation of "technological solutionism" (Morozov 2013) and a useful guide to contrast cases in which targeted consumer segments are subjected to biased outcomes as a result of uncritical firm reliance on AI. We therefore end by noting a key role for the American Marketing Association in shaping the way marketers think about using AI ethically. Although some organizations are beginning to create ethical guidelines around AI, such as the Organisation for Economic Co-operation and Development's "OECD Principles on AI" (Organisation for Economic Co-operation and Development 2020) and the European Commission's "Ethics Guidelines for Trustworthy AI" (European Commission 2020), they are not specifically for marketers. The code of conduct of the American Marketing Association currently includes no mention of AI. We recommend the formation of a taskforce of practitioners and academics from different disciplines to evaluate how professional guidelines could acknowledge the new ethical challenges raised for marketers by the growth of AI.

Acknowledgments

The authors thank Carl Mela and the Marketing Science Institute for organizing the 2018 MSI Scholars conference that provided the opportunity to work together on this project, Christine Moorman and John Deighton for their guidance, participants at the Marketing Science conference in Rome for their feedback, and Gizem Yalcin for her help and feedback.

Author Contributions

Order of authorship was determined by random draw; all authors contributed equally.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

Acquisti, Alessandro, Leslie J. John, and George Loewenstein (2012), "The Impact of Relative Standards on the Propensity to Disclose," Journal of Marketing Research, 49 (2), 160–74.
Adam, Alison (1998), Artificial Knowing: Gender and the Thinking Machine. New York: Routledge.
Agrawal, Ajay, Joshua S. Gans, and Avi Goldfarb (2018), Prediction Machines: The Simple Economics of Artificial Intelligence. Boston: Harvard Business Review Press.
André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Douglas Frank, William Goldstein, et al. (2018), "Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data," Customer Needs and Solutions, 5 (1/2), 28–37.
Bandura, Albert (1977), "Self-Efficacy: Toward a Unifying Theory of Behavioral Change," Psychological Review, 84 (2), 191.
Belk, Russell W. (2019), "Machines and Artificial Intelligence," Journal of Marketing Behavior, 4 (1), 11–30.
Belk, Russell W., Mariam Humayun, and Ahir Gopaldas (2020), "Artificial Life," Journal of Macromarketing, 20 (10), 1–16.
Berg, Tobias, Valentin Burg, Ana Gombović, and Manju Puri (2020), "On the Rise of FinTechs: Credit Scoring Using Digital Footprints," Review of Financial Studies, 33 (7), 2845–97.
Bernthal, Matthew J., David Crockett, and Randall L. Rose (2005), "Credit Cards as Lifestyle Facilitators," Journal of Consumer Research, 32 (1), 130–45.
Bettany, Shona M. and Ben Kerrane (2016), "The Socio-Materiality of Parental Style: Negotiating the Multiple Affordances of Parenting and Child Welfare Within the New Child Surveillance Technology Market," European Journal of Marketing, 50 (11), 2041–66.
Beverland, Michael B. and Francis J. Farrelly (2010), "The Quest for Authenticity in Consumption: Consumers' Purposive Choice of Authentic Cues to Shape Experienced Outcomes," Journal of Consumer Research, 36 (5), 838–56.
Bhattacharjee, Amit, Jonah Berger, and Geeta Menon (2014), "When Identity Marketing Backfires: Consumer Agency in Identity Expression," Journal of Consumer Research, 41 (2), 294–309.
Bone, Sterling A., Glenn L. Christensen, and Jerome D. Williams (2014), "Rejected, Shackled, and Alone: The Impact of Systemic Restricted Choice on Minority Consumers' Construction of Self," Journal of Consumer Research, 41 (2), 451–74.
Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan (2016), "The Social Dilemma of Autonomous Vehicles," Science, 352 (6293), 1573–76.
Booth, Robert (2019), "Computer Says No: The People Trapped in Universal Credit's 'Black Hole'," The Guardian (October 14), https://www.theguardian.com/society/2019/oct/14/computer-says-no-the-people-trapped-in-universal-credits-black-hole.
Borgerson, Janet (2005), "Materiality, Agency, and the Constitution of Consuming Subjects: Insights for Consumer Research," in North American Advances in Consumer Research, Vol. 32, Geeta Menon and Akshay R. Rao, eds. Duluth, MN: Association for Consumer Research, 439–43.
Botti, Simona and Sheena S. Iyengar (2006), "The Dark Side of Choice: When Choice Impairs Social Welfare," Journal of Public Policy & Marketing, 25 (1), 24–38.
Botti, Simona and Ann L. McGill (2011), "The Locus of Choice: Personal Causality and Satisfaction with Hedonic and Utilitarian Decisions," Journal of Consumer Research, 37 (6), 1065–78.
Brakus, J. Joško, Bernd H. Schmitt, and Lia Zarantonello (2009), "Brand Experience: What Is It? How Is It Measured? Does It Affect Loyalty?" Journal of Marketing, 73 (3), 52–68.
Branscombe, Nyla R. and Daniel L. Wann (1994), "Collective Self-Esteem Consequences of Outgroup Derogation When a Valued Social Identity Is on Trial," European Journal of Social Psychology, 24 (6), 641–57.
Brehm, Jack W. (1966), A Theory of Psychological Reactance. New York: Academic Press.
Broadbent, Elizabeth (2017), "Interactions with Robots: The Truths We Reveal About Ourselves," Annual Review of Psychology, 68, 627–52.
Brown, Dalvin (2019), "AI Bias: How Tech Determines If You Land Job, Get a Loan, or End Up in Jail," USA Today (October 2), https://www.usatoday.com/story/tech/2019/10/02/how-artificial-intelligence-bias-can-work-against-you/2417711001/.
Brown, Jennings (2018), "The Amazon Alexa Eavesdropping Nightmare Came True," Gizmodo (December 20), https://gizmodo.com/the-amazon-alexa-eavesdropping-nightmare-came-true-1831231490.
Brunk, Katja H., Markus Giesler, and Benjamin J. Hartmann (2017), "Creating a Consumable Past: How Memory Making Shapes Marketization," Journal of Consumer Research, 44 (6), 1325–42.
Campbell, Margaret C., Gina S. Mohr, and Peeter W. Verlegh (2013), "Can Disclosures Lead Consumers to Resist Covert Persuasion? The Important Roles of Disclosure Timing and Type of Response," Journal of Consumer Psychology, 23 (4), 483–95.
Carmon, Ziv, Rom Y. Schrift, Klaus Wertenbroch, and Haiyang Yang (2020), "Designing AI Systems That Customers Won't Hate," MIT Sloan Management Review, 61 (2), 1–6.
Castelo, Noah, Maarten W. Bos, and Donald R. Lehmann (2019), "Task-Dependent Algorithm Aversion," Journal of Marketing Research, 56 (5), 809–25.
Chan, Cindy, Jonah Berger, and Leaf van Boven (2012), "Identifiable but Not Identical: Combining Social Identity and Uniqueness Motives in Choice," Journal of Consumer Research, 39 (3), 561–73.
Chernev, Alexander, Ulf Böckenholt, and Joseph Goodman (2015), "Choice Overload: A Conceptual Review and Meta-analysis," Journal of Consumer Psychology, 25 (2), 333–58.
Clegg, Alicia (2020), "How to Design AI That Eliminates Disability Bias," Financial Times (January 26), https://www.ft.com/content/f5bd21da-33b8-11ea-a329-0bcf87a328f2.
Crenshaw, Kimberle (1989), "Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics," University of Chicago Legal Forum, Vol. 1989, Article 8.
Crocker, Jennifer and Brenda Major (1989), "Social Stigma and Self-Esteem: The Self-Protective Properties of Stigma," Psychological Review, 96 (4), 608–30.
Cutright, Keisha M. and Adriana Samper (2014), "Doing It the Hard Way: How Low Control Drives Preferences for High-Effort Products and Services," Journal of Consumer Research, 41 (3), 730–45.
Dahl, Darren W. and C. Page Moreau (2007), "Thinking Inside the Box: Why Consumers Enjoy Constrained Creative Experience," Journal of Marketing Research, 44 (3), 357–69.
DeCharms, Richard (1968), Personal Causation. New York: Academic Press.
Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey (2016), "Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them," Management Science, 64 (3), 1155–70.
Dormehl, Luke (2014a), "Algorithms Are Great and All, but They Can Also Ruin Lives," Wired (November 19), https://www.wired.com/2014/11/algorithms-great-can-also-ruin-lives/.
Dormehl, Luke (2014b), "Facial Recognition: Is the Technology Taking Away Your Identity?" The Guardian (May 4), https://www.theguardian.com/technology/2014/may/04/facial-recognition-technology-identity-tesco-ethical-issues.
The Economist (2012), "The One That Didn't Get Away," (June 23), https://www.economist.com/science-and-technology/2012/06/23/the-one-that-didnt-get-away.
Epp, Amber M., Hope Jensen Schau, and Linda L. Price (2014), "The Role of Brands and Mediating Technologies in Assembling Long-Distance Family Practices," Journal of Marketing, 78 (3), 81–101.
Etkin, Jordan (2016), "The Hidden Cost of Personal Quantification," Journal of Consumer Research, 42 (6), 967–84.
Eubanks, Virginia (2018), Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press.
European Commission, "Ethics Guidelines for Trustworthy AI," (accessed September 21, 2020), https://ec.europa.eu/futurium/en/ai-alliance-consultation.
Farooq, Umer and Jonathan Grudin (2016), "Human–Computer Integration," IX Interactions, 6 (November/December), http://interactions.acm.org/archive/view/november-december-2016/human-computer-integration.
Fishbach, Ayelet and Jinhee Choi (2012), "When Thinking About Goals Undermines Goal Pursuit," Organizational Behavior and Human Decision Processes, 118 (2), 99–107.
Fitch, Asa (2020), "Amazon Suspends Police Use of Its Facial-Recognition Technology," The Wall Street Journal (June 10), https://www.wsj.com/articles/amazon-suspends-police-use-of-its-facial-recognition-technology-11591826559.
Fitzsimons, Gavan and Donald R. Lehmann (2004), "Reactance to Recommendations: When Unsolicited Advice Yields Contrary Responses," Marketing Science, 23 (1), 82–94.
Flaxman, Seth, Sharad Goel, and Justin M. Rao (2016), "Filter Bubbles, Echo Chambers, and Online News Consumption," Public Opinion Quarterly, 80 (S1), 298–320.
Fredrickson, Barbara L. and Tomi-Ann Roberts (1997), "Objectification Theory: Toward Understanding Women's Lived Experiences and Mental Health Risks," Psychology of Women Quarterly, 21 (2), 173–206.
Friestad, Marian and Peter Wright (1994), "The Persuasion Knowledge Model: How People Cope with Persuasion Attempts," Journal of Consumer Research, 21 (1), 1–31.
Fuchs, Christoph, Emanuela Prandelli, and Martin Schreier (2010), "The Psychological Effects of Empowerment Strategies on Consumers' Product Demand," Journal of Marketing, 74 (1), 65–79.
Fukuyama, Francis (2002), Our Posthuman Future. New York: Farrar, Straus, and Giroux.
Gai, Phyliss Jia and Anne-Kathrin Klesse (2019), "Making Recommendations More Effective Through Framings: Impacts of User- Versus Item-Based Framings on Recommendation Click-Throughs," Journal of Marketing, 83 (6), 61–75.
Giesler, Markus and Ashlee Humphreys (2007), "Tensions Between Access and Ownership in the Media Marketplace," in NA–Advances in Consumer Research, Vol. 34, Gavan Fitzsimons and Vicki Morwitz, eds. Duluth, MN: Association for Consumer Research, 696–700.
Giesler, Markus and Eileen Fischer (2018), "IoT Stories: The Good, the Bad and the Freaky," Marketing Intelligence Review, 10 (2), 24–28.
Giesler, Markus and Alladi Venkatesh (2005), "Reframing the Embodied Consumer as Cyborg: A Posthumanist Epistemology of Consumption," in NA–Advances in Consumer Research, Vol. 32, Geeta Menon and Akshay R. Rao, eds. Duluth, MN: Association for Consumer Research, 661–69.
Gill, Tripat (2020), "Blame It on the Self-Driving Car: How Autonomous Vehicles Can Alter Consumer Morality," Journal of Consumer Research, 47 (2), 272–91.
Grafanaki, Sofia (2017), "Autonomy Challenges in the Age of Big Data," Fordham Intellectual Property, Media & Entertainment Law Journal, 27, 803.
Grandterr (2019), "Why Are Recommendations So Terrible," Spotify Community forum, https://community.spotify.com/t5/iOS-iPhone-iPad/Why-are-recommendations-so-terrible/td-p/4769866.
Granulo, Armin, Christoph Fuchs, and Stefano Puntoni (2020), "Preference for Human (vs. Robotic) Labor Is Stronger in Symbolic Consumption Contexts," Journal of Consumer Psychology (published online July 18), DOI:10.1002/jcpy.1181.
Green, Ben and Salomé Viljoen (2020), "Algorithm Realism: Expanding the Boundaries of Algorithmic Thought," in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 19–31.
Grudin, Jonathan (2017), From Tool to Partner: The Evolution of Human–Computer Interaction. San Rafael, CA: Morgan and Claypool Publishers.
Habermas, Jurgen (2003), The Future of Human Nature. London: Polity.
Hadi, Rhonda, Lauren Bock, Sandra Robinson, and Jessie Du (2020), "When Alexa Lets Us Down: Conversational Failures with Female Artificial Intelligence Lead to Greater Expressed Frustration," working paper.
Harari, Yuval N. (2017), Homo Deus: A Brief History of Tomorrow. London: Penguin Publishing.
Haraway, Donna (1985), "A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s," Socialist Review, (80), 65–107.
Hart, Christopher W., James L. Heskett, and W. Earl Sasser Jr. (1990), "The Profitable Art of Service Recovery," Harvard Business Review, 68 (4), 148–56.
Haslam, Nick (2006), "Dehumanization: An Integrative Review," Personality and Social Psychology Review, 10 (3), 252–64.
Hayek, Friedrich (1945), "The Use of Knowledge in Society," The American Economic Review, 35, 519–30.
Hayles, N. Katherine (1999), How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Hayter, Irena (2017), "Robotics, Science Fiction, and the Search for the Perfect Artificial Woman," The Conversation (October 24), https://theconversation.com/robotics-science-fiction-and-the-search-for-the-perfect-artificial-woman-86092.
Hill, Kashmir (2017), "How Facebook Outs Sex Workers," Gizmodo (October 11), https://gizmodo.com/how-facebook-outs-sex-workers-1818861596.
Hill, Ronald and Eesha Sharma (2020), "Consumer Vulnerability," Journal of Consumer Psychology, 30 (3), 551–70.
Hoffman, Donna L. and Thomas P. Novak (2018), "Consumer and Object Experience in the Internet of Things: An Assemblage Theory Approach," Journal of Consumer Research, 44 (6), 1178–1204.
Horcher, Gary (2018), "Woman Says Her Amazon Device Recorded Private Conversation, Sent It Out to Random Contact," KIRO 7 News (May 25), https://www.kiro7.com/news/local/woman-says-her-amazon-device-recorded-private-conversation-sent-it-out-to-random-contact/755507974/.
Hosanagar, Kartik (2019), A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control. New York: Viking.
Hume, Kathryn (2018), "Designing AI to Make Decisions," August 10, 2018, in HBR IdeaCast, produced by Mary Dooe and Curt Nickisch, podcast, https://hbr.org/podcast/2018/08/designing-ai-to-make-decisions.html.
Humphreys, Ashlee (2010), "Semiotic Structure and the Legitimation of Consumption Practices: The Case of Casino Gambling," Journal of Consumer Research, 37 (3), 490–510.
Ivarsflaten, Elisabeth, Scott Blinder, and Robert Ford (2010), "The Anti-Racism Norm in Western European Immigration Politics: Why We Need to Consider It and How to Measure It," Journal of Elections, Public Opinion, and Parties, 20 (4), 421–45.
Iyengar, Sheena and Mark Lepper (2000), "When Choice Is Demotivating: Can One Desire Too Much of a Good Thing?" Journal of Personality and Social Psychology, 79 (6), 995–1006.
JWT Intelligence Wunderman Thompson (2016), "Control Shift," (accessed September 22, 2020), https://www.jwtintelligence.com/trend-reports/control-shift/.
Kathayat, Vinod (2019), "How Netflix Uses AI for Content Creation and Recommendation," Medium (September 28), https://medium.com/swlh/how-netflix-uses-ai-for-content-creation-and-recommendation-c1919efc0af4.
Kim, Sara, Rocky Peng Chen, and Ke Zhang (2016), "Anthropomorphized Helpers Undermine Autonomy and Enjoyment in Computer Games," Journal of Consumer Research, 43 (2), 282–302.
Kim, Sara and Ann L. McGill (2011), "Gaming with Mr. Slot or Gaming the Slot Machine? Power, Anthropomorphism, and Risk Perception," Journal of Consumer Research, 38 (1), 94–107.
Kuchler, Hannah (2020), "Can We Ever Trust Google with Our Health Data?" Financial Times (January 20), https://www.ft.com/content/4ade8884-1b40-11ea-97df-cc63de1d73f4.
Kumar, V., Barath Rajan, Rajkumar Venkatesan, and Jim Lecinski (2019), "Understanding the Role of Artificial Intelligence in Personalized Engagement Marketing," California Management Review, 61 (4), 135–55.
Kunda, Ziva (1990), "The Case for Motivated Reasoning," Psychological Bulletin, 108 (3), 480–98.
Kuniavsky, Mike (2010), Smart Things: Ubiquitous Computing User Experience Design. Burlington, MA: Morgan Kauffman Elsevier.
Landau, Mark J., Aaron C. Kay, and Jennifer A. Whitson (2015), "Compensatory Control and the Appeal of a Structured World," Psychological Bulletin, 141 (3), 694–722.
Lee, Leonard, Shane Frederick, and Dan Ariely (2006), "Try It, You'll Like It: The Influence of Expectation, Consumption, and Revelation on Preferences for Beer," Psychological Science, 17 (12), 1054–58.
Lemon, Katherine N. and Peter C. Verhoef (2016), "Understanding Customer Experience Throughout the Customer Journey," Journal of Marketing, 80 (6), 69–96.
Leotti, Lauren A., Sheena S. Iyengar, and Kevin N. Ochsner (2010), "Born to Choose: The Origins and Value of the Need for Control," Trends in Cognitive Sciences, 14 (10), 457–63.
Leung, Eugina, Gabriele Paolacci, and Stefano Puntoni (2018), "Man Versus Machine: Resisting Automation in Identity-Based Consumer Behavior," Journal of Marketing Research, 55 (6), 818–31.
Longoni, Chiara, Andrea Bonezzi, and Carey Morewedge (2019), "Resistance to Medical Artificial Intelligence," Journal of Consumer Research, 46 (3), 407–34.
Luo, Xueming, Siliang Tong, Zheng Fang, and Zhe Qu (2019), "Machines Versus Humans: The Impact of AI Chatbot Disclosure on Customer Purchases," Marketing Science, 38 (6), 937–47.
MacCarthy, Mark (2019), "An Examination of the Algorithmic Accountability Act of 2019," (accessed September 22, 2020), https://ssrn.com/abstract=3615731.
Marketing Science Institute (2018), Research Priorities 2018–2020. Cambridge, MA: Marketing Science Institute.
Marks, Susan (2005), Finding Betty Crocker: The Secret Life of America's First Lady of Food. New York: Simon & Schuster.
Markus, Hazel Rose and Barry Schwartz (2010), "Does Choice Mean Freedom and Well-Being?" Journal of Consumer Research, 37 (2), 344–55.
Martin, Kelly D. and Patrick E. Murphy (2017), "The Role of Data Privacy in Marketing," Journal of the Academy of Marketing Science, 45, 135–55.
Matz, Sandra C., Michael Kosinski, Gideon Nave, and David J. Stillwell (2017), "Psychological Targeting as an Effective Approach to Digital Mass Persuasion," Proceedings of the National Academy of Sciences, November, 12714–19.
Max, D.T. (2019), "Paging Dr. Robot: A Pathbreaking Surgeon Prefers to Do His Cutting by Remote Control," The New Yorker (September 23), https://www.newyorker.com/magazine/2019/09/30/paging-dr-robot.
Me.me (2020), "Microsoft Creates AI Bot—Internet Immediately Turns It Racist," (accessed September 21), https://me.me/i/damon-daymin-tayandyou-what-race-is-the-most-evil-to-45baf158bb7a40c68b4bbb6c3561f1b3.
Melumad, Shiri and Robert Meyer (2020), "Full Disclosure: How Smartphones Enhance Consumer Self-Disclosure," Journal of Marketing, 84 (3), 28–45.
Mende, Martin, Maura L. Scott, Jenny van Doorn, Dhruv Grewal, and Ilana Shanks (2019), "Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses," Journal of Marketing Research, 56 (4), 535–56.
Mick, David G. and Susan Fournier (1998), "Paradoxes of Technology: Consumer Cognizance, Emotions, and Coping Strategies," Journal of Consumer Research, 25 (2), 123–43.
Milner, Greg (2016), "Death by GPS: Are Satnavs Changing Our Brains?" The Guardian (June 25), https://www.theguardian.com/technology/2016/jun/25/gps-horror-stories-driving-satnav-greg-milner.
Mittal, Chiraag and Vladas Griskevicius (2014), "Sense of Control Under Uncertainty Depends on People's Childhood Environment: A Life History Theory Approach," Journal of Personality and Social Psychology, 107 (4), 621–37.
Moon, Youngme and Clifford Nass (1998), "Are Computers Scapegoats? Attributions of Responsibility in Human–Computer Interaction," International Journal of Human–Computer Studies, 49, 79–94.
Moore, Elaine and Hannah Murphy (2019), "Facebook's Fake Numbers Problem," Financial Times (November 17), https://www.ft.com/content/98454222-fef1-11e9-b7bc-f3fa4e77dd47.
Morozov, Evgeny (2013), To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
Mou, Yi and Kun Xu (2017), "The Media Inequality: Comparing the Initial Human–Human and Human–AI Social Interactions," Computers in Human Behavior, 72, 432–40.
Nass, Clifford and Youngme Moon (2000), "Machines and Mindlessness: Social Responses to Computers," Journal of Social Issues, 56 (1), 81–103.
Noble, Safiya Umoja (2018), Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
Norton, Michael I., Daniel Mochon, and Dan Ariely (2012), "The IKEA Effect: When Labor Leads to Love," Journal of Consumer Psychology, 22 (3), 453–60.
O'Neil, Cathy (2016), Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books.
Organisation for Economic Co-operation and Development, "OECD Principles on AI," (accessed September 21, 2020), https://www.oecd.org/going-digital/ai/principles/.
Owens, Eric W., Richard J. Behun, Jill C. Manning, and Rory C. Reid (2012), "The Impact of Internet Pornography on Adolescents: A Review of the Research," Sexual Addiction & Compulsivity, 19 (1/2), 99–122.
Oyserman, Daphna (2009), "Identity-Based Motivation and Consumer Behavior," Journal of Consumer Psychology, 19 (3), 250–60.
Polanyi, Michael (1948), "Planning and Spontaneous Order," The Manchester School, 16 (3), 237–68.
Porter, Theodore (1996), Trust in Numbers. Princeton, NJ: Princeton University Press.
Rawlinson, Kevin (2019), "Digital Assistants like Siri and Alexa Entrench Gender Biases, Says UN," The Guardian (May 21), https://www.theguardian.com/technology/2019/may/22/digital-voice-assistants-siri-alexa-gender-biases-unesco-says.
Reed II, Americus, Mark R. Forehand, Stefano Puntoni, and Luk Warlop (2012), "Identity-Based Consumer Behavior," International Journal of Research in Marketing, 29 (4), 310–21.
Richmond, Cedric L. (2018), "Open Letter to Jeffrey Bezos," Congressional Black Caucus (May 24), https://cbc.house.gov/uploaded
Richmond, Cedric L. (2018), “Open Letter to Jeffrey Bezos,” Congressional Black Caucus (May 24), https://cbc.house.gov/uploadedfiles/final_cbc_amazon_facial_recognition_letter.pdf.
Rose, Steve (2015), “Ex Machina and Sci-Fi’s Obsession with Sexy Female Robots,” The Guardian (January 15), https://www.theguardian.com/film/2015/jan/15/ex-machina-sexy-female-robots-scifi-film-obsession.
Samuel, Sigal (2019a), “10 Things We Should All Demand from Big Tech Right Now,” Vox (May 29), https://www.vox.com/the-highlight/2019/5/22/18273284/ai-algorithmic-bill-of-rights-accountability-transparency-consent-bias.
Samuel, Sigal (2019b), “Alexa, Are You Making Me Sexist?” Vox (June 12), https://www.vox.com/future-perfect/2019/6/12/18660353/siri-alexa-sexism-voice-assistants-un-study.
Sassen, Saskia (2014), Expulsions. Cambridge, MA: Harvard University Press.
Schmitt, Bernd (2019), “From Atoms to Bits and Back: A Research Curation on Digital Technology and Agenda for Future Research,” Journal of Consumer Research, 46 (4), 825–32.
Schumpeter, Joseph A. (1942), Capitalism, Socialism, and Democracy. New York: Harper.
Seabrook, John (2019), “The Next Word: Where Will Predictive Text Take Us?” The New Yorker (October 14), https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker.
Seaver, Nick (2019), “Knowing Algorithms,” in digitalSTS: A Field Guide for Science and Technology Studies, Janet Vertesi and David Ribes, eds. Princeton, NJ: Princeton University Press, 412–22.
Snow, Jacob (2018), “Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots,” ACLU (July 26), https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.
Suchman, Lucy, Celia Roberts, and Myra J. Hird (2011), “Subject Objects,” Feminist Theory, 12 (2), 119–45.
Summers, Christopher A., Robert W. Smith, and Rebecca Walker Reczek (2016), “An Audience of One: Behaviorally Targeted Ads as Implied Social Labels,” Journal of Consumer Research, 43 (1), 156–78.
Sunstein, Cass R. (2015), Choosing Not to Choose: Understanding the Value of Choice. Oxford: Oxford University Press.
Sydell, Laura (2018), “The Push for a Gender-Neutral Siri,” NPR (July 9), https://www.npr.org/2018/07/09/627266501/the-push-for-a-gender-neutral-siri.
Teich, David A. (2020), “AI and the Continuation of Gender Bias in Communications,” Forbes (May 11), https://www.forbes.com/sites/davidteich/2020/05/11/ai-and-the-continuation-if-gender-bias-in-communications.
Thaler, Richard H. and Shlomo Benartzi (2004), “Save More Tomorrow™: Using Behavioral Economics to Increase Employee Saving,” Journal of Political Economy, 112 (S1), S164–87.
Tuk, Mirjam A., Peeter W. Verlegh, Ale Smidts, and Daniel H. Wigboldus (2009), “Sales and Sincerity: The Role of Relational Framing in Word-of-Mouth Marketing,” Journal of Consumer Psychology, 19 (1), 38–47.
Turkle, Sherry (2008), “Always-On/Always-On-You: The Tethered Self,” in Handbook of Mobile Communication Studies, James E. Katz, ed. Cambridge, MA: MIT Press.
Turner, John C. and Katherine J. Reynolds (2011), “Self-Categorization Theory,” in Handbook of Theories in Social Psychology, Vol. 2, Paul A.M. van Lange, Arie W. Kruglanski, and E. Tory Higgins, eds., 399–417.
Van Doorn, Jenny, Martin Mende, Stephanie M. Noble, John Hulland, Amy L. Ostrom, and Dhruv Grewal (2017), “Domo Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and Customers’ Service Experiences,” Journal of Service Research, 20 (1), 43–58.
Van Oost, Ellen and Darren Reed (2010), “Towards a Sociological Understanding of Robots as Companions,” in Human–Robot Personal Relationships: Lecture Notes of the Institute for Computer Sciences, Social Informatics, and Telecommunications Engineering, Ellen van Oost and Darren Reed, eds. Berlin: Springer, 59.
Van Osselaer, Stijn M.J., Christoph Fuchs, Martin Schreier, and Stefano Puntoni (2020), “The Power of Personal,” Journal of Retailing, 96 (1), 88–100.
Walker, Kristen L. (2016), “Surrendering Information Through the Looking Glass: Transparency, Trust, and Protection,” Journal of Public Policy & Marketing, 35 (1), 144–58.
Waytz, Adam, Joy Heafner, and Nicholas Epley (2014), “The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle,” Journal of Experimental Social Psychology, 52 (May), 113–17.
Wertenbroch, Klaus, Rom Y. Schrift, Joseph W. Alba, Alixandra Barasch, Amit Bhattacharjee, Markus Giesler, et al. (2020), “Autonomy in Consumer Choice,” Marketing Letters (published online June 28), DOI: 10.1007/s11002-020-09521-z.
West, Mark, Rebecca Kraut, and Han Ei Chew (2019), I’d Blush if I Could: Closing Gender Divides in Digital Skills Through Education. EQUALS.
Wierenga, Berend and Peter A.M. Oude Ophuis (1997), “Marketing Decision Support Systems: Adoption, Use, and Satisfaction,” International Journal of Research in Marketing, 14 (3), 275–90.
Wong, Christine (2019), “Keeping It Real: Preserving the Humanity of Customer Experience in the Age of AI,” Futurithmic (April 1), https://www.futurithmic.com/2019/04/01/keeping-it-real-preserving-humanity-customer-experience-in-age-of-ai/.
Wright, Peter (2002), “Marketplace Metacognition and Social Intelligence,” Journal of Consumer Research, 28 (4), 677–82.
Yu, Allen (2019), “How Netflix Uses AI, Data Science, and Machine Learning—From a Product Perspective,” Medium (February 27), https://becominghuman.ai/how-netflix-uses-ai-and-machine-learning-a087614630fe.
Zhou, Bowen (2020), “Here Are 3 Big Concerns Surrounding AI—and How to Deal with Them,” World Economic Forum (February 17), https://www.weforum.org/agenda/2020/02/where-is-artificial-intelligence-going/.
Zou, James and Londa Schiebinger (2019), “AI Can Be Sexist and Racist—It’s Time to Make It Fair,” Nature, 559 (7714), 324–26.
Zuboff, Shoshana (2019), The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Profile Books.
