
Information and Organization 34 (2024) 100498

journal homepage: www.elsevier.com/locate/infoandorg

From worker empowerment to managerial control: The devolution of AI tools’ intended positive implementation to their negative consequences

Emmanuel Monod a, Anne-Sophie Mayer b, Detmar Straub c, Elisabeth Joyce d,*, Jiayin Qi e

a Institute of AI and Change Management, Shanghai University of International Business and Economics, 620 Gubei Road, 200336 Shanghai, China
b KIN Center for Digital Innovation, Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, the Netherlands
c J. Mack Robinson Distinguished Professor of Information Systems, Georgia State University, 35 Broad Street, Atlanta, GA 30303-3083, USA
d Pennsylvania Western University-Edinboro, 323 Reeder Hall, 219 Meadville St., Edinboro, PA 16444, USA
e The Cyberspace Institute of Advanced Technology, Guangzhou University, 230 Wai Huan Xi Road, Guangzhou 510006, PR China

ARTICLE INFO

Keywords:
Artificial intelligence
Management control
Augmentation
Automation
Design approach
Surveillance
Coordination
Domination

ABSTRACT

AI tools are increasingly being deployed in organizations with the promise to support workers, enable faster and more accurate processes, and thus to contribute to enhanced organizational outcomes. However, in practice, the introduction of AI tools often fails to meet these expectations and results in negative consequences, such as worker resistance and dissatisfaction. Yet we have little understanding of the process of how and why initially positive design intentions of AI tools result in negative consequences. Building on a qualitative in-depth case study of a Chinese firm introducing an AI tool in sales, we found that whereas the AI tool’s initial design seemingly intended to lead to salespeople’s empowerment and first achieved respective outcomes, over time the tool was appropriated for managerial control. We show that this devolution emerged organically from a growing managerial awareness of the affordances that the AI tool offered managers to perform their work better. Our study contributes to the literature on AI by highlighting the potential dangers of AI tools and emphasizing the importance of including workers in the AI tool’s design and implementation phases.

1. Introduction

With rapid technological advancement, organizations increasingly introduce artificial intelligence (AI) tools across a wide spectrum of areas and tasks, including customer service (e.g., Nicolescu & Tudorache, 2022; Xu, Shieh, van Esch, & Ling, 2020), human resources (e.g., Van den Broek, Sergeeva, & Huysman, 2021), marketing (e.g., Jain & Aggarwal, 2020), and cybersecurity (e.g., Sarker, Furhad, & Nowrozy, 2021). AI tools rely on algorithms that harness statistical models to predict outcomes through the analysis of large and complex datasets (Berente, Gu, Recker, & Santhanam, 2021). In contrast to traditional information systems, AI tools are characterized by their ability to learn, to autonomously perform tasks, and to make decisions that previously required human expertise and input (Benbya, Davenport, & Pachidi, 2020; Berente et al., 2021; Faraj, Pachidi, & Sayegh, 2018).

* Corresponding author.
E-mail addresses: a.s.mayer@vu.nl (A.-S. Mayer), dstraub@gsu.edu (D. Straub), ejoyce@pennwest.edu (E. Joyce).

https://doi.org/10.1016/j.infoandorg.2023.100498
Received 11 October 2021; Received in revised form 14 October 2023; Accepted 25 November 2023
Available online 14 December 2023
1471-7727/© 2023 Elsevier Ltd. All rights reserved.

Based on these unique characteristics, AI tools promise novel benefits for organizations by providing more efficient, accurate, and
objective processes and decisions (Agrawal, Gans, & Goldfarb, 2018; Davenport & Kirby, 2016; Raisch & Krakowski, 2021). Moreover,
AI tools are associated with various benefits for workers, such as the empowerment and emancipation of employees (e.g., Strich,
Mayer, & Fiedler, 2021), the relief from simple and repetitive tasks (e.g., Acemoglu & Restrepo, 2018), and the opportunity to provide
enhanced customer service by gaining superior insights into customers’ needs (e.g., Mayer, Strich, & Fiedler, 2020).
Despite AI’s potential and associated benefits, prior research has pointed to serious issues that have often occurred as unintended
consequences of organizations’ goals to augment employees’ work with AI. Prominent examples of such negative consequences include
workers’ loss of autonomy (e.g., Mayer et al., 2020), the undermining of professional expertise (e.g., Anthony, 2021; Lebovitz, Lifshitz-
Assaf, & Levina, 2022), and attempts by workers to outsmart the AI tool (e.g., Strich et al., 2021). Interestingly, in some cases positive
and negative consequences have seemed to occur simultaneously. For instance, Strich et al. (2021) showed how the same AI tool
caused the deskilling and degradation of highly skilled loan consultants while simultaneously empowering low-skilled employees.
These opposing and sometimes even paradoxical findings emphasize that despite the often positive intentions when designing a
technology, the actual use of a technology in organizations may result in varying outcomes (DeLone & McLean, 1992; DeSanctis & Poole, 1994; Petter, DeLone, & McLean, 2008). For instance, while AI tools in customer service are designed to relieve workers from
simple and repetitive tasks (Acemoglu & Restrepo, 2018), their use may result in the increased dissatisfaction of workers, as tasks
become more monotonous and mechanical (Umair, Conboy, & Whelan, 2019; Van Nimwegen, Burgos, van Oostendorp, & Schilf,
2006). However, whereas an extensive stream of literature has emphasized that the gap between a technology’s intended design goals
and its actual use is a major trigger for unintended consequences (e.g., Bailey & Barley, 2020), we know little about the process of how
and why initially positive design intentions of AI tools result in negative consequences. Moreover, we have little understanding of the extent to which these tensions between positive and negative consequences may be reconciled. These questions are critical to gaining more in-depth insights into the implications of AI tools in organizations and to enhancing our understanding of how potential negative consequences may be prevented. Our study therefore seeks to explore the following research questions: How and why do positive intentions of AI tool implementations have negative consequences? How and to what extent might tensions between AI’s positive and negative consequences be reconciled in practice?
To explore these research questions, we conducted an in-depth case study of a large Chinese firm that introduced an AI sales assistant, i.e., an AIA, in its customer service (CS) unit to support salespeople’s work. Whereas salespeople were previously responsible
for handling all customer requests and inquiries on their own, they are now supported by an AIA that is able to generate answers to
customer requests and questions autonomously. Moreover, the AIA is able to generate novel insights about customers, which promises
to help salespeople in providing more accurate and individualized services to their customers. Building on extensive archival data and
36 interviews with CS managers, we show that while the AIA was designed to support CS salespeople through automation and
augmentation, in practice it devolved into a tool to control and manage them to a greater extent.

2. Theoretical background

2.1. AI in organizations

AI tools have emerged as a transformative force reshaping the landscape of organizations across industries and domains. With
increasing technological advancement, AI tools have been introduced for a wide range of domains and tasks, including critical
decision-making in hiring (Van den Broek, Sergeeva, & Huysman, 2021), legal matters (Christin, 2020), policing (Brayne, 2017), and
medical diagnoses (Lebovitz et al., 2022). AI refers to the simulation of human tasks and decision-making practices by machines
(Berente et al., 2021), including learning, reasoning, and problem-solving. Building on learning algorithms, AI tools are able to process
large amounts of data, recognize patterns, and adapt their responses based on historical behavior and experiences (Benbya et al., 2020;
Faraj et al., 2018). This learning capability sets AI apart as a tool that amplifies human capabilities instead of merely automating
routine tasks (Berente et al., 2021).
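To make this learning capability concrete, the following minimal Python sketch (our illustration; none of the cited works provide code, and all data, feature names, and thresholds here are hypothetical) shows a classifier whose predictions adapt as batches of historical behavior accumulate:

```python
# Minimal illustration of "learning from historical behavior": a
# classifier updated incrementally as new labeled interactions arrive,
# so its predictions adapt over time. All data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # logistic regression, trained online
classes = np.array([0, 1])               # e.g., 0 = no purchase, 1 = purchase

def update_with_history(batch_features, batch_outcomes):
    """Fold one batch of observed behavior into the existing model."""
    model.partial_fit(batch_features, batch_outcomes, classes=classes)

rng = np.random.default_rng(0)
for _ in range(10):                      # each "day" of interactions
    X = rng.normal(size=(50, 4))         # hypothetical interaction features
    y = (X[:, 0] + 0.1 * rng.normal(size=50) > 0).astype(int)
    update_with_history(X, y)

print(model.predict(rng.normal(size=(3, 4))))   # adapted predictions
```

The point is only the incremental loop: each batch refines the existing model rather than retraining it from scratch, which is the sense in which such tools “adapt their responses based on historical behavior.”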
AI tools can be introduced either to automate human work by keeping the human worker out of the loop or to augment human work
by using AI tools to support and assist workers in their tasks and decision-making practices (Lebovitz et al., 2022; Raisch & Krakowski,
2021). Whereas initial predictions suggested that the advancement of AI tools would replace many jobs, prior research indicates that AI
tools are primarily introduced to augment professional work and, thus, to combine human and machine capabilities (Benbya et al.,
2020; Benbya et al., 2021).

2.2. Benefits of AI

The use of AI tools for augmenting professional work promises to deliver several benefits for organizations and workers. On an
organizational level, AI tools have the potential to enhance the efficiency, accuracy, and objectivity of processes and decision-making
practices (Jarrahi, 2018). As a result, organizations aim to save costs (Li, Bonn, & Ye, 2019; Nam, Dutt, Chathoth, Daghfous, & Khan,
2021), gain competitive advantages (Makridakis, 2017), and increase customer satisfaction (Prentice, Dominique Lopes, & Wang,
2020). On an individual level, AI tools are associated with the promise to enhance human capabilities by “combining computational
potential of human beings and computers” (Tegmark, 2017, p. 1) with a superior outcome that would be impossible if either humans or
computers worked independently. The AI discerns patterns in the data, which are not necessarily apparent to the worker (Kellogg,
Valentine, & Christin, 2020), and suggests actions in response to these patterns (Rosenblat, 2018). This approach is what Citron and
Pasquale (2014) referred to as “human-in-the-loop,” where the AI identifies patterns and produces recommendations out of those patterns, but the human implements the recommendations only if they are deemed appropriate. This human-AI
collaboration is essential to ensure that the AI’s recommendations are accurate (Wilson & Daugherty, 2018). Citron and Pasquale
(2014) used AI-derived credit scoring as an example of necessary human oversight. They dealt with a case where algorithms used to calculate credit scores were granted undue weight, thereby creating confusion and uncertainty about how the scores were calculated or how to address their inconsistencies.
Moreover, AI tools promise to relieve workers from repetitive and simple tasks, thereby enabling the workers to focus on more
complex and creative tasks (Zanzotto, 2019). At the same time, with rapid technological advancement, AI tools are also increasingly
capable of making complex decisions and performing tasks that require emotional intelligence and creativity (Brynjolfsson & Mitchell, 2017).
As a result, prior studies have emphasized that AI tools can contribute to the upskilling and empowerment of low-skilled workers who
are enabled by the AI tool to work in more prestigious jobs that previously required specialized expertise (e.g., Jarrahi, 2018; Mayer,
Strich, & Fiedler, 2020). For instance, Jarrahi (2018) showed that AI tools can support humans in their decision-making such that
workers can process highly complex data under conditions of extreme uncertainty.

2.3. Downsides of AI

Extant literature indicates that the introduction and use of AI in organizational settings also come with potential dangers, and
therefore have a negative side (e.g., Faraj et al., 2018; Giermindl et al., 2022; Mayer et al., 2020). On an organizational level, these
potential dangers often relate to ethical dilemmas caused by AI’s opacity and, thus, a lack of explainability (Burrell, 2016; Hafermalz &
Huysman, 2019), unintended biases and discrimination (Mayer et al., 2020), and employee resistance to using AI tools (e.g., Van den
Broek, Sergeeva, & Huysman, 2021). On an individual level, potential negative aspects of AI are often associated with the process of
decoupling. Decoupling, i.e., the transformation of tasks into more simplified units, has been witnessed over and over since the advent
of Taylorism in the early 20th century and now has dramatic ramifications in the interactions of intelligent machines and employees.
As Braverman (1998) noted:
The mass of workers gain nothing from … the decline in their command over the labor process …[and] the increasing command
on the part of managers and engineers. On the contrary, not only does their skill fall in an absolute sense ([losing] craft and
traditional abilities without gaining new abilities adequate to compensate the loss), but it falls even more in a relative sense. The
more science is incorporated into the labor process, the more sophisticated an intellectual product the machine becomes, the
less control … the worker has (pp. 294–295).
In decoupling the worker from the larger unit of work, the worker becomes more fungible (Berente et al., 2021). At the same time,
since the work is transformed into smaller units, it requires more focused skill sets, and consequently, workers become more replicable
and portable (Robert et al., 2020). This shift comes about because the task becomes more important than the worker. In addition, due
to this emphasis on the task instead of the person, organizational indifference to the worker becomes institutionalized such that
workers are made legible, controllable, and available on demand (Kellogg et al., 2020).
Moreover, an associated danger of AI tools is often related to AI’s opacity, which hinders workers and sometimes even developers
from understanding how the AI tools have derived a certain outcome (Burrell, 2016). This opacity often results in workers questioning
their expertise (e.g., Lebovitz et al., 2022), blindly following an AI output (e.g., Anthony, 2021) or, conversely, utterly rejecting and
ignoring that output (e.g., Lebovitz et al., 2022). The use of AI tools therefore poses a risk of undermining workers’ expertise (Pakarinen & Huising, 2023).
Finally, the use of AI tools can further pose a danger to workers through domination and control, which is apparent in such activities as tracking and tailoring workers’ behavior (Kellogg et al., 2020). In tracking workers, AI tools use learning algorithms to break
down tasks that employees perform and then re-compose those tasks into a working whole. The algorithms subsequently assign work
by matching expertise to tasks and by flagging individuals whose work causes bottlenecks or does not live up to expectations (Faraj
et al., 2018). Algorithms make work more visible to co-workers through surveillance, sending reminders to slow or inadequate performers (Faraj et al., 2018; Schildt, 2017). Similarly, in tailoring the behavior of workers, workers who are knowledgeable in a narrow
expertise domain are likely to become confined to a limited way of thinking and reasoning since this scrutiny of work segments focuses
on the minutiae of tasks. This siloing of task scope can also limit innovation and prevent employees from gaining an understanding and
appreciation of other epistemic viewpoints that might give rise to novel knowledge combinations (Faraj, von Krogh, Monteiro, &
Lakhani, 2016).

3. Research methodology

We employed an in-depth case study approach to explore a fairly novel phenomenon and to understand relevant underlying
mechanisms and relationships (Eisenhardt, 1989; Gioia, Corley, & Hamilton, 2013). Building on insights from a large, leading service
company in China that introduced an AIA in their CS unit, we shed light on how and why the AIA’s initially positive intentions turned
into negative consequences over time.

3.1. AIA in customer service

Organizations introduce AIAs in their CS units to enhance efficiency and accuracy in handling customer requests and, thus, to
contribute to an overall improved customer experience (Nicolescu & Tudorache, 2022; Xu et al., 2020). In contrast to AI chatbots that are directly used by customers but often lack personalized answers or accuracy in providing information (Ben Mimoun, Poncin, &
Garnier, 2017; Etemad-Sajadi & Ghachem, 2015), AIAs are designed to augment CS salespersons’ work by combining benefits of
human and AI capabilities. Thus, whereas AI chatbots often automate CS work and replace CS jobs, AIAs are presumably used to
augment the work of CS salespersons by helping them better serve customers (Xu et al., 2020).
AIAs streamline customer queries and requests, develop and maintain comprehensive customer records, and assess data repositories to anticipate and predict customer needs and interests (Guenzi & Habel, 2020; Singh et al., 2019). Many responsibilities of
salespeople can be automated through AIAs, such as generating leads (Chatterjee et al., 2020), making recommendations, handling
customer complaints (Singh et al., 2019), forecasting sales (Saura, Ribeiro-Soriano, & Palacios-Marqués, 2021), and assigning accounts
(Zoltners et al., 2021). In addition, by digitizing individual salesperson’s knowledge sets, it is possible to build a highly useful
organization-level repository (Hofacker & Corsaro, 2020; Singh et al., 2019). Hence, AIAs are associated with various benefits for
organizations and salespeople, such as increased productivity and efficiency (Zoltners et al., 2021), more salesperson support, better
customer relationships (Alavi & Habel, 2021), and greater competitive advantage (Graca, Barry, & Doney, 2014).

3.2. Case description

Our case company, Alpha (pseudonym), is a large, state-owned company in China that provides technological services. It employs more than 100,000 people and has locations in all major cities of China.
Traditionally, Alpha’s CS department relied only on human call centers, i.e., on outsourced human CS salespeople. However, under
this regime call centers were poorly administered, producing hundreds of thousands of errors per day. Alpha therefore launched the
AIA in 2015 to “improve customer satisfaction and reduce costs” (Interview, AIA Project Development Manager (PDM)). The AIA used
natural language processing (NLP) to solve customer issues through human-machine interactions via chatbots. In 2016, the AIA was
embedded in a new online platform devoted to solving customer questions and complaints. In October 2018, the AIA went live in firm
branches in eleven provinces. The AIA includes a cloud platform built on natural language understanding, a sub-technology of NLP; it further includes speech recognition, text-to-speech technology, an AI-assisted training and tiered authorization system, vendor management, and service policy management. Fig. 1 gives an overview of the AIA.

Fig. 1. Overview of the AIA (Technical Document, 2018).

3.3. Data collection

We collected our data between September 2018 and July 2020. We chose our case company because it represents a real use case in which AI
had been actively utilized in CS for a significant period of time. During our research, we had access to the IT team that designed the AI
and to the CS managers.
Informants were chosen according to standard guidelines for purposive sampling (Lincoln & Guba, 1985; Trochim, Donnelly, & Arora, 2016). Informants spanned different hierarchical levels in both the company and the CS department. Unfortunately, we could not interview any CS salespeople, because the CS task had been outsourced to another company and Alpha managers would not permit access to this source.
Not having access to CS salespeople was a major limitation of the purposive sampling; it constrained our ability to distinguish clearly between the ideas and intentions of managers, salespeople, and designers. However, despite the research team’s lack of direct access to the CS salespeople, extensive information about the impact on the CS salespeople’s work practices was de facto available. The CS managers provided information about current practices and were open about their future plans. Furthermore, the AIA designers and the technical documentation about initial system goals demonstrated a user-oriented design approach, such that salespeople’s welfare was well represented in the case data. We are therefore confident that the discrepancies we discovered between the AIA design and the executive users’ use of the tool were salient and substantial enough to justify a thorough exploration of our research questions. The data providing evidence for these discrepancies included semi-structured face-to-face interviews, follow-up interviews via telephone and WeChat (text messages), and extensive archival documentation about the AIA development projects.
Based on the analysis of the archival documentation, we developed interview guides for conducting semi-structured, face-to-face
interviews with the AI tool design team at the Alpha headquarters in September 2019. Preparatory interviews were conducted by telephone with the AI PDM six months before data collection, and closing interviews followed in the ten-month period after the on-site
headquarters visit.
In late 2019 and early 2020, interviews with CS managers were conducted by telephone and WeChat, lasting between one and three hours. We relied on an interview guide that had been tested with academic colleagues and subsequently modified during the research process. The initial interviews used a largely standardized guide for all informants, though in two variants: one for the design team and one for the CS managers. Subsequent managerial interviews (i.e., those by telephone and WeChat) became progressively more structured as themes emerged from the data, such as the lack of worker empowerment and the focus on managerial control. This thematic focusing prompted progressively more targeted data collection to identify patterns across informants and (in)consistencies between the system’s design and its actual use, as well as tentative relationships among concepts.
In sum, we conducted 36 interviews. With the respondents’ permission, the interviews were recorded and transcribed verbatim.
Table 1 provides an overview of the interviewees.
Based on the same interview guide, different members of the research team conducted the interviews in order to benefit from
diverse perspectives. After each interview, the team discussed their impressions and shared ideas on possible adjustments to the
interview guide and the way of conducting the interviews.
In addition to interviews, the study relied on archival data, most notably Alpha’s internal technical documentation. This documentation described the AI technologies used, the projected timelines, and each sub-project’s objectives. It included tool artifacts, namely representations of the tool’s potential benefits, and image artifacts, such as the tool architecture and required infrastructure. The documentation also included electronic documents concerning strategic and operational aspects of the company, and marketing materials.
Although documentation is often considered a supplemental data source (Jick, 1979), in this study it became a key data source due to the pivotal role it played in designing the interview guides and engaging informants regarding the AIA objectives. Over the research process, discrepancies between these archival sources, on the one hand, and the interviews, on the other, grew steadily wider. These discrepancies were especially visible with regard to the aforementioned goal conflict between worker empowerment and managerial control. This particular conflict was not articulated in the archival sources, but both the design team and the CS managers subsequently acknowledged it.
We made further adjustments to data collection instruments, such as adding questions related to these emerging themes to the
interview guide (Eisenhardt & Graebner, 2007). Overall, principles developed during the earlier data collection co-determined the
sampling and content focus of the later data collection we did via follow-up telephone and Wechat interviews and in the comparative
archival documentation analysis. Moreover, these principles provided the basis for clearly delineating themes and aggregate dimensions related to the potential benefits and adverse impacts of AI in the actual use of the system, which were not obvious in the
archival documents.

3.4. Data analysis

Following Gioia et al. (2013), our detailed data analysis consisted of three steps: (1) identifying first-order codes, (2) defining
theoretical subcategories and categories, and (3) aggregating theoretical dimensions. In a first step, every author coded the data
independently and then, in a second step, they discussed and revised their coding until all agreed on the codes and their allocation.
First-order codes consisted of statements and descriptions drawn from the archival documentation and interview transcripts. These
included the identification and categorization of initial concepts (Locke, 2001; Van Maanen, 1979). During the initial stage, the focus
was on finding consistency in issues, relationships, and other themes (Corley & Gioia, 2004). This technique is similar to open coding,
relying as often as possible on the language the informants used or on a simple descriptive phrase when an in-vivo code was not
available (Glaser & Strauss, 1967). From this step, tentative themes, concepts, and possibly even relationships between codes began to emerge through axial coding (Strauss & Corbin, 1990).

Table 1
Overview of interviewees.

Interviewee position                              No. of face-to-face interviews    No. of telephone, WeChat, WeChat video, and informal communications
AIA Project Development Manager (PDM)                           1                                     1
AIA Operation Manager                                           1                                     0
AIA Implementation Project Manager (AIA IPM)                    1                                     1
SMS Robot PDM                                                   1                                     1
SMS Robot IPM                                                   1                                     1
Physical Robot PDM                                              1                                     0
Physical Robot IPM                                              2                                     2
Text analysis PDM                                               1                                     0
Text analysis IPM                                               1                                     0
IVR (Intelligent Voice Recognition) PDM                         1                                     1
IVR IPM                                                         2                                     0
Semantic analysis PDM                                           1                                     0
Semantic analysis IPM                                           1                                     0
AR VR / active service PDM                                      1                                     1
AR VR / active service IPM                                      1                                     0
CS manager                                                      6                                     5
Sub-total                                                      23                                    13
Total                                                          36


The next step of this iterative process was to categorize the emergent first-order concepts into second-order themes. In doing so, we
moved back and forth between the emerging themes and the literature on AI. For instance, we summarized statements describing how the AIA initially aimed to provide customers with faster and more accurate answers to their requests to create the second-order theme enhanced customer support.
Finally, we aggregated the derived second-order concepts into overarching dimensions. This process was not linear, but, instead, a
non-recursive, process-oriented, analytic procedure (Locke, 2001) that continued until the emerging theoretical relationships and
additional interviews no longer revealed new data relationships (Corley & Gioia, 2004). Overall, we employed an iterative research
process and data analysis, which involved frequent backward and forward iterations between steps.

4. Findings

Our analysis revealed that the AIA was initially designed to augment salespeople’s work and, thus, to generate novel benefits for the
organization and salespeople. These associated benefits were realized in the beginning of the AIA introduction. However, over time,
managers’ thinking and behavior overshadowed these positive outcomes, triggering a shift from positive to negative consequences of
the AIA.

4.1. The AIA’s intended benefits

The intention of the AIA design was to augment CS salespeople’s work by supporting them in their daily tasks and processes. As a result, the organization hoped to realize the following potential benefits, which indeed materialized in the very early stages of implementation: enhanced (1) customer support, (2) sales, and (3) customer satisfaction. First, the AIA was designed to enhance customer support by providing salespeople with in-depth insights on customers’ profiles, preferences, and behaviors:
providing salespeople with in-depth insights on customers’ profiles, preferences, and behaviors:
“As soon as the customer is connected to the CS salesperson, the AI provides [the salesperson with] all the customer’s demographic data, historical purchases, and consumption behavior.”
(Interview, AIA PDM)
This information gave CS salespeople all the necessary details concerning the context of, and interaction required for, each customer situation, helping them better understand customers’ needs and interests. For instance, based on customers’ detailed demographic data, the AIA could immediately decide in which language customers’ requests should be answered or which form of address (e.g., first or last name) customers preferred:
“The AI suggests responses to customers’ inquiries and additional promotions on current products and services. Then, the AI
provides the CS salespeople with information about available products if she or he clicks a button.”
(Interview, AIA PDM)
Similarly, the official CS technical documentation defined the AIA’s “service target” as:

“Intelligent services such as service reminders and service recommendations are provided to the Customer Service salespeople
through phonetic and semantic recognition technologies, in the hope of boosting our staff efficiency while securing high-quality
services for our customers.”
(Technical Document, 2018, p. 9)
As a result, CS salespeople were able to provide more individualized answers to customers’ requests and inquiries, thereby
enhancing their support to customers.
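To illustrate the kind of rule-based personalization the informants describe (reply language and form of address derived from demographic fields), the following is a minimal sketch of our own; the field names and rules are hypothetical, not Alpha’s:

```python
# Minimal sketch (ours, not Alpha's code) of the personalization rule
# the informants describe: demographic fields determine the reply
# language and the form of address. All field names are hypothetical.

def address_customer(profile: dict) -> tuple[str, str]:
    """Return (reply_language, salutation) for a customer profile."""
    language = profile.get("preferred_language", "zh")
    if profile.get("prefers_first_name", False):
        salutation = profile["first_name"]
    else:
        salutation = f"{profile['title']} {profile['last_name']}"
    return language, salutation

print(address_customer({
    "preferred_language": "en",
    "prefers_first_name": False,
    "title": "Ms.",
    "first_name": "Wei",
    "last_name": "Zhang",
}))  # -> ('en', 'Ms. Zhang')
```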
Second, the AIA was designed to enhance sales by providing salespeople with individual product recommendations and predictive
insights on customers’ purchase preferences. These insights were intended to help CS salespeople determine more easily which services
and products might be of interest to the customer:
“From the data about the customer, the AI provides predictions about the customer’s potential purchasing behavior”
(Interview, AIA PDM)
Moreover, the AIA was designed to convert the voices of the customer and the CS salesperson into text, and to relate the text of the
conversation to existing knowledge about company products and services. Thereby, CS salespeople could gain additional insights into
customers’ preferences:
“Here is the screen used by the CS [salesperson]… Here you can see the question asked orally by a customer and converted into
text by the AI. This text is often incomplete. So, the AI will complete it. The AI then identifies key words in the customer’s text
and relates them to the company’s products.”
(Interview, AIA PDM)
As a result of this individualized and customer-specific targeting approach, the organization hoped to increase CS salespeople’s sales.
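The keyword-to-product step the PDM describes can be illustrated with a minimal sketch (ours, not Alpha’s code; the catalog, the completion rule, and the matching rule are all hypothetical stand-ins):

```python
# Our minimal illustration of the step the PDM describes: complete a
# fragmentary transcript, extract key words, and relate them to the
# company's products. Catalog and rules are hypothetical stand-ins.

CATALOG = {
    "roaming": "International Roaming Pack",
    "data": "5G Data Booster",
    "fiber": "Home Fiber Upgrade",
}

def complete(transcript: str) -> str:
    """Stand-in for the AI's completion of an incomplete transcript."""
    return transcript if transcript.endswith("?") else transcript + "?"

def match_products(transcript: str) -> list[str]:
    """Relate key words in the customer's text to company products."""
    words = {w.strip("?,.") for w in transcript.lower().split()}
    return [product for key, product in CATALOG.items() if key in words]

text = complete("do you have cheaper data plans")
print(text, "->", match_products(text))
# -> do you have cheaper data plans? -> ['5G Data Booster']
```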
Third, the AIA was designed to increase CS salespeople’s efficiency so that they could answer customer requests and inquiries faster and more accurately, which aimed to enhance customer satisfaction. The technical document stated, for example, that “[t]he objective of the AI Assistant is to boost CS salespeople’s efficiency” (Technical Document, p. 8); the company’s marketing information stated that the
AIA would support the service target of “efficient frontline support.”
The underlying idea was to relieve CS salespeople from simple and repetitive questions and tasks and thus to help them focus
on more complex issues. As the AIA PDM emphasized:
“The AIA covers the frequently asked questions, allowing CS salespeople to focus on higher complexity cases and product
recommendations.”
Moreover, the AIA was intended to give salespeople more time to focus on customers’ individual demands and to provide services of higher quality:
“Another objective of the AI was to secure high-quality services for customers through both the AI and the CS salespeople.”
(Interview, AIA PDM)

4.2. The emergence and domination of the AIA’s negative consequences

Although the AIA tool’s design process focused on the organization’s intended goal to enhance CS salespeople’s work, the situation reversed itself shortly after implementation, as the features of the AIA sparked CS managers’ interest in using the AIA to control and monitor salespeople. This downstream interest emerged after the AIA had been implemented and CS managers could explore and test the AIA’s features. In this process, CS managers realized that the AIA was useful not only for enhancing salespeople’s work but also for enhancing their own work, in that the AIA made sure that salespeople performed well. As a result, managers started to take advantage of the AIA’s features that supported greater control over the salespeople, leading to a shift from the initial positive consequences to the dominance of the AIA’s negative consequences. These negative consequences involved the misuse of (1) coordination, (2) power, and (3) embedded biases over the salespeople. In the following, we elaborate on each negative consequence and emphasize how managers’ thinking and behavior contributed not only to the AIA’s negative consequences overtaking the positive consequences, but also to the retreat and ultimate dissolution of the initially realized positive ones.
The first misuse of the AIA involved coordination. The CS managers struggled with high costs in their unit, which were mainly
caused by employee salaries. Since the AIA was able to automate many tasks that were previously performed by salespeople, CS
managers tried to replace some salespeople with the AIA after they had realized how efficiently and accurately the tool performed
tasks. As a CS manager admitted:
“[The AIA can be used to] lift performance by decreasing the number of employees.”
With the AIA in the hands of CS managers, it seemed that, in their minds, human ability and character were easily replaceable. One
of the CS managers explained:
“The objective is to reduce dependence on CS salespeople to make up for [CS salespeople] losses [i.e., to make up for the
turnover].”
The underlying logic appears to have been that work and worker need to be decoupled such that, in the interest of the greater
efficiency the AIA could provide, it would be possible to reduce the workforce.
This goal to save costs by replacing salespeople with the AIA escalated over time. In the beginning, CS managers stated that the AIA
can take over tasks that are “simple and have clear rules,” and that it can also take over “basic query services, and functional reporting
services,” “basic interactive guidance and simple business handling,” “complaint recording,” and “problem detection in fault and
accounting and other scenarios service.” However, in later interviews, CS managers emphasized their ultimate goal, namely, to have
the AIA take over all CS processes and, thus, to fully automate salespeople’s work. One of the CS managers, for example, said that “[i]t
will take some time for the AIA to fully cover all human operations,” strongly insinuating that the AIA would, in fact, finally and
completely take over from human salespeople. That salespeople were still employed, therefore, seemed to be due simply to the implementation lag. However, in the final stage of this transformation, the workforce was intended to be fully decoupled from the organization:
“[F]or the moment, the AIA requires a lot of manpower for training. We increase investment in AI development to meet the
requirement of reducing the number of CS salespeople.”
(CS manager)
The two contradictions of this situation were: (1) that the CS salespeople were essential in training the AIA that would replace them, and (2) that management, while openly admitting that fewer employees was the goal of the AIA, hid this goal in their discourse. In order to cut the workforce and move CS to the AIA, the CS salespeople had to do more work:
“Yes, there will be a lot of data precipitation in the process of using an AI tool. Employees need to analyze these data and utilize
the value of these data, for example, capturing new demands of users, analyzing the service effect of existing service schemes,
focusing on the change of service proportion, etc. All the analysis results will also be used to optimize our AI tool.”
(Interview, CS manager)
Moreover, in order to use the AIA, the employees needed to be trained:


“Employees have to learn the process design of the AIA, and also routine training and other maintenance work for AI
applications.”
(Interview, CS manager)
For instance, the AIA required an extensive training set for machine learning’s predictions in order to update those predictions over
time. Thus, CS managers intentionally used their power to make salespeople contribute to their own replacement—without the
salespeople being aware of this situation. CS managers used this approach to fulfil their own goals—which saved costs and enhanced
the efficiency of CS processes.
In recognizing the irony of this situation where employees train the system and learn new skills, managers muted the harshness of
the future outcomes by making it appear that the job loss could mean freedom for the employees, as suggested by CS managers in
interviews:

• “Part of the workload of employees can be liberated and those employees can be assigned to other tasks.”
• “The ultimate goal of AI applications is to free up people and put more humans into complex and challenging jobs.”

The second negative consequence of the AIA was the misuse of power, seen in this case through enhanced surveillance. Surveillance
involved the monitoring and tracking of salespeople’s behavior. Traditionally, CS managers struggled with measuring and monitoring
salespeople’s work quality in a qualitative environment, which forced managers to rely on judgement calls with regard to how well
their CS salespeople handled customers. The AIA allowed CS managers to:
“monitor the CS salespeople…, includ[ing] their speed of speech, their attitudes, [and] their emotions during calls with the
customers.”
(Interview, CS manager)
For instance, the AIA would assess the level of tension in the CS salespeople’s interactions with customers. In order to do so, it
would measure the speed of the salespeople’s voices to gauge attitude and emotion in the communication. When the AIA identified moments of tension, it would alert the CS salesperson to speak more slowly. As a CS manager said:
“Thanks to the AIA, a part of the operations can be tracked. If a CS salesperson’s attitude is not good in all respects, the AIA can
send the person a warning in real time.”
The attitude referred to is a metric created by combining the CS salesperson’s voice speed with semantic indicators of her/his
emotional presence during a call.
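How such an attitude metric might be computed can be illustrated with a minimal sketch (our reconstruction, not Alpha’s metric; the word list, weights, and threshold are invented for illustration):

```python
# A minimal sketch (our reconstruction, not Alpha's metric) of an
# "attitude" score that combines speech rate with a crude semantic
# indicator of emotional tone, plus the real-time warning described.
# Word list, weights, and threshold are invented for illustration.

NEGATIVE_WORDS = {"no", "impossible", "policy", "can't", "wait"}

def speech_rate_wpm(n_words: int, seconds: float) -> float:
    """Words per minute spoken during the utterance."""
    return n_words / (seconds / 60.0)

def emotion_indicator(utterance: str) -> float:
    """Share of negatively loaded words, as a crude semantic proxy."""
    tokens = [t.strip(",.!?") for t in utterance.lower().split()]
    return sum(t in NEGATIVE_WORDS for t in tokens) / max(len(tokens), 1)

def attitude_score(utterance: str, seconds: float) -> float:
    """Higher = more tension: fast speech combined with negative wording."""
    wpm = speech_rate_wpm(len(utterance.split()), seconds)
    return 0.5 * min(wpm / 200.0, 1.0) + 0.5 * emotion_indicator(utterance)

def maybe_warn(utterance: str, seconds: float) -> str | None:
    """Real-time alert to the salesperson when tension exceeds a threshold."""
    if attitude_score(utterance, seconds) > 0.6:
        return "WARNING: please slow down and adjust your tone."
    return None

# Seven words in two seconds (210 wpm) with negative wording -> warning.
print(maybe_warn("no that's impossible, it's company policy, wait", 2.0))
```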
Moreover, we found that surveillance was not just monitoring what the workers did, but also assessing the quality of their work.
Traditionally, CS managers often did not know who performed well and who did not. Even if they suspected someone of weak performance, many CS managers did not dare to confront salespeople with their impressions because they did not have any clear evidence.
Now, the AIA assessment phase flagged those CS salespeople that the AIA identified as weaker, not just in productivity but also in their
attitudes. Moreover, all those assessments are delivered to the CS manager:
“They are displayed on [her/his] dashboard. Key performance indicators (KPIs) are created from the data collected from each
CS salesperson for the purpose of monitoring.”
(CS manager)
Up-to-date and regularly refreshed data on workers’ attitudes place workers under constant scrutiny, obligating them to suppress
their emotions so as to avoid triggering the system’s warnings and alerts.
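The dashboard logic described here, per-salesperson KPIs rolled up from call-level records, can be illustrated as follows (our sketch; the records, metric names, and flagging rule are hypothetical, not Alpha’s definitions):

```python
# Minimal sketch of the dashboard logic described: per-salesperson KPIs
# rolled up from call-level records. Records, metric names, and the
# flagging rule are hypothetical, not Alpha's definitions.
from collections import defaultdict
from statistics import mean

calls = [  # call-level records as such a system might log them
    {"agent": "A01", "handle_secs": 180, "attitude": 0.2, "resolved": True},
    {"agent": "A01", "handle_secs": 420, "attitude": 0.7, "resolved": False},
    {"agent": "A02", "handle_secs": 150, "attitude": 0.1, "resolved": True},
]

by_agent = defaultdict(list)
for call in calls:
    by_agent[call["agent"]].append(call)

for agent, records in sorted(by_agent.items()):
    kpi = {
        "avg_handle_secs": round(mean(r["handle_secs"] for r in records)),
        "avg_attitude": round(mean(r["attitude"] for r in records), 2),
        "resolution_rate": mean(r["resolved"] for r in records),
    }
    # Hypothetical flagging rule mirroring the "weaker performer" alerts.
    kpi["flagged"] = kpi["avg_attitude"] > 0.4 or kpi["resolution_rate"] < 0.8
    print(agent, kpi)
```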
Overall, the AIA allowed CS managers to make the salespeople themselves and their work more visible. As a result, CS managers
could use the AIA to confront salespeople who caused bottlenecks and whose work did not live up to expectations, and to take
respective actions.
The third negative consequence of the implementation of the AIA that emerged was the embedded bias of the managers. In order to preserve the subordination of the workers, they maintained a system of inequity and opacity. In this situation, inequity ensures that
workers have less access to detailed knowledge than either the AIA or CS managers. While the design goal was for employees to have "a
general understanding of each module" (CS manager), and while CS managers know everything relevant to CS processes, salespeople
are limited to the knowledge of their particular task or subtask. Managers also have greater access to and understanding of the AIA. In
fact, Alpha’s CS salespeople have so little access to the system that other humans train them to use it; the trainers therefore serve as an
additional barrier to the system itself, reinforcing its opacity. One of the CS managers put it as follows:
“To teach the robots, we use a human-machine model. To monitor CS salespeople, we use a human-human model.”
A human-human model devalues employee consensus-building processes because the employees are presented with the system as it
is, without the opportunity or ability to understand how the system functions. In this case, experts who are not workers set the norms
for the workers, without any interaction with or validation from them.
The managers seemed to realize quite clearly that if salespeople could overcome their own reluctance, AIAs could improve the customer experience dramatically. Indeed, over the implementation period, as one CS manager said:


“[E]mployees begin to understand the application of data and that communication is actually the exchange of data. Through the
opportunity of AI application, users began to think about how to better serve users and transform the old service methods. It was
no longer a regular system service thinking, but a case of thinking from the user’s point of view. The optimization scheme based
on this is also more in line with the service effect expected by the user.”
In fact, the CS manager knew that:
“there is employee resistance. There may be some reasons [for this]. First, they don’t have enough perception and under­
standing, and they don’t use tools very often. Second, they feel the crisis arising from artificial intelligence. Third, the bugs
generated by the AI cause them an extra burden.”
Managers not only had access to the same knowledge as they had before the AIA implementation; they actually ended up
gaining more operational knowledge than before. On the one hand, Alpha’s AIA offered managers new affordances, while, on the other
hand, it restricted the knowledge and actions available to workers. As emphasized earlier, managers could gain access to negative
information that denigrates current (or even prospective) employees, but since the AIA rules the scene, workers could not negotiate
practices associated with their work (Ajunwa & Greene, 2019).
In our CS managerial interviews, management disclosed that this disparity in knowledge generated some resistance to the AIA
implementation. As another manager mentioned:
“There is employee resistance, because they don’t have enough insight and understanding.”
Yet another interviewee disparaged the salespeople in saying that:
“Resistance depends on the employees’ understanding of AI. If they don’t know enough about AI, they can’t [appreciate it], but
[this is] not necessarily caused by the AI. The difficulty could lie in the difference between the users’ ability and their hierarchical level.”
A third interviewed CS manager suggested that this knowledge inequity could be addressed:
“(1) to make up for the lack of perception and cognition, (2) to make up for technical deficiencies through strategies to improve
AI service capability, [and] (3) to enhance employees’ sense of gain.”
In spite of those insights, in the final analysis the CS managers generally remained in favor of replacing all salespeople with
computers.

5. Discussion

Our case study illustrates how an AI tool, which was designed with the intention to support workers by providing enhanced insights on customer needs and interests and relieving workers from simple and repetitive tasks, was eventually used to surveil workers and, ultimately, to replace their work. Managers realized after implementation that AI’s unique characteristics, such as the ability to analyze and measure human behavior and autonomously make predictions about it, can be used
not only to enhance workers’ work and increase customer satisfaction, but also to control and monitor them. These opportunities
helped managers in fulfilling their own goals and needs, such as reducing staff costs and increasing workers’ output. The managers’
initial focus during the AI design phase on supporting workers therefore shifted after the AI implementation toward supporting the
goals and needs of the managers themselves. Importantly, managers concealed their shift in thinking and behavior from the workers,
and let them believe that the AI tool was introduced only to help them. This tactic was important to managers to ensure that workers
would continue to use the AI tool and feed it with the necessary data. Overall, this extreme case illustrates how an AI tool was designed with positive intentions to support workers’ work and how, although these positive outcomes were realized at first, they gave way over time to a domination by AI’s negative consequences.
Our study makes the following contributions to the literature on AI. First, our paper contributes to prior research that emphasized
the importance of studying the design and use of technologies holistically, AI tools in particular (e.g., Bailey & Barley, 2020; Waardenburg & Huysman, 2022), in order to cross what Leonardi (2009) referred to as the “implementation line.” Empirical research has
shown that managers and workers should be involved in the design phase of the AI tool to better situate associated promises and goals
with the AI implementation (e.g., Mayer et al., 2020; Van den Broek, Sergeeva, & Huysman, 2021). Thereby, these studies argue, it
becomes possible to reduce the gap between intentions in the design phase and actual use after implementation. As our case shows,
because the only stakeholders involved in the design phase were managers, a natural result in the implementation phase was the
emergence of AI’s negative consequences. Managers could take advantage of the AI tool to fulfil their own goals and interests, because
only they were aware of the AI’s characteristics and the resulting affordances.
Second, our research expands the discussion of how bias impacts an AI tool’s design and use (Forsythe, 2001). Implicit in this design
and explicit in its implementation are the managers’ embedded beliefs and ideologies that workers need to be kept in their place as
subordinates and that the system needs to support that subordinated position. This subordination is evident in at least two ways in this
AIA system: through inequity and through opacity. From this case study it is clear that the opacity of AI’s “black box” not only makes it
impossible for AI users to make sense of the AI outcome (e.g., Lebovitz et al., 2022; Strich et al., 2021; Waardenburg & Huysman,
2022), but the AI users’ lack of comprehension leads to resistance to the tool itself. The inequity of the tool derives partly from the
contrast between those who do understand the tool and those who do not, and also from the distinction between those who are being dominated through surveillance and monitoring and those who carry out those surveilling and monitoring activities. Surveillance
affordances, in particular, have proved to reinforce inequity and to mask that feature in the tool’s “objective” nature (Brayne, 2017). As
such, this finding also moves research toward a greater understanding of the importance of power in the discussion around the design
and use of technology (Bailey & Barley, 2020). This is because managers used their position and power to deploy the AI tool in order to
support their own interests, not simply in terms of increased efficiencies, but also in terms of the maintenance of hierarchies, a
structure at risk in instances of emerging technology (Barrett et al., 2012).
What is fascinating and also novel about the findings in our case study is that managers did not appear to fully understand the extent
to which the AIA could not only replace humans in the sales equation, but also enhance their own ability to control the overall sales
setting. As time went on, managers’ attitudes seemed to change, moving away from the original design and focusing more on their
enhanced control. What at first was a healthy respect for worker empowerment eventually morphed into a scenario of tightened managerial control.
Third, our research extends the discussion around organizational coordination in the context of AI, in particular, augmentation and
automation (Raisch & Krakowski, 2021). Previous work has emphasized that it is important to uphold the paradox between
augmentation and automation (Raisch & Krakowski, 2021) in order to maintain the benefits of each while avoiding their drawbacks.
Equally important is the need to maximize such positive outcomes, for example, efficiency and customer satisfaction, while minimizing
error and cost. Instead of dealing with this paradox through a balanced approach, managers reconciled the tension between AI’s
potential to support as well as harm workers by using workers to enhance the AI tool and then to ultimately automate their work and
replace their jobs with the improved AI tool.
Our research also yields important practical implications. First, the design of AI tools should include all stakeholders (Monod et al.,
2022). This approach would ensure that the goals of any AI tool would be implemented according to a balanced design. In the case of
the AIA studied here, the design phase gave evidence of support for performance enhanced by AIA assistance, and of greater worker
empowerment. This indicates that the goals may have been well intentioned to start with but became more duplicitous over time as
managers stopped sharing their evolving intentions. It is deceptive to offer workers hope that a new AI tool will aid them in doing a
better job while simultaneously planning to retrench and increase surveillance. Therefore, an attendant practical implication is that
design principles should extend across the AI tool’s implementation and even across its lifetime, as proposed by value-sensitive design,
whereby stakeholders representing all factions within the organization are active participants in the evaluation and modifications of
the system over time.
Efforts should therefore be made to recognize the contradictions inherent in AIA implementations. The lesson this study holds for
managers who have begun or who plan to implement an AIA at this moment of the “sales renaissance” in AI (Syam & Sharma, 2018) is
that they will need to address these conundrums. Most starkly, the managerial choices would be between: (1) workforce reduction or
worker empowerment, (2) higher performance through job satisfaction or intensive maximizing of efficiency, (3) innovation
outsourcing or online community building, and (4) top-down or user-oriented change. It hardly bears mentioning that researchers
should determine which of these stark choices would lead to a more equitable workplace where organizations value human effort at all
hierarchical levels and reward symbiotic partnerships that make the most of the differing human and digital technology strengths (Gal,
Jensen, & Stein, 2020).
Having identified this set of options, we caution that there might be ways to combine the alternatives intelligently or to use them as tensions to be resolved (Ciriello, Richter, & Schwabe, 2019) or maintained in irresolution (instead of treating them as mutually exclusive choices, as organizational justice theory shows (Robert et al., 2020)). For example, a balance between augmentation and automation should be maintained. As appealing as automation might be to a manager, in its reduction of repetitive and low-level tasks, too much emphasis on
automation might lead to the AI tool completely replacing the workers. Alternatively, while augmentation is equally appealing in its
ability to support and enhance a worker’s more complex activities, it will lead to higher overhead costs because of the system’s need to
learn constantly, resulting in persistent and niggling changes across the platform (Raisch & Krakowski, 2021).
Since managers in this case concealed their true interests and also the affordances of the AI system that focused on surveillance,
workers continued working with the system, believing that it would be used to support their work. Thus, our research emphasizes the
crucial need to unpack AI’s black box—not only for workers to work effectively with the AI system, but also to help them in protecting
personnel rights and preventing AI’s misuse. In this context, this research adds to recent discussions around AI regulations and the
importance of ethical guidelines when implementing AI technologies (e.g., Regulation, 2023).

6. Conclusion

Our study provides an important case that illustrates how the introduction of AI tools can cause serious harm to workers. Although
the AI tool was designed to empower workers, the study’s findings suggest a disconnect between potential benefits and managerial
unwillingness to cede some control. Whereas the justification of the AI tool was the admirable goal of worker empowerment and job
enrichment, it was eventually only used for greater management domination and oversight. These insights contribute to debates
around AI tools and their implications for organizations, workers, and the wider society.

CRediT authorship contribution statement

Emmanuel Monod: Conceptualization, Methodology. Anne-Sophie Mayer: Writing – original draft, Writing – review & editing.
Detmar Straub: Writing – original draft, Writing – review & editing. Elisabeth Joyce: Writing – original draft, Writing – review &
editing. Jiayin Qi: Conceptualization, Methodology, Data curation.


References

Acemoglu, D., & Restrepo, P. (2018). Artificial intelligence, automation, and work. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The economics of artificial intelligence:
An agenda (pp. 197–236). University of Chicago Press.
Agrawal, A. K., Gans, J. S., & Goldfarb, A. (2018). Exploring the impact of artificial intelligence: Prediction versus judgment. National Bureau of Economic Research. http://
www.nber.org/papers/w24626.
Ajunwa, I., & Greene, D. (2019). Platforms at work: Automated hiring platforms and other new intermediaries in the organization of work. In S. P. Vallas, &
A. Kovalainen (Eds.), Work and Labor in the Digital Age (pp. 61–91). https://doi.org/10.1108/S0277-283320190000033005
Alavi, S., & Habel, J. (2021). The human side of digital transformation in sales: Review and future paths. Journal of Personal Selling and Sales Management, 41(2),
83–86.
Anthony, C. (2021). When knowledge work and analytical technologies collide: The practices and consequences of black boxing algorithmic technologies.
Administrative Science Quarterly, 66(4), 1173–1212.
Bailey, D. E., & Barley, S. R. (2020). Beyond design and use: How scholars should study intelligent technologies. Information and Organization, 30, Article 100286.
https://doi.org/10.1016/j.infoandorg.2019.100286
Barrett, M., Oborn, E., Orlikowski, W., & Yates, J. (2012). Reconfiguring boundary relations: Robotic innovations in pharmacy work. Organization Science, 23(5),
1448–1466.
Ben Mimoun, M. S., Poncin, I., & Garnier, M. (2017). Animated conversational agents and e-consumer productivity: The roles of agents and individual characteristics.
Information and Management, 54, 545–559.
Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4), 9–21.
Benbya, H., Pachidi, S., & Jarvenpaa, S. (2021). Special issue editorial: Artificial Intelligence in organizations: Implications for Information Systems research. Journal
of the Association for Information Systems, 22(2).
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Special issue editor’s comments: Managing artificial intelligence. MIS Quarterly, 45(3), 1433–1450.
Braverman, H. (1998). Labor and monopoly capital: The degradation of work in the twentieth century. NYU Press.
Brayne, S. (2017). Big data surveillance: The case of policing. American Sociological Review, 82(5), 977–1008. https://doi.org/10.1177/0003122417725865
Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530–1534.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/
2053951715622512
Chatterjee, S., Nguyen, B., Ghosh, S. K., Bhattacharjee, K. K., & Chaudhuri, S. (2020). Adoption of artificial intelligence integrated CRM system: An empirical study of
Indian organizations. The Bottom Line, 33(4), 359–375. https://doi.org/10.1108/BL-08-2020-0057
Christin, A. (2020). The ethnographer and the algorithm: Beyond the black box. Theory and Society, 49, 897–918. https://doi.org/10.1007/s11186-020-09411-3
Ciriello, R. F., Richter, A., & Schwabe, G. (2019). The paradoxical effects of digital artefacts on innovation practices. European Journal of Information Systems, 28(2),
149–172.
Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1). https://digitalcommons.law.uw.edu/
wlr/vol89/iss1/2.
Corley, K. G., & Gioia, D. A. (2004). Identity ambiguity and change in the wake of a corporate spin-off. Administrative Science Quarterly, 49(2), 173–208.
Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. Harper Business.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. https://doi.org/10.1287/isre.3.1.60
DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5(2), 121–147.
Eisenhardt, K. M. (1989). Building theories from case study research. The Academy of Management Review, 14(4), 532–550.
Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal, 50(1), 25–32.
Etemad-Sajadi, R., & Ghachem, L. (2015). The impact of hedonic and utilitarian value of online avatars on e-service quality. Computers in Human Behavior, 52, 81–86.
Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62–70. https://doi.org/
10.1016/j.infoandorg.2018.02.005
Faraj, S., von Krogh, G., Monteiro, E., & Lakhani, K. R. (2016). Online community as space for knowledge flows. Information Systems Research, 27(4), 668–684. https://
doi.org/10.1287/isre.2016.0682
Forsythe, D. E. (2001). Studying those who study us: An anthropologist in the world of artificial intelligence. Stanford University Press.
Gal, U., Jensen, T. B., & Stein, M.-K. (2020). Breaking the vicious cycle of algorithmic management: A virtue ethics approach to people analytics. Information and
Organization, 30(2), Article 100301. https://doi.org/10.1016/j.infoandorg.2020.100301
Giermindl, L. M., Strich, F., Christ, O., Leicht-Deobald, U., & Redzepi, A. (2022). The dark sides of people analytics: Reviewing the perils for organisations and
employees. European Journal of Information Systems, 31(3), 410–435. https://doi.org/10.1080/0960085X.2021.1927213
Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational Research Methods, 16(1), 15–31. https://doi.org/10.1177/1094428112452151
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine Transaction.
Graca, S. S., Barry, J. M., & Doney, P. M. (2014). Performance outcomes of behavioural attributes in buyer-supplier relationships. The Journal of Business and Industrial
Marketing, 30(7), 805–816.
Guenzi, P., & Habel, J. (2020). Mastering the digital transformation of sales. California Management Review, 62(4), 57–85. https://doi.org/10.1177/
0008125620931857
Hafermalz, E., & Huysman, M. (2019). Please explain: Looking under the hood of explainable AI. In 11th Process Symposium on Organization Studies (PROS), Crete.
Hofacker, C. F., & Corsaro, D. (2020). Dystopia and utopia in digital services. Journal of Marketing Management, 36(5–6), 412–419. https://doi.org/10.1080/
0267257X.2020.1739454
Jain, P., & Aggarwal, K. (2020). Transforming marketing with artificial intelligence. International Research Journal of Engineering and Technology, 7(7), 3964–3976.
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61, 577–586.
Jick, T. D. (1979). Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly, 24, 602–611.
Kellogg, K., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410.
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for
medical diagnosis. Organization Science, 33(1), 126–148.
Leonardi, P. M. (2009). Crossing the implementation line: The mutual constitution of technology and organizing across development and use activities. Communication
Theory, 19, 278–310.
Li, J. J., Bonn, M. A., & Ye, B. H. (2019). Hotel employee’s artificial intelligence and robotics awareness and its impact on turnover intention: The moderating roles of
perceived organizational support and competitive psychological climate. Tourism Management, 73, 172–181.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.
Locke, K. (2001). Grounded theory in management research. Sage.
Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60.
Mayer, A.-S., Strich, F., & Fiedler, M. (2020). Unintended consequences of introducing AI systems for decision making. MIS Quarterly Executive, 19(4), 239–257.
https://doi.org/10.17705/2msqe.00036
Monod, E., Watson-Manheim, M.-B., Qi, J., Joyce, E., Santoro, F., & Mayer, A. (2022). (Un)intended consequences of AI sales assistants. Journal of Computer Information Systems, 63(2), 436–448. https://doi.org/10.1080/08874417.2022.2067794


Nam, K., Dutt, C. S., Chathoth, P., Daghfous, A., & Khan, M. S. (2021). The adoption of artificial intelligence and robotics in the hotel industry: Prospects and
challenges. Electronic Markets, 31, 553–574.
Nicolescu, L., & Tudorache, M. T. (2022). Human-computer interaction in customer service: The experience with AI chatbots—A systematic literature review.
Electronics, 11(10), 1579.
Pakarinen, P., & Huising, R. (2023). Relational expertise: What machines can’t know. Journal of Management Studies. https://doi.org/10.1111/joms.12915
Petter, S., DeLone, W., & McLean, E. (2008). Measuring information systems success: Models, dimensions, measures, and interrelationships. European Journal of
Information Systems, 17, 236–263. https://doi.org/10.1057/ejis.2008.15
Prentice, C., Dominique Lopes, S., & Wang, X. (2020). The impact of artificial intelligence and employee service quality on customer satisfaction and loyalty. Journal of
Hospitality Marketing & Management, 29(7), 739–756.
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation-augmentation paradox. Academy of Management Review, 46(1), 192–210.
Regulation A9-0188/2023. (2023). Artificial intelligence act. European Parliament, Council of the European Union. https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html
Robert, L. P., Pierce, C., Marquis, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for managing employees in organizations: A review, critique and design agenda.
Human-Computer Interaction, 35(5–6), 545–575. https://doi.org/10.1080/07370024.2020.1735391
Rosenblat, A. (2018). Uberland: How algorithms are rewriting the rules of work. University of California Press.
Sarker, I. H., Furhad, M. H., & Nowrozy, R. (2021). AI-driven cybersecurity: An overview, security intelligence modeling and research directions. SN Computer Science, 2, 1–18.
Saura, J. R., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2021). From user-generated data to data-driven innovation: A research agenda to understand user privacy in digital markets. International Journal of Information Management, 60, Article 102331. https://doi.org/10.1016/j.ijinfomgt.2021.102331
Schildt, H. (2017). Big data and organizational design: The brave new world of algorithmic management and computer augmented transparency. Innovation, 29(1),
23–30. https://doi.org/10.1080/14479338.2016.1252043
Singh, J., Flaherty, K., Sohi, R. S., Deeter-Schmelz, D., Habel, J., Le Meunier-FitzHugh, K., … Onyemah, V. (2019). Sales profession and professionals in the age of
digitization and artificial intelligence technologies: Concepts, priorities, and questions. Journal of Personal Selling and Sales Management, 39(1), 2–22. https://doi.
org/10.1080/08853134.2018.1557525
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Sage.
Strich, F., Mayer, A.-S., & Fiedler, M. (2021). What do I do in a world of artificial intelligence? Investigating the impact of substitutive decision-making AI systems on employees' professional role identity. Journal of the Association for Information Systems, 22(2). https://doi.org/10.17705/1jais.00663
Syam, N., & Sharma, A. (2018). Waiting for a sales renaissance in the fourth industrial revolution: Machine learning and artificial intelligence in sales research and
practice. Industrial Marketing Management, 69, 135–146. https://doi.org/10.1016/j.indmarman.2017.12.019
Tegmark, M. (2017). Life 3.0. Vintage.
Trochim, W., Donnelly, J., & Arora, K. (2016). Research methods: The essential knowledge base (2nd ed.). Cengage Learning.
Umair, A., Conboy, K., & Whelan, E. (2019). Understanding the influence of technostress on workers' job satisfaction in gig-economy: An exploratory investigation. In ECIS 2019 Proceedings. https://aisel.aisnet.org/ecis2019_rip/34
Van den Broek, E., Sergeeva, A., & Huysman, M. (2021). When the machine meets the expert: An ethnography of developing AI for hiring. MIS Quarterly, 45(3),
1557–1580.
Van Maanen, J. (1979). Reclaiming qualitative methods for organizational research: A preface. Administrative Science Quarterly, 24(4), 520–526.
Van Nimwegen, C. C., Burgos, D., van Oostendorp, H. H., & Schilf, H. H. J. M. (2006). The paradox of the assisted user: Guidance can be counterproductive. In CHI’06
(pp. 917–926). https://doi.org/10.1145/1124772.1124908
Waardenburg, L., & Huysman, M. (2022). From coexistence to co-creation: Blurring boundaries in the age of AI. Information and Organization, 32(4), Article 100432.
https://doi.org/10.1016/j.infoandorg.2022.100432
Wilson, H. J., & Daugherty, P. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, July–August 2018, 114–123.
Xu, Y., Shieh, C. H., van Esch, P., & Ling, I. L. (2020). AI customer service: Task complexity, problem-solving ability, and usage intention. Australasian Marketing
Journal, 28(4), 189–199.
Zanzotto, F. M. (2019). Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research, 64, 243–252.
Zoltners, A. A., Sinha, P., Sahay, D., Shastri, A., & Lorimer, S. (2021). Practical insights for sales force digitalization success. Journal of Personal Selling and Sales
Management, 41(7), 1–16. https://doi.org/10.1080/08853134.2021.1908144
