
applied sciences
Article
Continuance Use of Cloud Computing in Higher
Education Institutions: A Conceptual Model
Yousef A. M. Qasem *, Rusli Abdullah *, Yusmadi Yah and Rodziah Atan
Department of Software Engineering and Information System, Faculty of Computer Science and Information
Technology, Universiti Putra Malaysia, Serdang 43400, Malaysia; yusmadi@upm.edu.my (Y.Y.);
rodziah@upm.edu.my (R.A.)
* Correspondence: y.alsharaei@gmail.com (Y.A.M.Q.); rusli@upm.edu.my (R.A.)

Received: 29 July 2020; Accepted: 1 September 2020; Published: 23 September 2020 

Abstract: Resource optimization is a key concern for Higher Education Institutions (HEIs).
Cloud Computing, as the recent generation in computing technology of the fourth industrial
revolution, has emerged as the main standard of service and resource delivery. As cloud computing
has grown into a mature technology and is being rapidly adopted in many HEIs across the world,
retaining customers of this innovative technology has become a challenge to the cloud service
providers. Current research trends on cloud computing have sought to study the acceptance or
adoption of technology; however, little research has been devoted to the continuance use in an
organizational setting. To address this gap, this study aims to investigate the antecedents of cloud
computing continuance use in HEIs. Hence, drawing on the prior literature in organizational-level
continuance, this research established a conceptual model that extends and contextualizes the IS
continuance model through the lens of the TOE framework (i.e., technological, organizational,
and environmental influences). The results of a pilot study, conducted through a survey with
information and communications technology (ICT) decision makers, and based on the proposed
conceptual model, indicate that the instrument is both reliable and valid, and so point the way
towards further research. The paper closes with a discussion of the research limitations, contribution,
and future directions.

Keywords: cloud computing; post adoption; continuance use; IS continuance; educational technologies; higher education institutions

1. Introduction
Cloud computing (CC) is increasingly becoming a springboard for digital innovation and
organizational agility. Higher education institutions (HEIs) face problems with rising participant numbers, the growing need for IT and infrastructure, the quality of educational provision, and the affordability of education services [1,2]. With the high rate at which IT changes, resource management
optimization is a key concern for HEIs [3], not least because on-premise systems can only operate
effectively when they receive adequate initial funding and resources, as well as dedicated and
systematic maintenance regimes [4,5]. Institutions looking to compete in the new world need a flexible
yet comprehensive digital transformation blueprint that integrates various technologies across the
institution, with CC at its foundation. CC, as the current generation of computing technology in the fourth industrial revolution (IR 4.0), has emerged as the main standard of service and resource delivery [6], and it has become an excellent alternative for HEIs to support cost reduction, quality improvement and, through this, educational sustainability [7] by providing the required infrastructure, software, and storage as a service [3]. Thus, CC has been adopted rapidly in both private and public
organizations, including HEIs [3,8,9]. Cloud computing is now the fifth most frequently used utility after gas, electricity, water, and telephone lines [10], and the CC tracking poll from the International Data Corporation indicates that spending on CC will reach USD 370bn by 2022, corresponding to a 22.5% five-year compound annual growth rate [11].
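As a quick sanity check on these figures, the implied base-year market size can be recovered from the projection (a back-of-envelope sketch; the reading of 22.5% as a five-year compound annual growth rate ending in 2022 is our assumption, not stated in the source):

    # Back-of-envelope check of the IDC projection cited above.
    # Assumption (ours): 22.5% is a 5-year compound annual growth rate ending in 2022.
    target_2022 = 370e9   # USD, projected CC spending
    cagr = 0.225          # 22.5% compound annual growth
    years = 5
    implied_base = target_2022 / (1 + cagr) ** years
    print(f"Implied base-year spending: USD {implied_base / 1e9:.0f}bn")  # ~USD 134bn

A market of roughly USD 134bn more than doubling to USD 370bn over five years is consistent with the rapid-adoption narrative in this paragraph.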
However, while the subscription model of cloud services contributes to the growth of the
overall market and makes it accessible for HEIs, a new set of challenges has arisen. This research
addresses one such challenge facing cloud service providers: the possibility that a client will decide
to discontinue a cloud service provider, a risk exacerbated by the low cost of switching between
applications [12] and by competitive markets in general [4]. Therefore, the conceptualization of CC
service in HEIs shifts to a decision on ‘continuance’, rather than ‘adoption’.
Moreover, in subscription models offered via cloud-based education systems, it is possible for HEIs
to switch vendors if they perceive greater benefits elsewhere. Thus, it is essential to understand the
conceptual differences between adoption and continuance [13]. Hence, research on CC continuance has
practical and artifact-specific motivations. Furthermore, theoretical research on organizational-level
continuance is also scarce [14], particularly in HEIs [9,15]. Typically, continuance research has been
undertaken at the individual user level; however, organizational continuance decisions are often made
by senior IS executives or others in the organization who may not be intense users of the service in
question [14]. For many of these executive decision makers, a strong influence may be attributed
to factors that are insignificant for individual users (e.g., lowering organizational costs or shifting a
strategic goal) [16].
Thus, to contribute to the body of knowledge on the organizational-level continuance, this study
draws on the prior literature in organizational-level continuance to establish a conceptual model
that extends and contextualizes the IS continuance model to improve our understanding of the
determinants of CC continuance use in HEIs through the lens of the TOE framework (i.e., environmental,
organizational, and technological influences). Following the establishment of the conceptual model [15],
the model was validated by conducting a pilot study with senior decision makers, all of whom were
asked about aspects of their organizations relating to CC services. Therefore, the question this study
sought to address was the following: “What constructs influence the organizational-level continuance of CC
in HEIs?” To address this question, our research relies on a positivist quantitative-empirical research
design. In this research, the unit of analysis (i.e., a basic element of observation indicating who or what
the researcher has generalized) [17] is the organization, and the organization-level phenomenon will be
observed by individuals involved in organizational CC subscription decisions at an organization [17].
In general, we contribute to research in several ways. First, the most important contribution
of this research is to the body of knowledge within the IS field surrounding the continuance
phenomenon. In practical settings, many organizations are concerned with reducing capital IT expenses [3,18,19]
and IS services allow client organizations to select from various services that they can continue using
or discontinue using [8,9]. Therefore, conducting research that focuses on the continuance of IS will
play a critical role in theory and practice. Second, we develop a conceptual model that provides a clear
perspective through which HEIs can answer the question related to their use of CC services: “should
we go, or should we stay?” Third, the results of the full-scale research will assist IT decision makers
in CC when seeking to optimize institutional resource utilization, or to commission and market CC
projects. As a case in point, the results can serve as guidelines that cloud service providers will use
to focus their efforts towards retaining customers. These can also be leveraged by clients to guide
routine assessments over whether the use of a specific CC service should be discontinued. Fourth,
the study is expected to contribute to developing the literature in the best available organizational-level
continuance models for HEI settings. Last, in providing a model for CC continuance use, we provide
a new explanation for organizations’ continuance use of novel technologies. Measuring the model
constructs not only reflectively but also formatively adds to the practical contribution of the
study. Thus, further quantitative and qualitative research on the conceptualized model and its
relationships is needed.
The structure of the rest of this study is as follows: a literature review is given in the next section.
Theoretical models are then identified and analyzed, and a conceptual model for exploring continuance of
CC in HEIs is proposed. In turn, the method is explained, and the preliminary results of the study
are presented. Finally, the study’s results are discussed, their implications are examined, and the
contributions of the research are outlined.

2. Background and Related Work


As a term, CC is defined diversely in the literature. In this paper, the definition of CC used
is comparable to NIST’s [6], which regards CC as the set of aspects that are common across all CC
services. Hence, from the perspective of this paper, CC relates to the applications and shared services
involved in the surveyed institutions through subscription-based models, whereby shared data servers
or application activities are accessed.
In HEIs, CC has been identified as a transformative technological development [3]. This is because
CC benefits from rapid IT implementation, especially for research, which compares favorably when
considered against legacy software systems. Additionally, CC solutions can be exploited to assist in
implementing socially oriented theories of learning, as well as cooperative learning [20]. CC resources
can be used to create e-learning platforms, infrastructure, and educational services through the
centralized provision of data storage, virtualization, and other facilities [21]. With these considerations
in mind, CC, for certain HEIs, is essential, and many institutions rely on the technology to reduce
costs, remain competitive, and satisfy learner and teacher requirements [22]. The accessibility and
transparency of CC services mean that HEIs can utilize existing knowledge to their mutual benefit [23].
To examine CC use in HEIs, a systematic literature review (SLR) was undertaken. The following
electronic databases were included in the literature search: Web of Science, IEEE Xplore, ScienceDirect,
Scopus, ACM Digital Library, Emerald, and Springer [9]. Additionally, the following search terms
were entered into each of these databases: (cloud OR “cloud computing”) AND (adoption OR usage)
AND (education OR teaching OR learning). The literature search revealed that many studies had
been published in this area, and that the rate of publication had been increasing for the past few years.
IS researchers have tended to investigate CC use in HEIs from the perspective of individuals [7,24–49]
or from the perspective of organizations [50–65].
For the last three decades, many researchers have sought to evaluate the success of information
systems (IS). Theoretical and practical contributions have been discussed, many of which indicate
that the factors underpinning IS success are multidimensional. Furthermore, several studies have
contributed to CC continuance use in highly different settings [14,66–76]. Nonetheless, the factors that
drive institutions to continue or end a CC subscription have yet to be clarified [14,16,66], specifically
in HEIs [9]. Given that almost all CC service models rely on subscriptions [6], this is an unexpected
finding. Therefore, the purpose of the current research was to assess the main factors that HEIs consider
when deciding whether to continue their use of CC services. In Table 1, evidence pertaining to the
continuance use of CC is synthesized.

Life Cycle of an Information System

This study belongs to the well-established stream of literature that has examined the phenomenon of “technology adoption”, which was initially defined by Rogers [81] as a five-step process. Since then, a range of models has been developed to extend Rogers’ preliminary work [82–84]. According to some researchers, adoption is a multi-phase process rather than a binary decision [85–88], and for some theoreticians, adoption occurs over seven rather than five stages [89,90]. However, most researchers agree that technology adoption operates across the following five stages: awareness, interest, evaluation, trial, and continuance. Several researchers have advocated a four-phase model involving initiation, adoption, decision, and implementation (e.g., [81,91,92]). However, certain studies have concentrated their empirical understanding of technology adoption into a single phase (e.g., adoption or pre-/post-adoption) [14]. This perspective is consistent with the wide-ranging stages of technology adoption that have been investigated in previous studies [93–95] (see Figure 1). Certain studies have reported that many of the methods available in technology adoption research are limited because they do not differentiate between changes in the significance of factors across the different phases of adoption [96]. Therefore, opportunities, as well as suitable research settings for exploring continuance as the last phase of adoption, have been limited.

Table 1. Literature on CC Continuance.

[Table 1 classifies this research and prior studies ([14], [72], [75], [76] *, [71] **, [77], [45] *, [78], [79], [70], [80], [49]) by level of analysis, adoption phase, theoretical perspective, and type, with the following column totals: 4 IND, 10 ORG, 2 PRE, 13 POST, 5 ISC, 6 ISS, 3 ISD, 2 TOE, 3 OTH, 12 EMP, 1 THEO.]

Legend: IND = Individual; ORG = Organizational; PRE = Pre-Adoption; POST = Post-Adoption; ISC = Information System Continuance; ISS = IS Success Model; ISD = IS Discontinuance Model; TOE = Technology–Organization–Environment Framework; OTH = Others; EMP = Empirical; THEO = Theoretical/Conceptual. * Study examines adopters’ and non-adopters’ intention to increase the level of sourcing; thus, it is categorized as adoption. ** Study examines adopters’ intention at individual and organizational levels; thus, it is categorized as an individual.

As Table 1 shows, previous studies have adapted various IS theories and empirically analyzed CC in different contexts from an individual or organizational viewpoint in the pre-adoption or post-adoption (i.e., continuance use) phase. However, no empirical study was found measuring the continuance use of CC in HEIs. Therefore, the main contribution of this study is to develop an instrument and conceptualize a model to measure the continuance use of CC in the context of HEIs. Besides this context-specific contribution, our study also narrows the gap in organizational IS continuance research.

Figure 1. Life cycle of an IS [95].

A range of theories has been used to study adoption (e.g., UTAUT or TAM), continuance (e.g., ISC), and discontinuance at the individual level. Contrastingly, scholars have used theories such as TOE, DOI, and social contagion to examine adoption from an organizational perspective. Additionally, the ISS and ISD models have been leveraged to investigate continuance and discontinuance, respectively, at the organizational level. Table 2 provides an overview of the various theoretical approaches that have been used in the literature to examine the life cycle of an IS. Dissimilar to studies that have focused on the individual level, those that have addressed continuance and discontinuance at the organizational level are few and far between [9,14,16,97].

Table 2. Life Cycle of an IS with Different Theoretical Approaches.

Life Cycle Phases | Adoption | Usage | Termination
User/organization transformation | Intent to adopt | Continuance usage intention | Discontinuance usage intention
End-user state | No user | User | Ex-user
Individual level-based theories | TAM [98] and UTAUT model [99] | ECT [100], which has taken shape in the ISC model [101] |
Organizational level-based theories | TOE framework [102], DOI [103], and social contagion [104] | ISS model [105] | ISD model [16]

Legend: IS = Information System, TAM = Technology Acceptance Model, UTAUT = Unified Theory of Acceptance and Use of Technology, ECT = Expectation Confirmation Theory, ISC = Information System Continuance model, ISS = Information System Success model, ISD = Information System Discontinuance model, TOE = Technology–Organization–Environment Framework, DOI = Diffusion of Innovation theory, ITMAP = Information Technology Post Adoption Model.

After initially adopting an IS, a user decides to either continue or discontinue using it.
By contrast, an organization is unlikely to simply retire or replace its on-premise services [106]. Nevertheless,
since almost all CC services involve a subscription model, and since this study’s focal point is the issue
of continuance, the aim of this study is not to examine factors that influence the use of CC services in
HEIs. Instead, the main area of focus in the current study is the set of constructs that contribute to IS
continuance. An implication of this is that it is possible to evaluate success and system performance,
which is dissimilar to the pre-adoption phase in which only expectations can be used to estimate usage.
This also enables the integration of post-adoption variables as predictors of IS adoption continuance,
thereby exerting far-reaching impacts on model development.

3. Theoretical and Conceptual Background


In this section, we present the theoretical and conceptual background, focusing on the prior
literature in organizational-level continuance to extend and contextualize the IS continuance model to
improve our understanding of the determinants of CC continuance use in HEIs.

3.1. IS Continuance Model


The IS Continuance (ISC) model [101] was developed based on expectation confirmation theory
(ECT) [100], a theory that has been used widely in the field of marketing to examine the impact of user
satisfaction on a user’s intention to continue their adoption of a technology [107,108]. As Figure 2
indicates, IS continuance behavior in the ISC model is informed by post-consumption variables, namely,
perceived usefulness and satisfaction. Bhattacherjee [101] applied several theoretical changes to
fine-tune ECT theory into the ISC model.
The first of these changes relates to the pre-consumption antecedents of confirmation, namely perceived performance and expectation. Specifically, both antecedents were removed from the model, and the researcher’s rationale for doing so was that their influences are addressed in other constructs (specifically, confirmation and satisfaction). The second change relates to the addition of an ex-post expectation variable, namely perceived usefulness. Notably, ex-post expectation is critical in IS services and products, principally because expectations tend not to remain stable over time. Consistent with previous studies on initial IS use [98,109], the study conducted by Bhattacherjee [101] reported that perceived usefulness may continuously influence subsequent IS continuance decisions. As a result, perceived usefulness was considered a novel determinant of satisfaction. The third change is that, regarding the usefulness–intention connection originally developed in TAM [98], the ISC model suggests that this may exist not only in the original use context but also in the continuance context. This is linked to the fact that continuance intention in humans can be attributed to a sequence of adoption decisions that are not related to factors such as timing or the behavioral stage [110]. Hence, perceived usefulness should have a direct impact on IS continuance intention as well as an indirect effect on IS continuance intention via satisfaction. In the literature, the ISC model has primarily been employed to examine continuance use from an individual perspective. However, the model has been extended for organizational post-adoption studies (e.g., [72]); consequently, it is associated with a substantial level of external validity. Besides, the ISC model has been used to explain the continuance phenomenon in the education context (e.g., [111–113]).

The focal point of the ISC model is an individual’s continued acceptance of technology, whereas the purpose of this study is to address organizational continuance use. However, several researchers have extended the model for the organizational post-adoption context (e.g., [72]). As suggested by the TOE model, a critical point of difference between organizational and individual continuance use settings is that the technology adoption decisions made by organizations are typically informed both by technology factors associated with individual beliefs (e.g., satisfaction) and by organizational factors such as external threats and opportunities. Therefore, to fine-tune the ISC model to address the research problem, it is necessary to supplement it with factors from the organizational continuance context, especially those relating to organizational and environmental settings [72]. Given that researchers need to choose a theory or model that is appropriate for the research setting (in this case, continuance use at the organizational level), this study substitutes the perceived usefulness construct of the ISC model based on logical reasoning. Perceived usefulness is generally considered the most relevant technological factor that informs IS post-adoption behavior in the ISC model, and a range of studies, including [72,114], have employed it as a baseline model. Nevertheless, the theory of planned behavior (TPB) [115] suggests that net benefits ought to be viewed as a behavioral belief (that is, perceived usefulness) [116]. For this reason, the net benefit construct taken from the IS success model is used instead of perceived usefulness, thereby achieving an effective fit with the research setting.

Figure 2. Expectation confirmation model of IS continuance [101].
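Summarizing Figure 2, the ISC model’s paths can be written as a minimal linear structural sketch (the coefficient symbols are ours, not from the source):

    \mathrm{PU}  = \gamma_{1}\,\mathrm{CONF} + \varepsilon_{1}
    \mathrm{SAT} = \beta_{1}\,\mathrm{CONF} + \beta_{2}\,\mathrm{PU} + \varepsilon_{2}
    \mathrm{INT} = \beta_{3}\,\mathrm{PU} + \beta_{4}\,\mathrm{SAT} + \varepsilon_{3}

Here CONF is confirmation, PU is ex-post perceived usefulness, SAT is satisfaction, and INT is IS continuance intention; the third equation captures both the direct usefulness–intention path and the indirect path via satisfaction discussed above.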
3.2. IS Success Model
According to Gable, Sedera [67], the positive outcomes arising from IS are the ultimate “acid test”, and so the question that should be addressed is that of whether the IS has been beneficial for the organization [117–119]. Other questions of interest are whether the IS is worth retaining, whether it requires modification, and the impacts that it will deliver in the future. Hence, a continuance decision relating to IS can be regarded as something that is informed by IS success at several levels [14].

Numerous studies have been conducted on IS success (ISS), and the ISS model [105], as well as a revised version of this model [120], have been used extensively in the literature to explore the issue [121]. The use of the ISS model [105] in this study stems from the following considerations: firstly, the model has been used in various research settings [14,67,72,122,123]; secondly, it is easy to communicate the results of the model due to the systematic nature of the included dimensions; and thirdly, the model is not narrowly applied as a framework for measuring success, and so it has a high level of external validity.

The success dimensions chosen by a researcher must be based on the research setting and research problem; therefore, in this study, two dimensions are removed based on logical reasoning: namely, use and service quality. In terms of the use construct, it has been critiqued in the literature for several reasons [67,124–127], particularly in terms of the way it performs an intermediate function that lies between quality and impacts, and as such does not operate as a measure of success [128]. Additionally, the system use construct of IS success has been identified as unsuitable in previous studies of system performance [14]. Regarding the service quality construct, this was removed for
the following reasons: firstly, it is comparable to the holistic idea of IS success in several ways,
where system and quality constitute a “functional” aspect, while the effects constitute the “technical”
aspect of the “operational” IS (in this case, the system is considered a set of services); and secondly,
assessing the service provider’s services involves only a narrow perspective of service quality.
In CC service research, an assessment of this kind would be an antecedent rather than a measure.
Given these considerations, several constructs, namely information quality, system quality, and net
benefits, are integrated into the conceptual model of this study.

3.3. IS Discontinuance Model


Additional factors that influence organizational persistence, particularly in the context of CC use in
HEIs, were identified in this study for an explanation of CC continuance use in the organization-level
IS post-adoption context. This resulted in the inclusion of system investment as an organizational
construct, as well as technical integration as a technological construct [16]. Each of these constructs
relates to the commitment that an organization has to a subscription-based technology (e.g., CC) [14].
Regarding technical integration, this study sought to examine whether the features of a technology
impacted continuance decisions. Specifically, in view of the principal objective of CC, the study
investigated the degree to which HEIs would benefit from the sophistication provided by the features
of CC services. However, the positive impacts of the features of the technology are not assured in many
cases. For example, certain organizations may not have the expertise needed to exploit the complexity
of the technology [129–131], meaning that they cannot integrate it into the overall functioning of
the firm. As a result of this, the organization would be compelled to discontinue their use of CC
services [14,78].
In terms of the system investment variable, this—as a source of behavioral persistence—has often
been referred to as a “sunk cost” in the literature [132]. Among managerial personnel, it is common to
invest continually in an area despite reasonable evidence for not doing so. The notion of a sunk cost
becomes relevant when the cost of acquisition can be regarded as a capital expenditure (CapEx). System
investment studies have assessed the role played by CapEx in the formulation of computer software
prices in the context of switching between software solutions [133], as well as the impact on succeeding
decisions in terms of IS outsourcing [134]. In the case of CC, system investment is a noteworthy
variable because of the technology’s low barriers to entry and minimal overhead costs [135]. In view of
this, it is not unreasonable to view CC services as any other utility that can be turned on and off at
will [135,136]. However, it is critical to recognize that many CC services are typically associated with
significant implementation costs. A key implication of this is that system investment is fundamental in
CC continuance in HEIs.
Regarding competitive pressure, this refers to the pressure that an institution’s leadership
may feel regarding the performance-related abilities that its competitors are gaining through the
exploitation of CC services (e.g., an increase in student assessment outcomes due to the use of CC
platforms) [72,137,138].

3.4. TOE Framework


In the context of the TOE framework [102], it is possible to divide the constructs determining
behavior related to CC continuance into the following contextual areas: firstly, organizational context;
secondly, technology context; and finally, environmental context. However, it is notable that the
framework itself does not include information about these constructs. In terms of the effect of the
technology context on CC adoption behavior, this refers to the set of technology-related factors that feed
into an organization’s decision to adopt an innovative IS [139]. As for the organizational context, this is
concerned with the way in which various factors affect IS adoption behavior. These factors include
available resources, opportunities for collaboration, profile characteristics, peer influence, internal
communication, organizational culture, formal and informal linking structures, human resources
quality, firm size and scope, and the internal social network. Finally, the environmental context
demonstrates that an organization’s IS adoption is significantly impacted by constructs that lie outside
of its direct control (e.g., competitors, government regulations, and supply chains) [102]. In view
of these considerations, it is clear that the TOE framework can play an effective role in identifying
non-technology-level factors that have not been considered in other studies on consumer software
(e.g., constructs relevant to external circumstances) [140]. Additionally, the TOE framework helpfully
interprets the notion of adoption behavior based on the following technological innovations: firstly,
innovations applied for technical tasks (i.e., type 1 innovations); secondly, innovations relating to the
business administration (i.e., type 2 innovations); and thirdly, innovations integrated into core business
procedures (i.e., type 3 innovations) [141].
Along with the technological and organizational variables of continuance use, which were
derived from models of IS continuance, discontinuance, and success, constructs were identified that
affect organizational and environmental persistence, especially insofar as they relate to CC in HEIs.
As a result of this process, collaboration was identified as an organizational variable [42,142–144],
while regulatory policy [145–147] and competitive pressure [72,145] were identified as environmental
variables. Collaboration tasks lie at the heart of HEIs, and collaboration can be conceptualized as the
ability of CC services to facilitate communication among stakeholders [42,142]. In the case of digital
natives, CC services play a vital role in effective collaboration [148,149]. Table 3 presents a mapping
matrix for the continuance use constructs and theories, each of which has been obtained from the
extant and related literature [150,151].

Table 3. Mapping matrix of model constructs from ISC, ISS, ISD, and TOE.

[Table 3 maps each source study’s theory/model and dependent variable onto the ten constructs of the proposed model (confirmation, satisfaction, net benefits, system quality, information quality, technical integration, system investment, collaboration, regulatory policy, and competitive pressure), grouped under the technology, organization, and environment contexts. The theory/model, dependent variable, and source of each study are listed below.]

Theory/Model | Technology/Dependent Variable | Source
ISD | Organizational-level information system discontinuance intentions | [16]
ISC | Information system continuance | [101]
ISS | Information system success | [105,120]
ECM & TOE | Enterprise 2.0 post-adoption | [72]
TAM | Continuance intention to use CC | [75]
ISC & OTH | Disruptive technology continuous adoption intentions | [76]
ISS | CC evaluation | [77]
ISS & ISD | Cloud-based enterprise systems | [78]
ISC | SaaS-based collaboration tools | [45]
ISC | CC client–provider relationship | [70]
ISC | Operational cloud enterprise system | [152]
OTH | Usage and adoption of CC by SMEs | [143]
TOE | Knowledge management systems diffusion | [153]
TCT | Information technology adoption behavior life cycle | [154]
ISC | Wearable continuance | [155]

Legend: TOE = Technology–Organization–Environment Framework; ISC = Information System Continuance Model; ISS = IS Success Model; ISD = IS Discontinuance Model; TAM = Technology Acceptance Model; TCT = Technology Continuance Theory; OTH = Others.

4. Research Model and Hypotheses


A robust theory of organizational-level continuance of CC is yet to be developed [14]. Therefore,
based on the theoretical and conceptual background outlined previously, this research used a method
that complements and contextualizes existing constructs in the IS continuance model through the
lens of the TOE framework. We extended the IS continuance model [101] using constructs from
dominant models in organizational-level IS post-adoption research, namely the IS success
model [105,120] (i.e., net benefits, system quality, and information quality) and the IS discontinuance
model [16] (i.e., technical integration, system investment, and competitive pressure). To keep our
research model coherent and relevant, we identified additional contextual constructs from the literature
to predict continuance use of CC in the educational context (i.e., collaboration and regulatory
policy). To structure our model, we took a technological–organizational–environmental approach
by applying the lens of the TOE framework [102] to our research model (i.e., Technology context:
net benefits, system quality, information quality, and technical integration; Organizational context:
system investment, and collaboration; and Environmental context: regulatory policy, and competitive
pressure). We also formulated related hypotheses to clarify our research agenda, emphasize research
areas that need further investigation, and acquire requisite knowledge on CC continuance use. Figure 3
provides an overview of the original IS continuance model and this study’s proposed extensions.
The model is grounded at the organizational level of analysis [156], and the smallest unit of analysis is an individual CC.

Based on a positivist, deterministic philosophical paradigm, a priori assumptions in the form of hypotheses were established for later statistical analysis to facilitate model validation. The propositions focus on the links between the independent variables drawn from the IS continuance model, IS success model, IS discontinuance model, and TOE framework, and the dependent variable, namely CC continuance use.

Figure 3. Research Model.

The relationships among perceived usefulness, confirmation, continuance intention, and satisfaction, as noted by Bhattacherjee [101] in the context of system acceptance, are relevant for investigating CC continuance in HEIs. In this study, perceived usefulness was substituted by net benefits, which is a cognitive belief relevant to IS use [157]. In the context of TAM, perceived usefulness is considered a user’s belief about system usefulness [98], whereas in the organizational context, net benefit is considered a belief about the degree to which an IS promotes organizational objectives. This definition is aligned with other organizational-level definitions [157,158]. We thus propose the following hypotheses:

Hypothesis 1 (H1). An institution’s satisfaction level with initial CC adoption positively influences its CC continuance use.

Hypothesis 2a (H2a). An institution’s extent of confirmation positively influences its satisfaction with CC use.

Hypothesis 2b (H2b). An institution’s net benefits from CC use positively influence its satisfaction with CC use.

Hypothesis 3a (H3a). An institution’s net benefits from CC use positively influence its CC continuance use.

Hypothesis 3b (H3b). An institution’s extent of confirmation positively influences its net benefits from CC use.

The relationships among system quality, information quality, and continuance intention [120] in the context of IS success can also be applied to CC continuance use in HEIs. Prior studies have examined the relationships among system quality, information quality, and satisfaction [77,159–165]. Hence, it follows:

Hypothesis 4a (H4a). System quality positively influences an institution’s satisfaction with CC use.

Hypothesis 4b (H4b). System quality positively influences an institution’s CC continuance use.

Hypothesis 5a (H5a). Information quality positively influences an institution’s satisfaction with CC use.

Hypothesis 5b (H5b). Information quality positively influences an institution’s CC continuance use.

The relationships among technical integration, system investment, and discontinuance intention
[16] can also be applied to CC continuance use in HEIs. Thus, we propose:

Hypothesis 6 (H6). Technical integration positively influences an institution’s CC continuance use.

Hypothesis 7 (H7). System investment positively influences an institution’s CC continuance use.

Presently, the success of HEIs depends in large part on effective collaboration. This is noteworthy
because, by leveraging CC, it is possible for HEIs to exploit new modes of communication between
key stakeholders [42,142]. Digital natives, many of whom are students within HEIs, now require
the Internet to undertake daily tasks [148,149], and also to participate in online group activities (e.g.,
socializing, group studying, and so on) [166]. In order to satisfy student requirements, it is necessary
for practitioners within HEIs to understand the various ways in which knowledge and content can be
delivered to them [167]. In view of this, it is important to know what types of expectations students
have, and to understand how technology can be leveraged or incorporated into teaching activities to
meet these expectations. Hence, it is not unreasonable to suggest that the competitiveness of an HEI
depends on its utilization of novel technology to satisfy student needs, and to enable streamlined
collaboration and communication [168]. Taking the context into account, we thus predict:

Hypothesis 8 (H8). The collaboration characteristics of CC services positively influence an institution’s CC continuance use.

Regulatory policy is another critical consideration that is likely to affect an organization’s decision
to use, or to continue using, a technology. One of the reasons for this is that the regulatory policies
established by a government play a key role in setting laws relating to the use of certain technologies
(e.g., CC) [147,169,170]. For example, the authors of [145–147] discussed how regulatory policies
have shaped adoption trends in CC in various research settings. Taking the context into account,
we therefore hypothesized:

Hypothesis 9 (H9). Regulatory policy positively influences an institution’s CC continuance use.

Competitive pressure refers to the pressure that an institution’s leadership may feel regarding the
performance-related abilities that its competitors are gaining through the exploitation of CC services
(e.g., an increase in student assessment outcomes due to the use of CC platforms) [72,137,138]. In the
literature, several scholars have noted that competitive pressure plays a determining role in influencing
CC use in multiple research settings [72,153,170–173]. Taking the context into account, we thus predict:

Hypothesis 10 (H10). Competitive pressure positively impacts an institution’s CC continuance use.

The research model will be used to examine CC continuance use at the organizational level in
HEIs. Nevertheless, institutions are a key element of the CC ecosystem, in which diverse sets of
actors are involved (e.g., government agencies, public organizations, and researchers). The proposed
model can be referenced by CC actors in HEIs as a basis for cooperating with stakeholders, which is a
prerequisite for the creation and provision of improved products and services.
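To make the hypothesized structure concrete, the ten hypotheses above can be summarized as a simple path list (a non-authoritative sketch; the variable names and data-structure layout are ours):

    # Hypothesized paths of the research model (H1-H10).
    # Each tuple: (predictor, outcome, hypothesis label).
    HYPOTHESES = [
        ("satisfaction",          "continuance_use", "H1"),
        ("confirmation",          "satisfaction",    "H2a"),
        ("net_benefits",          "satisfaction",    "H2b"),
        ("net_benefits",          "continuance_use", "H3a"),
        ("confirmation",          "net_benefits",    "H3b"),
        ("system_quality",        "satisfaction",    "H4a"),
        ("system_quality",        "continuance_use", "H4b"),
        ("information_quality",   "satisfaction",    "H5a"),
        ("information_quality",   "continuance_use", "H5b"),
        ("technical_integration", "continuance_use", "H6"),
        ("system_investment",     "continuance_use", "H7"),
        ("collaboration",         "continuance_use", "H8"),
        ("regulatory_policy",     "continuance_use", "H9"),
        ("competitive_pressure",  "continuance_use", "H10"),
    ]

    # TOE grouping of the independent variables, as stated in Section 4.
    TOE_CONTEXT = {
        "technology":   ["net_benefits", "system_quality",
                         "information_quality", "technical_integration"],
        "organization": ["system_investment", "collaboration"],
        "environment":  ["regulatory_policy", "competitive_pressure"],
    }

This layout makes explicit that satisfaction and net benefits act as both outcomes and predictors, while all remaining TOE constructs point directly at continuance use.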
5. Methodology

A positivist quantitative survey approach is warranted to address the research objectives
illustrated through the research questions and the hypotheses. Instrument development, data collection
and data analysis mechanisms are discussed in detail below.

5.1. Research Design


This research relies on the positivist philosophical paradigm, as well as the collection and
analysis of quantitative data. The rationale for this decision stems from the way in which the
approach permits a cost-effective and timely research process [174]. Furthermore, a natural way to
address the study’s research questions and test the hypotheses involved using a quantitative survey,
since this yielded a direct approach to comparing dependent and independent variables. According to
Creswell and Creswell [174], a non-experimental correlational research design with quantitative data
makes it possible to describe relationships between variables.
Since this study relies on the theoretical foundation of continuance, established guidelines were
followed to develop the research instrument (i.e., item formulation, scale development, and instrument
testing) [175,176]. Following the model development, a pool of survey items was derived from the
literature; then, content validity was tested by examining the extent to which every item reflected its
nominated construct. Additionally, consistent with recommendations reported by Kelley [177] and
McKenzie, Wood [178], the expert review evaluation process was undertaken to establish measurement
representativeness, clarity, and comprehensiveness. Afterwards, a pilot test was undertaken to assess
the validity and reliability of the research instrument. Instrument development and data collection
were the two main methodological activities undertaken in this research, and these will be described
later in this manuscript.
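The paper does not specify at this point which statistics were used in the pilot assessment; as one common reliability check for reflective scales, Cronbach’s alpha could be computed as follows (a minimal sketch with hypothetical data, not the authors’ actual analysis):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) matrix of one reflective scale."""
        k = items.shape[1]                            # number of items in the scale
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
        return (k / (k - 1)) * (1.0 - item_vars / total_var)

    # Hypothetical pilot data: 34 respondents x 4 items on a 5-point Likert scale.
    # (Random placeholder values; real pilot items would be expected to yield alpha >= 0.7.)
    rng = np.random.default_rng(0)
    sat_items = rng.integers(1, 6, size=(34, 4)).astype(float)
    print(f"alpha = {cronbach_alpha(sat_items):.2f}")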

5.2. Instrument Development


Survey research is an essential and complex process used to ensure research objectives are met [179].
Therefore, designing and selecting the correct survey instrument is fundamental, as it must answer
the questions of what is to be measured and how it is to be measured—addressed here through
construct validity and construct reliability, respectively [179,180].
In this study, both reflective and formative measures were used to test the research model, as shown
in Table 4. Formative measurements were taken of net benefits, information quality, and system quality,
mainly because formative measurement gives rise to actionable and specific concept attributes [181],
which is especially interesting from a practical viewpoint. In the context of formative measurements,
a single indicator’s weight is used to draw practical insights about the criticality of certain details,
thereby generating information that guides practical enforcement in terms of the system characteristics
(e.g., “overall system quality is high” (reflective) vs. “system is easy to use” (formative)). Dissimilar
to the formative constructs (e.g., system quality, net benefits, and information quality), the purpose
of which is to assess an information system’s success, the reflective constructs can draw on previously
validated measures. Hence, the measurement of these constructs involved well-validated reflective
scales [16]. In the case of the formative instrument, this was developed based on the guidelines of
Moore and Benbasat [175], combined with recent processes in scale development [182–184].
For the formative measures, the objective was to achieve mutual exclusivity and parsimony, and to
identify one measure that would be the most appropriate for inclusion in the model. As a case in point,
parsimony and accuracy are critical considerations for all measures in a formative model, particularly
since every dimension and measure is essential. Consequently, there should only be a small level of
overlap, and no unnecessary measures or dimensions should be present. Such attention is considered
vital in selecting the tentative measures.
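The measurement-theoretic distinction behind these choices can be stated compactly (a standard psychometric sketch, not taken from the source; symbols are ours):

    % Reflective: indicators x_i are caused by the latent construct \xi and should co-vary
    x_{i} = \lambda_{i}\,\xi + \varepsilon_{i}, \quad i = 1,\dots,k

    % Formative: the construct is a weighted composite of its indicators,
    % so each weight w_i is separately interpretable (hence "actionable" attributes)
    \xi = \sum_{i=1}^{k} w_{i}\,x_{i} + \delta

This is why, as noted above, dropping a formative indicator risks changing the construct’s meaning, whereas reflective indicators are interchangeable samples of the same underlying construct.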
It is worth drawing attention to the fact that the questionnaire scales were adapted from the prior
literature (i.e., from well-validated studies on the IS continuance model [101], IS success model [105,120],
IS discontinuance model [16], and TOE framework [102]) (See Appendix A). The instrument’s feasibility,
consistency of style and formatting, readability, and linguistic clarity [185,186], were evaluated in
interviews with academic researchers (n = 2) with experience in questionnaire design. Their feedback
on the general design and measurement scales was requested to improve the usability of the
questionnaire. A content-based literature review approach recommended by Webster and Watson [187]
was used for instrument conceptualization and content specification, in which constructs have been
clearly defined (See Table 3). The next stage involved producing an item pool, the purpose of which
was to represent every aspect of the construct without overlap [183]. Notably, the elimination of a
measure from a formative indicator model risks leaving out a relevant part of the conceptual domain
(or, for that matter, changing a construct’s meaning). This is because the construct is the set of all
indicators [188], and also because maintaining irrelevant items will not have the effect of introducing
bias into the results when examining the data with PLS [181]. In view of this, every dimension that
was identified was retained and changed into an item.

Table 4. Constructs and definitions.

Constructs | Definition | Literature Sources | Previous Studies
Net Benefits (Formative) | Extent to which an information system benefits individuals, groups, or organizations. | [105,116,120] | [14,77,78,152]
System Quality (Formative) | Desirable features of a system (e.g., reliability, timeliness, or ease of use). | [105,116,120] | [14,77,78,152]
Information Quality (Formative) | Desirable features of a system’s output (e.g., format, relevance, or completeness). | [105,116,120] | [14,77,78,152]
Confirmation (Reflective) | Extent to which a user in an HEI feels satisfied when the outcomes are consistent with (or exceed) their expectations or desires, or when the outcomes are inconsistent with or below their expectations or desires. | [101,189,190] | [45,72]
Satisfaction (Reflective) | Psychological state that results when the emotion linked to disconfirmed expectations is paired with the user’s previous attitudes towards the consumption experience. | [101,191] | [45,72,76]
Technical Integration (Reflective) | Extent to which an information system depends on intricate connections with different technological elements. | [16,192] | [14,78]
System Investment (Reflective) | Resources, both financial and otherwise, that the institution has applied to acquire, implement, and use an information system. | [16,193,194] | [14,78]
Collaboration (Reflective) | Extent to which a CC application supports cooperation and collaboration among stakeholders. | [195,196] | [42,142–144]
Regulatory Policy (Reflective) | Extent to which government policy supports, pressures, or protects the continued use of CC applications. | [147,169,170] | [145,146]
Competitive Pressure (Reflective) | Pressure perceived by institutional leadership that industry rivals may have won a significant competitive advantage using CC applications. | [170,171,197] | [72,145]
Continuance Intention (Reflective) | Extent to which organizational decision makers are likely to continue using an information system. | [16,101] | [45,72,76]

The questionnaire comprises three parts: firstly, a preamble; secondly, a demographic section;
and finally, a section on the constructs relating to continuance use of CC in HEIs. For the first section,
we applied the key informant approach [198] in which two questions were used to eliminate participants
from the sample: first, checking for participants whose institutions had not yet adopted CC services;
and second, checking for participants who do not participate in the ICT adoption decision. Participants
eligible for inclusion in the study, they can complete the next two sections of the questionnaire. In the
second section, demographic data about each institution’s age, faculty, student population, years of CC
service adoption, and type of CC service model were gathered. A service provider variable was also

measured based on asking the respondent who their CC service provider was (e.g., Oracle, Microsoft,
Google, Salesforce, and Amazon, among others). Notably, the service providers did not affect the
final dependent variable. In the third section of the questionnaire, each item sought to address an
aspect of the research question, particularly measuring the information of constructs that led towards
an organizational continuance use. These included satisfaction (S), confirmation (Con), net benefits
(NB), technical integration (TE), system quality (SQ), information quality (IQ), system investment (SI),
collaboration (Col), regulatory policy (RP), and competitive pressure (CP). As shown in Appendix A,
a 5-point Likert scale, ranging from “strongly agree” to “strongly disagree”, was used to measure each
item. Additionally, SAT items were measured on a 5-point Likert scale with different options (e.g.,
1 = very dissatisfied to 5 = very satisfied; 1 = very displeased to 5 = very pleased; 1 = very frustrated to
5 = very contented; and 1 = absolutely terrible to 5 = absolutely delighted).
Further, the instrument development process took into consideration the debate surrounding
the practice of gathering perceptual data on both the dependent and independent variables from a
single respondent [199]. In this debate, a central issue is that of whether the practice may result in
excessive common method variance (CMV). Nevertheless, some studies indicate that CMV is a greater
problem for abstract constructs (e.g., attitude) when compared to concrete measures (e.g., those linked
to IS success in this research) [200]. Besides, it has been noted in the literature that the constructs
of IS success are not highly susceptible to CMV [201]. Moreover, CMV is not a major concern for
formative constructs because the items do not have to co-vary [14]. Furthermore, in the process of
operationalizing the research instrument, CMV can be further reduced by not grouping the items
from reflective constructs under their associated construct headings [199,200].

5.3. Data Collection


A continuance decision within an organization must be reached unanimously, and so the choice
made in this research to use an individual as a representative of an organization (even inside a team)
is reasonable [14]. Given the adopted survey methodology, individuals reported on organizational
properties. Hence, ensuring that each participant had the requisite authority and knowledge to
contribute data was critical. Therefore, the key informant approach [198] was utilized in this research.
In the introductory section of the questionnaire, participants were informed that the study only sought
to recruit key decision makers within organizations. Furthermore, the participants were asked directly
to withdraw from the study if they were not directly involved in their institution’s decision to continue
using CC services.
For data analysis, descriptive statistics were used to examine data from the first and second
sections of the questionnaire. Survey respondents were recruited through both online and offline
distribution channels. An invitation with reminders was sent from the institutional email to 50 people
online; 34 completed and submitted the questionnaire, a response rate of 68%. Of the 12 surveys
handed over face to face, only four were completed and returned successfully, a response rate of 33.3%.
This indicates that the face-to-face method consumes more time and effort than the online approach.
For most scholars, a pilot study sample size of 20–40 is reasonable [202–206], and so our pilot study’s
reliability statistic was based on 38 completed questionnaires.

5.4. Data Analysis


Cronbach’s Alpha and composite reliability (CR) tests were used to measure instrument reliability.
Each test was undertaken using the Statistical Package for the Social Sciences (SPSS). For the purpose of
validating the measurement and structural model, structural equation modelling (SEM) was applied
to the pilot data with SmartPLS 3.0 [207]. A variance-based technique was used to analyze the
structural model, and this decision was made for several reasons: firstly, the partial least squares (PLS)
method is effective for small-to-moderately sized samples, and it provides parameter estimates even
at reduced sample sizes [208,209]; secondly, PLS is viable for exploratory research [210], particularly

when examining new structural paths in the context of incremental studies that extend previous
models [211], or when the relationships and measures proposed are new or have not been extensively
examined in the prior literature [212,213]; and thirdly, the variance-based approach in PLS is effective
for predictive applications. Therefore, since the study’s objective was to identify the factors underlying
organizational-level CC continuance use (i.e., not to examine a particular behavioral model), PLS was
a suitable choice [214].
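To make the reliability computation concrete, the following is a minimal Python sketch of Cronbach’s Alpha; the function and sample data are illustrative assumptions, not output from SPSS or SmartPLS:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical pilot data: 38 respondents answering a 3-item reflective construct.
rng = np.random.default_rng(1)
base = rng.integers(2, 6, size=(38, 1))                        # shared "true" level
responses = np.clip(base + rng.integers(-1, 2, size=(38, 3)), 1, 5)
print(round(cronbach_alpha(responses), 3))
```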

5.5. Prototype Development and Evaluation

Drawing on the literature and theories of the IS continuance model, IS success model,
IS discontinuance model, and TOE framework, the present study proposes a continuance use
measurement prototype (i.e., a system/application) to validate the research model.
In this phase, a prototype is developed, and an evaluation is conducted through a survey. The main
purpose of the prototype development is to apply and validate the proposed model through the
evaluation of the level of match between the model and the prototype in the real world. The prototype
developed for the present study is in accordance with the processes proposed by Sommerville [215].
The processes involved in this phase are shown in Figure 4 and detailed below.

Figure 4. Prototype Development Processes [215].

5.5.1. Establish Prototype Objectives

The objectives of developing the prototype are: (i) to assist cloud service providers and ICT
decision makers at HEIs in evaluating the continuance use of CC services in HEIs; and (ii) to provide a
guideline on the vital requirements needed to ensure successful use of the CC service in HEIs.

5.5.2. Define Prototype Functionality

The fundamental criterion to quantify the success of a software system is the extent to which it
pleases its customers [48]. A software/system requirements specification (SRS) is an explanation of a
proposed software system that meets different kinds of stakeholder needs. The SRS distinguishes two
main kinds of system requirements: functional and non-functional. The functions relevant for the
development of the prototype are derived from the literature. Accordingly, the specific features that
are significant to the proposed prototype should be listed and elaborated when full implementation is
carried out.
5.5.3. Develop Prototype

The prototype will be built using the React JS web application framework, used together with
Hyper Text Markup Language (HTML), Cascading Style Sheets (CSS), jQuery, a Node.js web server,
and a MySQL database. A flow chart, a use case diagram, the content and navigation structure,
as well as a data dictionary, will be designed to build the prototype.

5.5.4. Evaluate Prototype

This research will implement a user acceptance test to validate the proposed model by evaluating
the overall usability and acceptability of the prototype. A structured survey will be conducted at the
end of the development process once the prototype is fully developed. The acceptance test is based
on the Perceived Usefulness and Ease of Use (PUEU) instrument by Davis [98], which is grounded
in the Technology Acceptance Model (TAM) [216,217]. Perceived usefulness and perceived ease
of use are hypothesized to be fundamental determinants of user acceptance and system use [98,218].
A sample of respondents consisting of ICT decision makers at HEIs will be selected to participate in
this survey.
According to Faulkner [219], studies that evaluate a prototype of a novel user interface design
reveal severe errors quickly and, therefore, often require fewer participants. The literature suggests
that 3 to 20 participants provide valid results [220]. PUEU consists of 12 questions rated on a 7-point
scale from unlikely (1) to likely (7). To analyze the PUEU test, descriptive analysis (mean, standard error,
median, mode) using SPSS will be carried out. The questions of the survey are divided into two parts:
the first part covers demographics, while the second covers the prototype’s perceived usefulness and
ease of use. The PUEU test has been widely implemented to explain overall user acceptance of an
information technology (e.g., [221–226]).
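As an illustration of the planned descriptive analysis, a minimal Python sketch computing the mean, standard error, median, and mode for the 12 PUEU items follows; the data are randomly generated placeholders rather than study results, and SPSS remains the analysis tool of record:

```python
import numpy as np
import pandas as pd

# Placeholder PUEU responses: 12 items on a 7-point scale (1 = unlikely, 7 = likely).
rng = np.random.default_rng(42)
pueu = pd.DataFrame(rng.integers(1, 8, size=(20, 12)),
                    columns=[f"PUEU{i}" for i in range(1, 13)])

summary = pd.DataFrame({
    "mean": pueu.mean(),
    "std_error": pueu.sem(),       # standard error of the mean
    "median": pueu.median(),
    "mode": pueu.mode().iloc[0],   # first mode per item
})
print(summary.round(2))
```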

6. Preliminary Results

A pilot study was conducted to increase the consistency of the measures used throughout the
research. Notably, the main objective of a pilot study is to ensure the validation of the initial instrument
and to identify any inconsistencies that could undermine the accuracy of the results [227].

6.1. Validity and Reliability of the Survey Instrument


Face validation, content validation, and a pilot study were undertaken to ensure the validity and
reliability of the survey instrument. Face validity refers to the procedure of assessing an instrument
based on feasibility, consistency of style and formatting, readability, and linguistic clarity [185,186].
In this study, face validity was evaluated in interviews with academic researchers (n = 2) with
experience in questionnaire design. Content validity refers to the extent to which a survey instrument
measures what it is intended to measure [177,178,227], and an expert panel was assembled to test it in
this study. Although three experts is the minimum for a content validity panel [228,229], five were
recruited for the present study (three academics involved in IS and CC, as well as two industry
practitioners). Online and offline distribution channels were used to circulate the questionnaire,
and the items were rated on the 4-point scale suggested by Davis [230]. A textbox was provided for
every question, thereby ensuring that the participants had enough free space in which to provide
comments. The participants identified areas for potential changes (e.g., “re-word this sentence”),
and, to quantify the relevance of every item, the average congruency percentage (ACP) [231] was
computed. Notably, the threshold value of ACP recommended by DeVon, Block [186] (i.e., 90%)
was achieved. In this regard, it is worth noting that the questionnaire’s items were adapted from the
prior literature of the adopted theoretical models (i.e., the IS continuance model [101], IS success
model [105,120], IS discontinuance model [16], and TOE framework [102]).
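For illustration, a minimal Python sketch of how an ACP of this kind can be computed, assuming each expert rates every item on Davis’s 4-point relevance scale and an item counts as congruent when rated 3 or 4; the ratings below are hypothetical, not the panel’s actual judgements:

```python
import numpy as np

# Hypothetical relevance ratings: rows = experts (n = 5), columns = items,
# on a 4-point scale (1 = not relevant ... 4 = highly relevant).
ratings = np.array([
    [4, 3, 4, 4, 3],
    [3, 4, 4, 3, 4],
    [4, 4, 3, 4, 4],
    [4, 3, 4, 4, 4],
    [3, 4, 4, 4, 3],
])

congruent = ratings >= 3                    # item judged relevant by that expert
expert_pct = congruent.mean(axis=1) * 100   # congruency percentage per expert
acp = expert_pct.mean()                     # average congruency percentage
print(f"ACP = {acp:.1f}%")                  # compare against the 90% threshold
```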
Reliability testing involved measuring each construct’s attributes using the Cronbach’s Alpha
reliability statistic, chosen owing to its widely accepted status in the literature as a measure of internal
reliability (i.e., the ability of an object of measurement to yield the same results on independent
occasions). Several studies place the minimum threshold for Cronbach’s Alpha at 0.7 or above,
while values below 0.6 are taken to indicate a lack of reliability [232]. Cronbach’s Alpha values
are given for every construct, along with inter-item correlation values, in Table 5. According to
Briggs and Cheek [233], inter-item correlations in a reflective measurement scale offer data pertaining
to the scale’s dimensionality. Furthermore, the mean inter-item correlation is distinct from a reliability
estimate because it is not affected by scale length; as a result, it provides a less ambiguous sense of
item homogeneity. A mean inter-item correlation below 0.3 suggests that an item is not strongly
correlated with the others in the same construct [234]. At the same time, inter-item correlations that
exceed 0.9 are indicative of multicollinearity issues [235]. Cronbach’s Alpha values exceeding 0.7
suggest favorable internal consistency among the items on the scale [236]. Specifically, Cronbach’s
Alpha values exceeding 0.9 are considered “excellent”, while values exceeding 0.8 and 0.7 are
considered “good” and “acceptable”, respectively [237]. Values exceeding 0.6 are considered
“questionable”, while values lower than 0.5 are considered “unacceptable” [237]. In this study,
only one of the inter-item correlations fell below the recommended value of 0.3, so the implications of
eliminating the corresponding item from the survey instrument were considered. Although its
correlations with the other items measuring the confirmation construct were weak, the resulting
Cronbach’s Alpha would not have increased after its deletion, and the item was therefore retained
(see Table 5).
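The inter-item correlation columns in Table 5 can be derived with a short routine of the following kind; this is a Python sketch, and the construct column names in the usage comment are hypothetical:

```python
import numpy as np
import pandas as pd

def inter_item_correlations(items: pd.DataFrame):
    """Min, max, and mean pairwise correlation among one construct's items."""
    corr = items.corr().to_numpy()
    pairs = corr[np.tril_indices_from(corr, k=-1)]  # unique off-diagonal pairs
    return pairs.min(), pairs.max(), pairs.mean()

# e.g., screening a construct against the 0.3 guideline:
# lo, hi, mean = inter_item_correlations(df[["CON1", "CON2", "CON3", "CON4", "CON5"]])
```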

Table 5. Reliability Statistics.

Construct No. Items Min Inter-Item Correlation Max Inter-Item Correlation Cronbach’s Alpha
CC Continuance Use 3 0.656 0.828 0.907
Satisfaction 4 0.684 0.81 0.916
Confirmation 5 0.124 0.795 0.78
Net Benefit 13 - - 0.916
Technical Integration 3 0.711 0.788 0.891
System Quality 13 - - 0.927
Information Quality 7 - - 0.928
System Investment 3 0.67 0.731 0.836
Collaboration 5 0.504 0.742 0.899
Regulatory Policy 5 0.398 0.821 0.894
Competitive Pressure 4 0.577 0.772 0.86
All Items 65 0.862 0.913

Drawing on partial least squares (PLS) analysis, the validity of the reflective and formative models
was evaluated based on the recommendations of Hair Jr, Hult [208], which are discussed below.

6.2. Reflective Measurement Model Evaluation


Assessment of the reflective measurement model involved taking an estimate of internal
consistency, along with convergent and discriminant validity (see Table 6). The reliability of the
instrument was acceptable, with reflective factor loadings in excess of 0.505 (i.e., greater than the
recommended level of 0.5) [209]. CR was also acceptable, where every construct’s value was greater
than 0.852 [238].
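Since AVE and composite reliability are simple functions of the standardized outer loadings, the figures in Table 6 can be re-derived directly; a minimal Python sketch using the CC continuance use loadings reported below, where small rounding differences are expected because the printed loadings are themselves rounded:

```python
import numpy as np

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    return float((loadings ** 2).mean())

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    squared_sum = loadings.sum() ** 2
    error = (1 - loadings ** 2).sum()
    return float(squared_sum / (squared_sum + error))

cccu = np.array([0.896, 0.961, 0.898])  # CC continuance use loadings from Table 6
print(round(ave(cccu), 3), round(composite_reliability(cccu), 3))  # ~0.845 and 0.942
```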

Table 6. Quantitative Assessment of Measurement Model (Reflective).

Cloud Computing Continuance Use (Reflective) Loadings 1 AVE 2 CR 3


CCCU1 0.896 0.845 0.942
CCCU2 0.961
CCCU3 0.898
Confirmation (Reflective) Loadings AVE CR
CON1 0.505 0.544 0.852
CON2 0.622
CON3 0.876
CON4 0.786
CON5 0.832
Satisfaction (Reflective) Loadings AVE CR
SAT1 0.872 0.798 0.941
SAT2 0.887
SAT3 0.917
SAT4 0.897
Technical Integration (Reflective) Loadings AVE CR
TE1 0.923 0.824 0.934
TE2 0.876
TE3 0.924
System Investment (Reflective) Loadings AVE CR
SI1 0.91 0.799 0.923
SI2 0.871
SI3 0.901
Collaboration (Reflective) Loadings AVE CR
COL1 0.88 0.716 0.926
COL2 0.775
COL3 0.835
COL4 0.867
COL5 0.87
Regulatory Policy (Reflective) Loadings AVE CR
RP1 0.809 0.697 0.92
RP2 0.84
RP3 0.859
RP4 0.853
RP5 0.812
Competitive Pressure (Reflective) Loadings AVE CR
CP1 0.776 0.748 0.922
CP2 0.904
CP3 0.878
CP4 0.895
1 All item loadings > 0.5, indicating indicator reliability; 2 all average variance extracted (AVE) values > 0.5,
indicating convergent validity; 3 all composite reliability (CR) values > 0.7, indicating internal consistency.

In the case of convergent validity, this was computed as the average variance extracted (AVE) for
every construct, and it was greater than 0.5 in each case [239]. The square root of each AVE was
greater than the corresponding latent variable correlations, thereby indicating a satisfactory level of
discriminant validity (see Table 7).
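A sketch of the Fornell-Larcker check itself, assuming a latent-variable correlation matrix and a vector of AVEs for the reflective constructs; this is illustrative rather than the SmartPLS implementation:

```python
import numpy as np

def fornell_larcker_holds(lv_corr: np.ndarray, aves: np.ndarray) -> bool:
    """True when sqrt(AVE) of every construct exceeds its absolute
    correlations with all other constructs."""
    root_ave = np.sqrt(aves)
    off_diag = np.abs(lv_corr - np.diag(np.diag(lv_corr)))  # zero the diagonal
    return bool(np.all(root_ave > off_diag.max(axis=1)))
```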

Table 7. Discriminant Validity (Fornell-Larker Criterion).

Latent Construct CCCU COL CP Conf IQ NB RP SI SQ SAT TE


CC Continuance Use 0.919
Collaboration 0.85 0.846
Competitive Pressure −0.524 −0.541 0.865
Confirmation −0.201 −0.277 0.397 0.737
Information Quality −0.615 −0.579 0.381 0.328 formative
Net Benefits 0.719 0.736 −0.592 −0.617 −0.733 formative
Regulatory Policy 0.409 0.472 −0.821 −0.306 −0.378 0.513 0.835
System Investment 0.903 0.807 −0.458 −0.251 −0.578 0.695 0.383 0.894
System Quality 0.838 0.789 −0.593 −0.481 −0.782 0.877 0.438 0.776 formative
Satisfaction 0.473 0.54 −0.468 −0.58 −0.812 0.794 0.462 0.481 0.805 0.894
Technical Integration 0.894 0.739 −0.479 −0.256 −0.696 0.647 0.396 0.809 0.794 0.504 0.908

6.3. Formative Measurement Model Evaluation


The three-step process suggested by Hair Jr, Hult [207] was used to evaluate the formative
measurement model (see Table 8). At the outset, convergent validity was examined, which refers
to the degree to which a given measure is positively correlated with other measures of the same
construct [207]. In order to test convergent validity, redundancy analysis can be performed [240].
In this study, every construct was associated with satisfactory convergent validity, and the path
coefficients ranged from 0.763 to 0.884 (i.e., greater than the recommended level of 0.7) [207]. In the
second step, collinearity issues were addressed by computing variance inflation factors (VIFs) for
every indicator [241], which can identify multicollinearity problems among the measures. Most VIFs
in this study did not exceed the recommended level of 5 [242], but several were unreasonably high.
Nevertheless, because formative measurement models are based on the regression of the formative
construct against its measures, the stability of the measures’ coefficients is not only affected by the
strength of the intercorrelations but also by the sample size. Given that the present process is concerned
with testing the initial model, these indicators will not be considered until full-scale data are used to
test the model.
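A self-contained Python sketch of this VIF screening, regressing each standardized indicator on the remaining indicators of its construct, with VIF_j = 1/(1 - R_j^2); it illustrates the computation rather than reproducing the SmartPLS output:

```python
import numpy as np

def vifs(X: np.ndarray) -> np.ndarray:
    """VIF for each column of an (observations x indicators) matrix."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize indicators
    out = []
    for j in range(Z.shape[1]):
        y, others = Z[:, j], np.delete(Z, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        r2 = 1 - ((y - others @ beta) ** 2).sum() / (y ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)  # values above 5 suggest problematic collinearity
```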
For the final step of the three-step process, indicators are evaluated in terms of their significance
and relevance, specifically by utilizing the initial research model. A range of formative indicators were
not significant at the level of 10%. This is consistent with expectations because, as noted by Cenfetelli
and Bassellier [243], the greater the number of indicators, the higher the likelihood that the indicators
will not be significant. This is because each indicator competes to account for the variance in the
target construct. In their study, Mathieson, Peacock [181] used a range of formative indicators to assess
perceived resources, and four out of seven were identified as insignificant. Additionally, Walther,
Sedera [14] noted that system quality is associated with three significant indicators, while information
quality and net benefits have only one and two significant indicators, respectively. Critically, however,
a lack of significance for an indicator should not be viewed as suggestive of a lack of relevance. If an
indicator’s outer weight is not significant but its outer loading is considerable (i.e., greater than 0.5),
the indicator should be interpreted as absolutely important, though not relatively important [207].
Hair Jr, Hult [207] further note that, if the theory-driven conceptualization of the construct strongly
supports keeping the indicator (e.g., based on the judgements of specialists), then it ought not to be
eliminated in most cases. An additional issue relates to the phenomenon of negative indicator
weights [243]. These should not be interpreted as items that negatively impact the construct; rather,
such indicators should be understood as correlating more strongly with the other indicators of the
same measure than with the construct they measure.

Table 8. Quantitative Assessment of Measurement Model (Formative).

Redundancy Analysis, Assessing Multicollinearity, Significance and Contribution


Net Benefits (formative) VIF t-values Weights Loadings
NB1 3.615 0.604 0.125 0.633
NB2 3.276 1.32 0.313 0.802
NB3 6.695 1.653 −0.489 0.648
NB4 3.098 1.561 0.346 0.71
NB5 2.202 1.618 0.23 0.535
NB6 1.942 0.801 0.134 0.691
NB7 5.617 1.785 0.608 0.876
NB8 3.896 0.632 0.146 0.609
NB9 3.236 0.988 0.214 0.641
NB10 1.51 0.932 0.158 0.45
NB11 6.653 1.149 −0.391 0.721
NB12 2.053 0.231 −0.039 0.594
Net Benefits (Reflective) F2
Redundancy Analysis 0.763
NB13
System Quality (formative) VIF t-values Weights Loadings
SQ1 4.197 0.115 −0.019 0.75
SQ2 3.715 0.854 0.141 0.626
SQ3 2.57 1.397 0.165 0.794
SQ4 2.182 1.222 0.134 0.72
SQ5 5.351 1.203 0.204 0.867
SQ6 5.582 0.199 −0.035 0.726
SQ7 2.615 2.399 0.262 0.771
SQ8 1.3 0.711 −0.065 0.184
SQ9 4.435 1.392 0.239 0.768
SQ10 1.92 0.046 0.005 0.707
SQ11 3.749 1.26 0.156 0.659
SQ12 2.434 0.692 0.075 0.82
System Quality (reflective) F2
Redundancy Analysis 0.784
SQ13
Information Quality (formative) VIF t-values Weights Loadings
IQ1 3.122 0.366 −0.079 0.664
IQ2 3.232 0.995 0.228 0.874
IQ3 2.84 1.569 0.348 0.839
IQ4 3.78 1.787 0.436 0.874
IQ5 4.753 0.838 0.219 0.831
IQ6 2.928 0.017 −0.004 0.727
Information Quality (reflective) F2
Redundancy Analysis 0.884
IQ7

7. Discussion and Conclusions


The goal of this research paper was to propose a conceptual model that extends and contextualizes
the IS continuance model to improve our understanding of the organizational level continuance
of CC in HEIs. Drawing on the prior literature in organizational-level continuance (i.e., IS success
and IS discontinuance models), the proposed research model extends the IS continuance model
with the following constructs: net benefits, system quality, information quality, technical integration,
system investment, collaboration, regulatory policy, and competitive pressure through the lens of
TOE framework. The results of a pilot study, conducted through a survey with ICT decision makers,
and based on the proposed conceptual model, indicate that the instrument is both reliable and valid,
and therefore pave the way for further research.
The pilot study’s results lend valuable support to the model constructs and instrument in assessing
CC continuance use at HEIs. Additionally, analysis with Cronbach’s Alpha indicates that the model
constructs have satisfactory reliability [238], while convergent and discriminant validity for each of

the constructs has been established using AVE and CR tests. Based on the feedback received from
the pilot study’s participants, as well as from the statistical analysis, a revised survey instrument
was devised for the full-scale research. An assessment of the reflective measurement model was
undertaken by computing internal consistency, discriminant validity, and convergent validity. In each
case, the instrument was associated with satisfactory results. Formative measures were evaluated
based on convergent validity, collinearity issues, and significance and relevance, revealing satisfactory
performance in terms of convergent validity. As for collinearity issues, these were examined by
computing the variance inflation factors (VIFs) for every indicator, most of which did not exceed the
maximum threshold of 5 [242]. Certain VIFs were greater than the required value, but because formative
measurement models are based on regression (which is informed both by measure intercorrelations
and sample size), these indicators will be considered when using full-scale data for the model.
Additionally, indicators were examined in terms of significance and relevance by considering the initial
research model.

7.1. Theoretical Contributions


This study adds to the empirical literature within IS, and especially on CC; in addition, it provides
an assessment of CC adoption in HEIs and renewed insight into the intent of decision makers to
continue utilizing CC in HEIs.
One of the main contributions of this research is to the body of knowledge within the IS field
surrounding continuance and discontinuance. This study provides an extensive model that extends
and contextualizes the IS continuance model using three dominant models in organizational-level
continuance (i.e., the IS success model, IS discontinuance model, and TOE framework) to improve our
understanding of the organizational-level continuance of CC in HEIs. Furthermore, the results of the
full-scale research will aid in developing a context-specific model that can be exploited to examine
CC continuance in HEIs, and they will play a valuable role in extending the academic literature on
CC with empirical evidence. In addition, the study will form the basis of a final model that could be
applied to examine continuance with other novel technologies within the sphere of education, as well
as in other sectors. Finally, the study is expected to contribute to developing the literature on the
best-available organizational-level continuance models for HEI settings, as well as other similar domains.

7.2. Practical Implications


From a practical viewpoint, this study provides potential implications for practitioners and
cloud providers. Given that almost all CC service models rely on subscriptions [6], the current research
proposed a conceptual model to assess the main factors that HEIs consider when deciding whether to
continue their use of CC services. In practical settings, many organizations are concerned with
reducing capital IT expenses [3,18,19], and IS services allow client organizations to select from various
services that they can continue using or discontinue using [8,9]. Therefore, research that focuses on
the continuance of IS will play a critical role not only in theory but also in practice. Furthermore,
measuring the model constructs not only reflectively but also formatively adds to the practical
contribution of the study.
This study also provides potential implications for decision makers. The research results will
assist IT decision makers in CC when seeking to optimize institutional resource utilization, or to
commission and market CC projects. As a case in point, the individual weightings associated with the
model’s constructs can serve as guidelines for software vendors to focus their efforts on retaining
customers. These weights can also be leveraged by clients to guide routine assessments of whether
the use of a specific CC service should be discontinued.

7.3. Limitations
This study has some limitations that point to the focus of subsequent research. First, as in similar
organizational studies, the results may be biased by representing individual views rather than a
shared opinion within the HEIs. This can be addressed if the constructs in the full-scale study are
used for longitudinal rather than merely cross-sectional evaluations; client organizations would then
be able to learn about critical “pain points”. Second, the proposed model is intended for
organizational-level usage. However, CC ecosystems involve operationalizing constructs such as
IT infrastructure availability and computer sophistication [244,245]. Future research will have to take
additional perspectives to understand continuance at an organizational level. Finally, the proposed
model contextualized the IS continuance model based on previous literature in organizational-level
continuance to better suit HEIs. However, future research may have to consider further contextual
constructs to understand continuance in HEIs.

7.4. Future Research Directions


The model and instrument development activities covered in this paper are the first step in
a larger initiative that seeks to examine the factors affecting the organizational continuance use of
CC in HEIs. The validated instrument that was piloted in this study will be used to gather data
from a large sample of IT decision makers in Malaysian HEIs. The study will analyze the impact
of every construct in the proposed model on the continuance use of CC, and their significance in
the model will be validated based on a test of the proposed hypotheses, for which SEM and possibly
other analytical approaches (e.g., artificial neural networks) will be used. A final model will be
generated, which is expected to have considerable value for future organizational-level continuance
studies of various technologies (e.g., CC continuance in SMEs and government departments).
Therefore, further quantitative and qualitative research on the conceptualized model and its
relationships is recommended. A prototype should be developed and evaluated to validate the
research model. A user acceptance test using the Perceived Usefulness and Ease of Use (PUEU)
instrument will be conducted to demonstrate the overall feasibility and acceptability of the prototype.
As a final phase of the research design [180], conclusions should be drawn from the final model
through the interpretation of results confirmed by empirical analysis and the evaluation of the
developed prototype.
A world of continually evolving technologies, in which organizations and individuals keep
adopting the latest innovations, requires ongoing assessment of their sustainable and successful use.
Thus, researchers should continually examine these innovations by investigating the continuance
use of future technologies. In this regard, the fourth industrial revolution (IR 4.0) provides a dialectical,
intricate, and intriguing opportunity in all aspects of our lives, through which society could be changed
for the better. As a case in point, education in the IR 4.0 era (Education 4.0) is driven by technologies
such as artificial intelligence (AI), augmented reality (AR), the Internet of things (IoT), big data analysis,
CC, 3D printing, and mobile devices, which can advance teaching, research, and service and shift the
workplace from task-centered to human-centered [48,153,246–251]. Many IR 4.0 technologies have
been adopted and used in many sectors; therefore, further investigations of the continuance use of
those technologies may merit researchers’ attention. Finally, future research on how technological
advancements could leverage learning and teaching processes to achieve sustainability in HEIs
is recommended.

Author Contributions: Conceptualization, Y.A.M.Q.; Data curation, Y.A.M.Q. and R.A. (Rusli Abdullah); Formal
analysis, Y.A.M.Q., R.A. (Rodziah Atan) and Y.Y.; Methodology, Y.A.M.Q., R.A. (Rusli Abdullah) and Y.Y.;
Project administration, R.A. (Rusli Abdullah), Y.Y. and R.A. (Rodziah Atan); Resources, R.A. (Rusli Abdullah),
Y.Y. and R.A. (Rodziah Atan); Supervision, R.A. (Rusli Abdullah), Y.Y. and R.A. (Rodziah Atan); Validation,
Y.A.M.Q. and R.A. (Rodziah Atan); Writing—original draft, Y.A.M.Q.; Writing—review & editing, Y.A.M.Q.
and R.A. (Rusli Abdullah). All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by Research Management Center (RMC), Universiti Putra Malaysia (UPM),
UPM Journal Publication Fund (9001103).
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. Constructs and their Measurement Items.

CC Continuance Intention (Reflective; items adapted from [101]; previous studies: [45,72,76]; theories: ECM & ISD)
(1 = Strongly Disagree to 7 = Strongly Agree)
CCA1: Our institution intends to continue using the cloud computing service rather than discontinue it.
CCA2: Our institution’s intention is to continue using the cloud computing service rather than use any other means (traditional software).
CCA3: If we could, our institution would like to discontinue the use of the cloud computing service. (reverse coded)

Satisfaction (SAT) (Reflective; items adapted from [101]; previous studies: [45,72,76]; theory: ECM)
How do you feel about your overall experience with your current cloud computing service (SaaS, IaaS, or PaaS)?
SAT1: Very dissatisfied (1)–Very satisfied (7)
SAT2: Very displeased (1)–Very pleased (7)
SAT3: Very frustrated (1)–Very contented (7)
SAT4: Absolutely terrible (1)–Absolutely delighted (7)

Confirmation (Con) (Reflective; items adapted from [101]; previous studies: [45,72]; theory: ECM)
(1 = Strongly Disagree to 7 = Strongly Agree)
CON1: Our experience with using cloud computing services was better than what we expected.
CON2: The benefits of using cloud computing services were better than we expected.
CON3: The functionalities provided by cloud computing services for team projects were better than what we expected.
CON4: Cloud computing services support our institution more than expected.
CON5: Overall, most of our expectations from using cloud computing services were confirmed.

Net Benefits (NB) (Formative; items adapted from [105,120], NB13 from [116]; previous studies: [14,77,78,152]; theory: ISS)
Our cloud computing service . . .
NB1: . . . increases the productivity of end-users.
NB2: . . . increases the overall productivity of the institution.
NB3: . . . enables individual users to make better decisions.
NB4: . . . helps to save IT-related costs.
NB5: . . . makes it easier to plan the IT costs of the institution.
NB6: . . . enhances our strategic flexibility.
NB7: . . . enhances the ability of the institution to innovate.
NB8: . . . enhances the mobility of the institution’s employees.
NB9: . . . improves the quality of the institution’s business processes.
NB10: . . . shifts the risks of IT failures from the institution to the provider.
NB11: . . . lowers the IT staff requirements within the institution to keep the system running.
NB12: . . . improves the outcomes/outputs of the institution.
NB13: . . . has brought significant benefits to the institution.

Technical Integration (TE) (Reflective; items adapted from [16]; previous studies: [14,78]; theory: ISD)
TI1: The technical characteristics of the cloud computing service make it complex.
TI2: The cloud computing service depends on a sophisticated integration of technology components.
TI3: There is considerable technical complexity underlying the cloud computing service.

System Quality (SQ) (Formative; items adapted from [105,120], SQ13 from [116]; previous studies: [14,77,78,152]; theory: ISS)
Our cloud computing service . . .
SQ1: . . . operates reliably and stably.
SQ2: . . . can be flexibly adjusted to new demands or conditions.
SQ3: . . . effectively integrates data from different areas of the institution.
SQ4: . . . makes information easy to access (accessibility).
SQ5: . . . is easy to use.
SQ6: . . . provides information in a timely fashion (response time).
SQ7: . . . provides key features and functionalities that meet the institution’s requirements.
SQ8: . . . is secure.
SQ9: . . . is easy to learn.
SQ10: . . . meets different user requirements within the institution.
SQ11: . . . is easy to upgrade from an older to a newer version.
SQ12: . . . is easy to customize (after implementation, e.g., the user interface).
SQ13: Overall, our cloud computing system is of high quality.

Information Quality (IQ) (Formative; items adapted from [105,120], IQ7 from [116]; previous studies: [14,77,78,152]; theory: ISS)
Our cloud computing service . . .
IQ1: . . . provides a complete set of information.
IQ2: . . . produces correct information.
IQ3: . . . provides information which is well formatted.
IQ4: . . . provides me with the most recent information.
IQ5: . . . produces relevant information with limited unnecessary elements.
IQ6: . . . produces information which is easy to understand.
IQ7: In general, our cloud computing service provides our institution with high-quality information.

System Investment (SI) (Reflective; items adapted from [16]; previous studies: [14,78]; theory: ISD)
SI1: Significant organizational resources have been invested in our cloud computing service.
SI2: We have committed considerable time and money to the implementation and operation of the cloud-based system.
SI3: The financial investments that have been made in the cloud-based system are substantial.

Collaboration (Col) (Reflective; items adapted from [195,196]; previous studies: [42,142–144]; theory: TOE)
Col1: Interaction of our institution with employees, industry, and other institutions is easy with the continued use of the cloud computing service.
Col2: Collaboration between our institution and industry increases with the continued use of the cloud computing service.
Col3: The continued use of the cloud computing service improves collaboration among institutions.
Col4: If our institution continues using the cloud computing service, it can communicate with its partners (institutions and industry).
Col5: Communication with the institution’s partners (institutions and industry) is enhanced by the continued use of the cloud computing service.

Regulatory Policy (RP) (Reflective; items adapted from [172,252,253]; previous studies: [145–147]; theory: TOE)
RP1: Our institution is under pressure from some government agencies to continue using the cloud computing service.
RP2: The government is providing us with incentives to continue using the cloud computing service.
RP3: The government is active in setting up the facilities that enable the cloud computing service.
RP4: The laws and regulations that exist nowadays are sufficient to protect the use of the cloud computing service.
RP5: There is legal protection in the use of the cloud computing service.

Competitive Pressure (CP) (Reflective; items adapted from [170,171,197]; previous studies: [72,145]; theory: TOE)
CP1: Our institution thinks that the continued use of the cloud computing service has an influence on competition among institutions.
CP2: Our institution will lose students to competitors if it does not keep using the cloud computing service.
CP3: Our institution is under pressure from competitors to continue using the cloud computing service.
CP4: Some of our competitors have been using the cloud computing service.

References
1. Alexander, B. Social networking in higher education. In The Tower and the Cloud; EDUCAUSE: Louisville, CO,
USA, 2008; pp. 197–201.
2. Katz, N. The Tower and the Cloud: Higher Education in the Age of Cloud Computing; EDUCAUSE: Louisville, CO,
USA, 2008; Volume 9.
3. Sultan, N. Cloud computing for education: A new dawn? Int. J. Inf. Manag. 2010, 30, 109–116. [CrossRef]
4. Son, I.; Lee, D.; Lee, J.-N.; Chang, Y.B. Market perception on cloud computing initiatives in organizations:
An extended resource-based view. Inf. Manag. 2014, 51, 653–669. [CrossRef]
5. Salim, S.A.; Sedera, D.; Sawang, S.; Alarifi, A.H.E.; Atapattu, M. Moving from Evaluation to Trial: How do
SMEs Start Adopting Cloud ERP? Australas. J. Inf. Syst. 2015, 19. [CrossRef]
6. Mell, P.; Grance, T. The NIST Definition of Cloud Computing; U.S. Department of Commerce, National Institute
of Standards and Technology: Gaithersburg, MD, USA, 2011.
7. González-Martínez, J.A.; Bote-Lorenzo, M.L.; Gómez-Sánchez, E.; Cano-Parra, R. Cloud computing and
education: A state-of-the-art survey. Comput. Educ. 2015, 80, 132–151. [CrossRef]
8. Qasem, Y.A.M.; Abdullah, R.; Jusoh, Y.Y.; Atan, R.; Asadi, S. Cloud Computing Adoption in Higher Education
Institutions: A Systematic Review. IEEE Access 2019, 7, 63722–63744. [CrossRef]
9. Rodríguez Monroy, C.; Almarcha Arias, G.C.; Núñez Guerrero, Y. The new cloud computing paradigm:
The way to IT seen as a utility. Lat. Am. Caribb. J. Eng. Educ. 2012, 6, 24–31.
10. IDC. IDC Forecasts Worldwide Public Cloud Services Spending. 2019. Available online: https://www.idc.
com/getdoc.jsp?containerId=prUS44891519 (accessed on 28 July 2020).
11. Hsu, P.-F.; Ray, S.; Li-Hsieh, Y.-Y. Examining cloud computing adoption intention, pricing mechanism,
and deployment model. Int. J. Inf. Manag. 2014, 34, 474–488. [CrossRef]
12. Dubey, A.; Wagle, D. Delivering Software as a Service. The McKinsey Quarterly, 6 May 2007.
13. Walther, S.; Sedera, D.; Urbach, N.; Eymann, T.; Otto, B.; Sarker, S. Should We Stay, or Should We Go?
Analyzing Continuance of Cloud Enterprise Systems. J. Inf. Technol. Theory Appl. 2018, 19, 4.
14. Qasem, Y.A.; Abdullah, R.; Jusoh, Y.Y.; Atan, R. Conceptualizing a model for Continuance Use of Cloud
Computing in Higher Education Institutions. In Proceedings of the AMCIS 2020 TREOs, Salt Lake City, UT,
USA, 10–14 August 2020; p. 30.
15. Furneaux, B.; Wade, M.R. An exploration of organizational level information systems discontinuance
intentions. MIS Q. 2011, 35, 573–598. [CrossRef]
16. Long, K. Unit of Analysis. Encyclopedia of Social Science Research Methods; SAGE Publications, Inc.: Los Angeles,
CA, USA, 2004.
17. Berman, S.J.; Kesterson-Townes, L.; Marshall, A.; Srivathsa, R. How cloud computing enables process and
business model innovation. Strategy Leadersh. 2012, 40, 27–35. [CrossRef]
18. Stahl, E.; Duijvestijn, L.; Fernandes, A.; Isom, P.; Jewell, D.; Jowett, M.; Stockslager, T. Performance Implications
of Cloud Computing; Red Paper: New York, NY, USA, 2012.
19. Thorsteinsson, G.; Page, T.; Niculescu, A. Using virtual reality for developing design communication.
Stud. Inform. Control 2010, 19, 93–106. [CrossRef]
20. Pocatilu, P.; Alecu, F.; Vetrici, M. Using cloud computing for E-learning systems. In Proceedings of the 8th
WSEAS International Conference on Data networks, Communications, Computers, Baltimore, MD, USA,
7–9 November 2009; pp. 54–59.
21. Sasikala, S.; Prema, S. Massive centralized cloud computing (MCCC) exploration in higher education.
Adv. Comput. Sci. Technol. 2011, 3, 111.
22. García-Peñalvo, F.J.; Johnson, M.; Alves, G.R.; Minović, M.; Conde-González, M.Á. Informal learning
recognition through a cloud ecosystem. Future Gener. Comput. Syst. 2014, 32, 282–294. [CrossRef]
23. Nguyen, T.D.; Nguyen, T.M.; Pham, Q.T.; Misra, S. Acceptance and Use of E-Learning Based on Cloud
Computing: The Role of Consumer Innovativeness. In Computational Science and Its Applications—Iccsa 2014;
Pt, V.; Murgante, B., Misra, S., Rocha, A., Torre, C., Rocha, J.G., Falcao, M.I., Taniar, D., Apduhan, B.O.,
Gervasi, O., Eds.; Springer: Cham, Switzerland, 2014; pp. 159–174.
24. Pinheiro, P.; Aparicio, M.; Costa, C. Adoption of cloud computing systems. In Proceedings of the
International Conference on Information Systems and Design of Communication—ISDOC ’14, Lisbon,
Portugal, 16 May 2014; ACM: Lisbon, Portugal, 2014; pp. 127–131.

25. Behrend, T.S.; Wiebe, E.N.; London, J.E.; Johnson, E.C. Cloud computing adoption and usage in community
colleges. Behav. Inf. Technol. 2011, 30, 231–240. [CrossRef]
26. Almazroi, A.A.; Shen, H.F.; Teoh, K.K.; Babar, M.A. Cloud for e-Learning: Determinants of its Adoption by
University Students in a Developing Country. In Proceedings of the 2016 IEEE 13th International Conference
on E-Business Engineering (Icebe), Macau, China, 4–6 November 2016; pp. 71–78.
27. Meske, C.; Stieglitz, S.; Vogl, R.; Rudolph, D.; Oksuz, A. Cloud Storage Services in Higher Educatio-Results
of a Preliminary Study in the Context of the Sync&Share-Project in Germany. In Learning and Collaboration
Technologies: Designing and Developing Novel Learning Experiences; Pt, I.; Zaphiris, P., Ioannou, A., Eds.;
Springer: Berlin/Heidelberg, Germany, 2014; pp. 161–171.
28. Khatib, M.M.E.; Opulencia, M.J.C. The Effects of Cloud Computing (IaaS) on E- Libraries in United Arab
Emirates. Procedia Econ. Financ. 2015, 23, 1354–1357. [CrossRef]
29. Arpaci, I.; Kilicer, K.; Bardakci, S. Effects of security and privacy concerns on educational use of cloud
services. Comput. Hum. Behav. 2015, 45, 93–98. [CrossRef]
30. Park, S.C.; Ryoo, S.Y. An empirical investigation of end-users’ switching toward cloud computing: A two
factor theory perspective. Comput. Hum. Behav. 2013, 29, 160–170. [CrossRef]
31. Riaz, S.; Muhammad, J. An Evaluation of Public Cloud Adoption for Higher Education: A case study from
Pakistan. In Proceedings of the 2015 International Symposium on Mathematical Sciences and Computing
Research (ISMSC), Ipoh, Malaysia, 19–20 May 2015; pp. 208–213.
32. Militaru, G.; Purcărea, A.A.; Negoiţă, O.D.; Niculescu, A. Examining Cloud Computing Adoption Intention
in Higher Education: Exploratory Study. In Exploring Services Science; Borangiu, T., Dragoicea, M., Novoa, H.,
Eds.; 2016; pp. 732–741.
33. Wu, W.W.; Lan, L.W.; Lee, Y.T. Factors hindering acceptance of using cloud services in university: A case
study. Electron. Libr. 2013, 31, 84–98. [CrossRef]
34. Gurung, R.K.; Alsadoon, A.; Prasad, P.W.C.; Elchouemi, A. Impacts of Mobile Cloud Learning (MCL) on
Blended Flexible Learning (BFL). In Proceedings of the 2016 International Conference on Information and
Digital Technologies (IDT), Rzeszow, Poland, 5–7 July 2016; pp. 108–114.
35. Bhatiasevi, V.; Naglis, M. Investigating the structural relationship for the determinants of cloud computing
adoption in education. Educ. Inf. Technol. 2015, 21, 1197–1223. [CrossRef]
36. Yeh, C.-H.; Hsu, C.-C. The Learning Effect of Students’ Cognitive Styles in Using Cloud Technology.
In Advances in Web-Based Learning—ICWL 2013 Workshops; Chiu, D.K.W., Wang, M., Popescu, E., Li, Q., Lau, R.,
Shih, T.K., Yang, C.-S., Sampson, D.G., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 155–163.
37. Stantchev, V.; Colomo-Palacios, R.; Soto-Acosta, P.; Misra, S. Learning management systems and cloud file
hosting services: A study on students’ acceptance. Comput. Hum. Behav. 2014, 31, 612–619. [CrossRef]
38. Yuvaraj, M. Perception of cloud computing in developing countries. Libr. Rev. 2016, 65, 33–51. [CrossRef]
39. Sharma, S.K.; Al-Badi, A.H.; Govindaluri, S.M.; Al-Kharusi, M.H. Predicting motivators of cloud computing
adoption: A developing country perspective. Comput. Hum. Behav. 2016, 62, 61–69. [CrossRef]
40. Kankaew, V.; Wannapiroon, P. System Analysis of Virtual Team in Cloud Computing to Enhance Teamwork
Skills of Undergraduate Students. Procedia Soc. Behav. Sci. 2015, 174, 4096–4102. [CrossRef]
41. Yadegaridehkordi, E.; Iahad, N.A.; Ahmad, N. Task-Technology Fit and User Adoption of Cloud-based
Collaborative Learning Technologies. In Proceedings of the 2014 International Conference on Computer and
Information Sciences (ICCOINS), Kuala Lumpur, Malaysia, 3–5 June 2014.
42. Arpaci, I. Understanding and predicting students’ intention to use mobile cloud storage services. Comput.
Hum. Behav. 2016, 58, 150–157. [CrossRef]
43. Shiau, W.-L.; Chau, P.Y.K. Understanding behavioral intention to use a cloud computing classroom: A multiple
model comparison approach. Inf. Manag. 2016, 53, 355–365. [CrossRef]
44. Tan, X.; Kim, Y. User acceptance of SaaS-based collaboration tools: A case of Google Docs. J. Enterp. Inf. Manag.
2015, 28, 423–442. [CrossRef]
45. Atchariyachanvanich, K.; Siripujaka, N.; Jaiwong, N. What Makes University Students Use Cloud-based
E-Learning?: Case Study of KMITL Students. In Proceedings of the 2014 International Conference on
Information Society (I-Society 2014), London, UK, 10–12 November 2014; pp. 112–116.
46. Vaquero, L.M. EduCloud: PaaS versus IaaS Cloud Usage for an Advanced Computer Science Course.
IEEE Trans. Educ. 2011, 54, 590–598. [CrossRef]

47. Ashtari, S.; Eydgahi, A. Student Perceptions of Cloud Computing Effectiveness in Higher Education.
In Proceedings of the 2015 IEEE 18th International Conference on Computational Science and Engineering
(CSE), Porto, Portugal, 21–23 October 2015; pp. 184–191.
48. Qasem, Y.A.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Cloud-Based Education As a Service (CEAAS) System
Requirements Specification Model of Higher Education Institutions in Industrial Revolution 4.0. Int. J. Recent
Technol. Eng. 2019, 8. [CrossRef]
49. Huang, Y.-M. The factors that predispose students to continuously use cloud services: Social and technological
perspectives. Comput. Educ. 2016, 97, 86–96. [CrossRef]
50. Tashkandi, A.N.; Al-Jabri, I.M. Cloud computing adoption by higher education institutions in Saudi Arabia:
An exploratory study. Clust. Comput. 2015, 18, 1527–1537. [CrossRef]
51. Tashkandi, A.; Al-Jabri, I. Cloud Computing Adoption by Higher Education Institutions in Saudi Arabia:
Analysis Based on TOE. In Proceedings of the 2015 International Conference on Cloud Computing, ICCC,
Riyadh, Saudi Arabia, 26–29 April 2015.
52. Dahiru, A.A.; Bass, J.M.; Allison, I.K. Cloud computing adoption in sub-Saharan Africa: An analysis using
institutions and capabilities. In Proceedings of the International Conference on Information Society, i-Society
2014, London, UK, 10–12 November 2014; pp. 98–103.
53. Shakeabubakor, A.A.; Sundararajan, E.; Hamdan, A.R. Cloud Computing Services and Applications to
Improve Productivity of University Researchers. Int. J. Inf. Electron. Eng. 2015, 5, 153. [CrossRef]
54. Md Kassim, S.S.; Salleh, M.; Zainal, A. Cloud Computing: A General User’s Perception and Security
Awareness in Malaysian Polytechnic. In Pattern Analysis, Intelligent Security and the Internet of Things;
Abraham, A., Muda, A.K., Choo, Y.-H., Eds.; Springer International Publishing: Cham, Switzerland, 2015;
pp. 131–140.
55. Sabi, H.M.; Uzoka, F.-M.E.; Langmia, K.; Njeh, F.N. Conceptualizing a model for adoption of cloud computing
in education. Int. J. Inf. Manag. 2016, 36, 183–191. [CrossRef]
56. Sabi, H.M.; Uzoka, F.-M.E.; Langmia, K.; Njeh, F.N.; Tsuma, C.K. A cross-country model of contextual
factors impacting cloud computing adoption at universities in sub-Saharan Africa. Inf. Syst. Front. 2017, 20,
1381–1404. [CrossRef]
57. Yuvaraj, M. Determining factors for the adoption of cloud computing in developing countries. Bottom Line
2016, 29, 259–272. [CrossRef]
58. Surya, G.S.F.; Surendro, K. E-Readiness Framework for Cloud Computing Adoption in Higher Education.
In Proceedings of the 2014 International Conference of Advanced Informatics: Concept, Theory and
Application (ICAICTA), Bandung, Indonesia, 20–21 August 2014; pp. 278–282.
59. Alharthi, A.; Alassafi, M.O.; Walters, R.J.; Wills, G.B. An exploratory study for investigating the critical
success factors for cloud migration in the Saudi Arabian higher education context. Telemat. Inform. 2017, 34,
664–678. [CrossRef]
60. Mokhtar, S.A.; Al-Sharafi, A.; Ali, S.H.S.; Aborujilah, A. Organizational Factors in the Adoption of Cloud
Computing in E-learning. In Proceedings of the 3rd International Conference on Advanced Computer
Science Applications and Technologies (ACSAT), Amman, Jordan, 29–30 December 2014; pp. 188–191.
61. Lal, P. Organizational learning management systems: Time to move learning to the cloud! Dev. Learn. Organ. Int. J.
2015, 29, 13–15. [CrossRef]
62. Yuvaraj, M. Problems and prospects of implementing cloud computing in university libraries. Libr. Rev.
2015, 64, 567–582. [CrossRef]
63. Koch, F.; Assunção, M.D.; Cardonha, C.; Netto, M.A.S. Optimising resource costs of cloud computing for
education. Future Gener. Comput. Syst. 2016, 55, 473–479. [CrossRef]
64. Qasem, Y.A.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Towards Developing A Cloud-Based Education As A Service
(CEAAS) Model For Cloud Computing Adoption in Higher Education Institutions. Complexity 2018, 6, 7.
[CrossRef]
65. Qasem, Y.A.M.; Abdullah, R.; Yah, Y.; Atan, R.; Al-Sharafi, M.A.; Al-Emran, M. Towards the Development of a
Comprehensive Theoretical Model for Examining the Cloud Computing Adoption at the Organizational Level.
In Recent Advances in Intelligent Systems and Smart Applications; Al-Emran, M., Shaalan, K., Hassanien, A.E.,
Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 63–74.

66. Jia, Q.; Guo, Y.; Barnes, S.J. Enterprise 2.0 post-adoption: Extending the information system continuance
model based on the technology-Organization-environment framework. Comput. Hum. Behav. 2017, 67,
95–105. [CrossRef]
67. Tripathi, S. Understanding the determinants affecting the continuance intention to use cloud computing.
J. Int. Technol. Inf. Manag. 2017, 26, 124–152.
68. Obal, M. What drives post-adoption usage? Investigating the negative and positive antecedents of disruptive
technology continuous adoption intentions. Ind. Mark. Manag. 2017, 63, 42–52. [CrossRef]
69. Ratten, V. Continuance use intention of cloud computing: Innovativeness and creativity perspectives.
J. Bus. Res. 2016, 69, 1737–1740. [CrossRef]
70. Flack, C.K. IS Success Model for Evaluating Cloud Computing for Small Business Benefit: A Quantitative
Study. Ph.D. Thesis, Kennesaw State University, Kennesaw, GA, USA, 2016.
71. Walther, S.; Sarker, S.; Urbach, N.; Sedera, D.; Eymann, T.; Otto, B. Exploring organizational level continuance
of cloud-based enterprise systems. In Proceedings of the ECIS 2015 Completed Research Papers, Münster,
Germany, 26–29 May 2015.
72. Ghobakhloo, M.; Tang, S.H. Information system success among manufacturing SMEs: Case of developing
countries. Inf. Technol. Dev. 2015, 21, 573–600. [CrossRef]
73. Schlagwein, D.; Thorogood, A. Married for life? A cloud computing client-provider relationship continuance
model. In Proceedings of the European Conference on Information Systems (ECIS) 2014, Tel Aviv, Israel,
9–11 June 2014.
74. Hadji, B.; Degoulet, P. Information system end-user satisfaction and continuance intention: A unified
modeling approach. J. Biomed. Inf. 2016, 61, 185–193. [CrossRef]
75. Esteves, J.; Bohórquez, V.W. An Updated ERP Systems Annotated Bibliography: 2001–2005; Instituto de Empresa
Business School Working Paper No. WP; Instituto de Empresa Business School: Madrid, Spain, 2007; pp. 4–7.
76. Gable, G.G.; Sedera, D.; Chan, T. Re-conceptualizing information system success: The IS-impact measurement
model. J. Assoc. Inf. Syst. 2008, 9, 18. [CrossRef]
77. Sedera, D.; Gable, G.G. Knowledge management competence for enterprise system success. J. Strateg.
Inf. Syst. 2010, 19, 296–306. [CrossRef]
78. Walther, S.; Plank, A.; Eymann, T.; Singh, N.; Phadke, G. Success factors and value propositions of software as
a service providers—A literature review and classification. In Proceedings of the 2012 AMCIS: 18th Americas
Conference on Information Systems, Seattle, WA, USA, 9–12 August 2012.
79. Ashtari, S.; Eydgahi, A. Student perceptions of cloud applications effectiveness in higher education.
J. Comput. Sci. 2017, 23, 173–180. [CrossRef]
80. Ding, Y. Looking forward: The role of hope in information system continuance. Comput. Hum. Behav. 2019,
91, 127–137. [CrossRef]
81. Rogers, E. Diffusion of Innovations; Macmillan Press Ltd.: London, UK, 1962.
82. Ettlie, J.E. Adequacy of stage models for decisions on adoption of innovation. Psychol. Rep. 1980, 46, 991–995. [CrossRef]
83. Fichman, R.G.; Kemerer, C.F. The assimilation of software process innovations: An organizational learning perspective. Manag. Sci. 1997, 43, 1345–1363. [CrossRef]
84. Salim, S.A.; Sedera, D.; Sawang, S.; Alarifi, A. Technology adoption as a multi-stage process. In Proceedings
of the 25th Australasian Conference on Information Systems (ACIS), Auckland, New Zealand,
8–10 December 2014.
85. Fichman, R.G.; Kemerer, C.F. Adoption of software engineering process innovations: The case of object
orientation. Sloan Manag. Rev. 1993, 34, 7–23.
86. Choudhury, V.; Karahanna, E. The relative advantage of electronic channels: A multidimensional view.
MIS Q. 2008, 32, 179. [CrossRef]
87. Karahanna, E.; Straub, D.W.; Chervany, N.L. Information technology adoption across time: A cross-sectional
comparison of pre-adoption and post-adoption beliefs. MIS Q. 1999, 23, 183. [CrossRef]
88. Pavlou, P.A.; Fygenson, M. Understanding and predicting electronic commerce adoption: An extension of
the theory of planned behavior. MIS Q. 2006, 30, 115. [CrossRef]
89. Shoham, A. Selecting and evaluating trade shows. Ind. Mark. Manag. 1992, 21, 335–341. [CrossRef]
90. Mintzberg, H.; Raisinghani, D.; Theoret, A. The Structure of “Unstructured” Decision Processes. Adm. Sci. Q.
1976, 21, 246–275. [CrossRef]
91. Pierce, J.L.; Delbecq, A.L. Organization structure, individual attitudes and innovation. Acad. Manag. Rev.
1977, 2, 27–37. [CrossRef]
92. Zmud, R.W. Diffusion of modern software practices: Influence of centralization and formalization. Manag. Sci.
1982, 28, 1421–1431. [CrossRef]
93. Aguirre-Urreta, M.I.; Marakas, G.M. Exploring choice as an antecedent to behavior: Incorporating alternatives
into the technology acceptance process. J. Organ. End User Comput. 2012, 24, 82–107. [CrossRef]
94. Schwarz, A.; Chin, W.W.; Hirschheim, R.; Schwarz, C. Toward a process-based view of information technology
acceptance. J. Inf. Technol. 2014, 29, 73–96. [CrossRef]
95. Maier, C.; Laumer, S.; Weinert, C.; Weitzel, T. The effects of technostress and switching stress on discontinued
use of social networking services: A study of Facebook use. Inf. Syst. J. 2015, 25, 275–308. [CrossRef]
96. Damanpour, F.; Schneider, M. Phases of the adoption of innovation in organizations: Effects of environment,
organization and top managers 1. Br. J. Manag. 2006, 17, 215–236. [CrossRef]
97. Jeyaraj, A.; Rottman, J.W.; Lacity, M.C. A review of the predictors, linkages, and biases in IT innovation
adoption research. J. Inf. Technol. 2006, 21, 1–23. [CrossRef]
98. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology.
MIS Q. 1989, 13, 319. [CrossRef]
99. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a
unified view. MIS Q. 2003, 27, 425–478. [CrossRef]
100. Oliver, R.L. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Mark. Res.
1980, 17, 460–469. [CrossRef]
101. Bhattacherjee, A. Understanding information systems continuance: An expectation-confirmation model.
MIS Q. 2001, 25, 351. [CrossRef]
102. Tornatzky, L.G.; Fleischer, M.; Chakrabarti, A.K. Processes of Technological Innovation; Lexington Books:
Lanham, MD, USA, 1990.
103. Rogers, E.M. Diffusion of Innovations; The Free Press: New York, NY, USA, 1995; p. 12.
104. Teo, H.-H.; Wei, K.K.; Benbasat, I. Predicting intention to adopt interorganizational linkages: An institutional
perspective. MIS Q. 2003, 27, 19. [CrossRef]
105. DeLone, W.H.; McLean, E.R. Information systems success: The quest for the dependent variable. Inf. Syst. Res.
1992, 3, 60–95. [CrossRef]
106. Eden, R.; Sedera, D.; Tan, F. Sustaining the Momentum: Archival Analysis of Enterprise Resource Planning
Systems (2006–2012). Commun. Assoc. Inf. Syst. 2014, 35, 3. [CrossRef]
107. Chou, S.-W.; Chen, P.-Y. The influence of individual differences on continuance intentions of enterprise
resource planning (ERP). Int. J. Hum. Comput. Stud. 2009, 67, 484–496. [CrossRef]
108. Lin, W.-S. Perceived fit and satisfaction on web learning performance: IS continuance intention and
task-technology fit perspectives. Int. J. Hum. Comput. Stud. 2012, 70, 498–507. [CrossRef]
109. Karahanna, E.; Straub, D. The psychological origins of perceived usefulness and ease-of-use. Inf. Manag.
1999, 35, 237–250. [CrossRef]
110. Roca, J.C.; Chiu, C.-M.; Martínez, F.J. Understanding e-learning continuance intention: An extension of the
Technology Acceptance Model. Int. J. Hum. Comput. Stud. 2006, 64, 683–696. [CrossRef]
111. Dai, H.M.; Teo, T.; Rappa, N.A.; Huang, F. Explaining Chinese university students’ continuance learning
intention in the MOOC setting: A modified expectation confirmation model perspective. Comput. Educ.
2020, 150, 103850. [CrossRef]
112. Ouyang, Y.; Tang, C.; Rong, W.; Zhang, L.; Yin, C.; Xiong, Z. Task-technology fit aware expectation-
confirmation model towards understanding of MOOCs continued usage intention. In Proceedings of the 50th
Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 4–7 January 2017.
113. Joo, Y.J.; So, H.-J.; Kim, N.H. Examination of relationships among students’ self-determination, technology
acceptance, satisfaction, and continuance intention to use K-MOOCs. Comput. Educ. 2018, 122, 260–272.
[CrossRef]
114. Thong, J.Y.; Hong, S.-J.; Tam, K.Y. The effects of post-adoption beliefs on the expectation-confirmation model
for information technology continuance. Int. J. Hum. Comput. Stud. 2006, 64, 799–810. [CrossRef]
115. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [CrossRef]
116. Wixom, B.H.; Todd, P.A. A theoretical integration of user satisfaction and technology acceptance. Inf. Syst. Res.
2005, 16, 85–102. [CrossRef]
117. Lokuge, S.; Sedera, D. Deriving information systems innovation execution mechanisms. In Proceedings of the
25th Australasian Conference on Information Systems (ACIS), Auckland, New Zealand, 8–10 December 2014.
118. Lokuge, S.; Sedera, D. Enterprise systems lifecycle-wide innovation readiness. In Proceedings of the PACIS
2014 Proceedings, Chengdu, China, 24–28 June 2014; pp. 1–14.
119. Melville, N.; Kraemer, K.; Gurbaxani, V. Information technology and organizational performance:
An integrative model of IT business value. MIS Q. 2004, 28, 283–322. [CrossRef]
120. DeLone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year
update. J. Manag. Inf. Syst. 2003, 19, 9–30.
121. Urbach, N.; Smolnik, S.; Riempp, G. The state of research on information systems success. Bus. Inf. Syst. Eng.
2009, 1, 315–325. [CrossRef]
122. Wang, Y.S. Assessing e-commerce systems success: A respecification and validation of the DeLone and
McLean model of IS success. Inf. Syst. J. 2008, 18, 529–557. [CrossRef]
123. Urbach, N.; Smolnik, S.; Riempp, G. An empirical investigation of employee portal success. J. Strateg.
Inf. Syst. 2010, 19, 184–206. [CrossRef]
124. Barki, H.; Huff, S.L. Change, attitude to change, and decision support system success. Inf. Manag. 1985, 9,
261–268. [CrossRef]
125. Gelderman, M. The relation between user satisfaction, usage of information systems and performance.
Inf. Manag. 1998, 34, 11–18. [CrossRef]
126. Seddon, P.B. A respecification and extension of the DeLone and McLean model of IS success. Inf. Syst. Res.
1997, 8, 240–253. [CrossRef]
127. Yuthas, K.; Young, S.T. Material matters: Assessing the effectiveness of materials management IS. Inf. Manag.
1998, 33, 115–124. [CrossRef]
128. Burton-Jones, A.; Gallivan, M.J. Toward a deeper understanding of system usage in organizations: A multilevel
perspective. MIS Q. 2007, 31, 657. [CrossRef]
129. Sedera, D.; Gable, G.; Chan, T. ERP success: Does organisation Size Matter? In Proceedings of the PACIS
2003 Proceedings, Adelaide, Australia, 10–13 July 2003; p. 74.
130. Sedera, D.; Gable, G.; Chan, T. Knowledge management for ERP success. In Proceedings of the PACIS 2003
Proceedings, Adelaide, Australia, 10–13 July 2003; p. 97.
131. Abolfazli, S.; Sanaei, Z.; Tabassi, A.; Rosen, S.; Gani, A.; Khan, S.U. Cloud Adoption in Malaysia: Trends,
Opportunities, and Challenges. IEEE Cloud Comput. 2015, 2, 60–68. [CrossRef]
132. Arkes, H.R.; Blumer, C. The psychology of sunk cost. Organ. Behav. Hum. Decis. Process. 1985, 35, 124–140.
[CrossRef]
133. Ahtiala, P. The optimal pricing of computer software and other products with high switching costs.
Int. Rev. Econ. Financ. 2006, 15, 202–211. [CrossRef]
134. Benlian, A.; Vetter, J.; Hess, T. The role of sunk cost in consecutive IT outsourcing decisions. Z. Betriebswirtsch. 2012, 82, 181.
135. Armbrust, M.; Fox, A.; Griffith, R.; Joseph, A.D.; Katz, R.; Konwinski, A.; Lee, G.; Patterson, D.; Rabkin, A.;
Stoica, I. A view of cloud computing. Commun. ACM 2010, 53, 50–58. [CrossRef]
136. Wei, Y.; Blake, M.B. Service-oriented computing and cloud computing: Challenges and opportunities.
IEEE Internet Comput. 2010, 14, 72–75. [CrossRef]
137. Bughin, J.; Chui, M.; Manyika, J. Clouds, big data, and smart assets: Ten tech-enabled business trends to
watch. McKinsey Q. 2010, 56, 75–86.
138. Lin, H.-F. Understanding the determinants of electronic supply chain management system adoption:
Using the Technology–Organization–Environment framework. Technol. Forecast. Soc. Chang. 2014, 86, 80–92.
[CrossRef]
139. Oliveira, T.; Martins, M.F. Literature review of information technology adoption models at firm level. Electron.
J. Inf. Syst. Eval. 2011, 14, 110–121.
140. Chau, P.Y.; Tam, K.Y. Factors affecting the adoption of open systems: An exploratory study. MIS Q. 1997, 21,
1–24. [CrossRef]
141. Galliers, R.D. Organizational Dynamics of Technology-Based Innovation; Springer: Boston, MA, USA, 2007;
pp. 15–18.
142. Yadegaridehkordi, E.; Iahad, N.A.; Ahmad, N. Task-Technology Fit Assessment of Cloud-Based Collaborative Learning Technologies. In Remote Work and Collaboration: Breakthroughs in Research and Practice; IGI Global: Hershey, PA, USA, 2017; pp. 371–388. [CrossRef]
143. Gupta, P.; Seetharaman, A.; Raj, J.R. The usage and adoption of cloud computing by small and medium businesses. Int. J. Inf. Manag. 2013, 33, 861–874. [CrossRef]
144. Chong, A.Y.-L.; Lin, B.; Ooi, K.-B.; Raman, M. Factors affecting the adoption level of c-commerce: An empirical
study. J. Comput. Inf. Syst. 2009, 50, 13–22.
145. Oliveira, T.; Thomas, M.; Espadanal, M. Assessing the determinants of cloud computing adoption: An analysis
of the manufacturing and services sectors. Inf. Manag. 2014, 51, 497–510. [CrossRef]
146. Senyo, P.K.; Effah, J.; Addae, E. Preliminary insight into cloud computing adoption in a developing country.
J. Enterp. Inf. Manag. 2016, 29, 505–524. [CrossRef]
147. Klug, W.; Bai, X. The determinants of cloud computing adoption by colleges and universities.
Int. J. Bus. Res. Inf. Technol. 2015, 2, 14–30.
148. Cornu, B. Digital Natives: How Do They Learn? How to Teach Them; UNESCO Institute for Information
Technology in Education: Moscow, Russia, 2011; Volume 52, pp. 2–11.
149. Oblinger, D.; Oblinger, J.L.; Lippincott, J.K. Educating the Net Generation; Educause: Boulder, CO, USA, 2005.
150. Wymer, S.A.; Regan, E.A. Factors influencing e-commerce adoption and use by small and medium businesses.
Electron. Mark. 2005, 15, 438–453. [CrossRef]
151. Qasem, Y.A.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Mapping and Analyzing Process of Cloud-based Education
as a Service (CEaaS) Model for Cloud Computing Adoption in Higher Education Institutions. In Proceedings
of the 2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP),
Kota Kinabalu, Malaysia, 26–28 March 2018; pp. 1–8.
152. Walther, S.; Sedera, D.; Sarker, S.; Eymann, T. Evaluating Operational Cloud Enterprise System Success:
An Organizational Perspective. In Proceedings of the ECIS, Utrecht, The Netherlands, 6–8 June 2013; p. 16.
153. Wang, M.W.; Lee, O.-K.; Lim, K.H. Knowledge management systems diffusion in Chinese enterprises:
A multi-stage approach with the technology-organization-environment framework. In Proceedings of the
PACIS 2007 Proceedings, Auckland, New Zealand, 4–6 July 2007; p. 70.
154. Liao, C.; Palvia, P.; Chen, J.-L. Information technology adoption behavior life cycle: Toward a Technology
Continuance Theory (TCT). Int. J. Inf. Manag. 2009, 29, 309–320. [CrossRef]
155. Li, Y.; Crossler, R.E.; Compeau, D. Regulatory Focus in the Context of Wearable Continuance. In Proceedings of the AMCIS 2019, Cancún, Mexico, 15–17 August 2019.
156. Rousseau, D.M. Issues of level in organizational research: Multi-level and cross-level perspectives. Res. Organ. Behav. 1985, 7, 1–37.
157. Walther, S. An Investigation of Organizational Level Continuance of Cloud-Based Enterprise Systems.
Ph.D. Thesis, University of Bayreuth, Bayreuth, Germany, 2014.
158. Petter, S.; DeLone, W.; McLean, E. Measuring information systems success: Models, dimensions, measures,
and interrelationships. Eur. J. Inf. Syst. 2008, 17, 236–263. [CrossRef]
159. Robey, D.; Zeller, R.L. Factors affecting the success and failure of an information system for product quality. Interfaces 1978, 8, 70–75. [CrossRef]
160. Aldholay, A.; Isaac, O.; Abdullah, Z.; Abdulsalam, R.; Al-Shibami, A.H. An extension of Delone and McLean
IS success model with self-efficacy: Online learning usage in Yemen. Int. J. Inf. Learn. Technol. 2018, 35,
285–304. [CrossRef]
161. Xu, J.D.; Benbasat, I.; Cenfetelli, R.T. Integrating service quality with system and information quality: An empirical test in the e-service context. MIS Q. 2013, 37, 777–794. [CrossRef]
162. Spears, J.L.; Barki, H. User participation in information systems security risk management. MIS Q. 2010, 34, 503. [CrossRef]
163. Lee, S.; Shin, B.; Lee, H.G. Understanding post-adoption usage of mobile data services: The role of
supplier-side variables. J. Assoc. Inf. Syst. 2009, 10, 860–888. [CrossRef]
164. Alshare, K.A.; Freeze, R.D.; Lane, P.L.; Wen, H.J. The impacts of system and human factors on online learning
systems use and learner satisfaction. Decis. Sci. J. Innov. Educ. 2011, 9, 437–461. [CrossRef]
165. Benlian, A.; Koufaris, M.; Hess, T. Service quality in software-as-a-service: Developing the SaaS-Qual
measure and examining its role in usage continuance. J. Manag. Inf. Syst. 2011, 28, 85–126. [CrossRef]
166. Oblinger, D. Boomers, Gen-Xers, Millennials: Understanding the new students. EDUCAUSE Rev. 2003, 38, 37–47.
167. Monaco, M.; Martin, M. The millennial student: A new generation of learners. Athl. Train. Educ. J. 2007, 2,
42–46. [CrossRef]
168. White, B.J.; Brown, J.A.E.; Deale, C.S.; Hardin, A.T. Collaboration using cloud computing and traditional
systems. Issues Inf. Syst. 2009, 10, 27–32.
169. Nkhoma, M.Z.; Dang, D.P.; De Souza-Daw, A. Contributing factors of cloud computing adoption:
A technology-organisation-environment framework approach. In Proceedings of the European Conference
on Information Management & Evaluation, Melbourne, Australia, 2–4 December 2013.
170. Zhu, K.; Dong, S.; Xu, S.X.; Kraemer, K.L. Innovation diffusion in global contexts: Determinants of
post-adoption digital transformation of European companies. Eur. J. Inf. Syst. 2006, 15, 601–616. [CrossRef]
171. Zhu, K.; Kraemer, K.L.; Xu, S. The process of innovation assimilation by firms in different countries:
A technology diffusion perspective on e-business. Manag. Sci. 2006, 52, 1557–1576. [CrossRef]
172. Shah Alam, S.; Ali, M.Y.; Jaini, M.M.F. An empirical study of factors affecting electronic commerce adoption
among SMEs in Malaysia. J. Bus. Econ. Manag. 2011, 12, 375–399. [CrossRef]
173. Ifinedo, P. Internet/e-business technologies acceptance in Canada’s SMEs: An exploratory investigation.
Internet Res. 2011, 21, 255–281. [CrossRef]
174. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches;
Sage Publications: Los Angeles, CA, USA, 2017.
175. Moore, G.C.; Benbasat, I. Development of an instrument to measure the perceptions of adopting an
information technology innovation. Inf. Syst. Res. 1991, 2, 192–222. [CrossRef]
176. Wu, K.; Vassileva, J.; Zhao, Y. Understanding users’ intention to switch personal cloud storage services:
Evidence from the Chinese market. Comput. Hum. Behav. 2017, 68, 300–314. [CrossRef]
177. Kelley, D.L. Measurement Made Accessible: A Research Approach Using Qualitative, Quantitative and Quality
Improvement Methods; Sage Publications: Los Angeles, CA, USA, 1999.
178. McKenzie, J.F.; Wood, M.L.; Kotecki, J.E.; Clark, J.K.; Brey, R.A. Establishing content validity: Using qualitative
and quantitative steps. Am. J. Health Behav. 1999, 23, 311–318. [CrossRef]
179. Zikmund, W.G.; Babin, B.J.; Carr, J.C.; Griffin, M. Business Research Methods, 9th ed.; South-Western Cengage Learning: Mason, OH, USA, 2013.
180. Sekaran, U.; Bougie, R. Research Methods for Business: A Skill Building Approach; John Wiley & Sons: New York,
NY, USA, 2016.
181. Mathieson, K.; Peacock, E.; Chin, W.W. Extending the technology acceptance model. ACM SIGMIS Database
Database Adv. Inf. Syst. 2001, 32, 86. [CrossRef]
182. Diamantopoulos, A.; Winklhofer, H.M. Index construction with formative indicators: An alternative to scale
development. J. Mark. Res. 2001, 38, 269–277. [CrossRef]
183. MacKenzie, S.B.; Podsakoff, P.M.; Podsakoff, N.P. Construct measurement and validation procedures in MIS
and behavioral research: Integrating new and existing techniques. MIS Q. 2011, 35, 293–334. [CrossRef]
184. Petter, S.; Straub, D.; Rai, A. Specifying formative constructs in information systems research. MIS Q. 2007,
31, 623. [CrossRef]
185. Haladyna, T.M. Developing and Validating Multiple-Choice Test Items; Routledge: London, UK, 2004.
186. DeVon, H.A.; Block, M.E.; Moyle-Wright, P.; Ernst, D.M.; Hayden, S.J.; Lazzara, D.J.; Savoy, S.M.;
Kostas-Polston, E. A psychometric toolbox for testing validity and reliability. J. Nurs. Sch. 2007, 39,
155–164. [CrossRef] [PubMed]
187. Webster, J.; Watson, R.T. Analyzing the past to prepare for the future: Writing a literature review. MIS Q.
2002, 26, 13–23.
188. MacKenzie, S.B.; Podsakoff, P.M.; Jarvis, C.B. The Problem of Measurement Model Misspecification in Behavioral and Organizational Research and Some Recommended Solutions. J. Appl. Psychol. 2005, 90, 710–730. [CrossRef] [PubMed]
189. Briggs, R.O.; Reinig, B.A.; Vreede, G.-J. The Yield Shift Theory of Satisfaction and Its Application to the IS/IT
Domain. J. Assoc. Inf. Syst. 2008, 9, 267–293. [CrossRef]
190. Rushinek, A.; Rushinek, S.F. What makes users happy? Commun. ACM 1986, 29, 594–598. [CrossRef]
191. Oliver, R.L. Measurement and evaluation of satisfaction processes in retail settings. J. Retail. 1981, 57, 24–48.
192. Swanson, E.B.; Dans, E. System life expectancy and the maintenance effort: Exploring their equilibration.
MIS Q. 2000, 24, 277. [CrossRef]
193. Gill, T.G. Early expert systems: Where are they now? MIS Q. 1995, 19, 51. [CrossRef]
194. Keil, M.; Mann, J.; Rai, A. Why software projects escalate: An empirical analysis and test of four theoretical
models. MIS Q. 2000, 24, 631. [CrossRef]
195. Campion, M.A.; Medsker, G.J.; Higgs, A.C. Relations between work group characteristics and effectiveness: Implications for designing effective work groups. Pers. Psychol. 1993, 46, 823–847. [CrossRef]
196. Baas, P. Task-Technology Fit in the Workplace: Affecting Employee Satisfaction and Productivity; Erasmus
Universiteit: Rotterdam, The Netherlands, 2010.
197. Doolin, B.; Troshani, I. Organizational Adoption of XBRL. Electron. Mark. 2007, 17, 199–209. [CrossRef]
198. Segars, A.H.; Grover, V. Strategic Information Systems Planning Success: An Investigation of the Construct
and Its Measurement. MIS Q. 1998, 22, 139. [CrossRef]
199. Sharma, R.; Yetton, P.; Crawford, J. Estimating the effect of common method variance: The method—Method
pair technique with an illustration from TAM Research. MIS Q. 2009, 33, 473–490. [CrossRef]
200. Gorla, N.; Somers, T.M.; Wong, B. Organizational impact of system quality, information quality, and service
quality. J. Strateg. Inf. Syst. 2010, 19, 207–228. [CrossRef]
201. Malhotra, N.K. Questionnaire design and scale development. In The Handbook of Marketing Research: Uses, Misuses, and Future Advances; Sage: Thousand Oaks, CA, USA, 2006; pp. 83–94.
202. Hertzog, M.A. Considerations in determining sample size for pilot studies. Res. Nurs. Health 2008, 31,
180–191. [CrossRef]
203. Saunders, M.N. Research Methods for Business Students, 5th ed.; Pearson Education India: Bengaluru, India, 2011.
204. Sekaran, U.; Bougie, R. Research Methods for Business, A Skill Building Approach; John Willey & Sons Inc.:
New York, NY, USA, 2003.
205. Tellis, W. Introduction to case study. Qual. Rep. 1997, 3, 2.
206. Whitehead, A.L.; Julious, S.A.; Cooper, C.L.; Campbell, M.J. Estimating the sample size for a pilot randomised
trial to minimise the overall trial sample size for the external pilot and main trial for a continuous outcome
variable. Stat. Methods Med. Res. 2016, 25, 1057–1073. [CrossRef]
207. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling
(PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2016.
208. Chin, W.W.; Marcolin, B.L.; Newsted, P.R. A partial least squares latent variable modeling approach
for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail
emotion/adoption study. Inf. Syst. Res. 2003, 14, 189–217. [CrossRef]
209. Hulland, J. Use of partial least squares (PLS) in strategic management research: A review of four recent
studies. Strateg. Manag. J. 1999, 20, 195–204. [CrossRef]
210. Gefen, D.; Rigdon, E.E.; Straub, D. Editor’s comments: An update and extension to SEM guidelines for
administrative and social science research. MIS Q. 2011, 35, 3–14. [CrossRef]
211. Chin, W.W. How to write up and report PLS analyses. In Handbook of Partial Least Squares; Springer:
Berlin/Heidelberg, Germany, 2010; pp. 655–690.
212. Ainuddin, R.A.; Beamish, P.W.; Hulland, J.S.; Rouse, M.J. Resource attributes and firm performance in
international joint ventures. J. World Bus. 2007, 42, 47–60. [CrossRef]
213. Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The use of partial least squares path modeling in international
marketing. In New Challenges to International Marketing; Emerald Group Publishing Limited: Bingley, UK,
2009; pp. 277–319.
214. Urbach, N.; Ahlemann, F. Structural equation modeling in information systems research using partial least
squares. J. Inf. Technol. Theory Appl. 2010, 11, 5–40.
215. Sommerville, I. Software Engineering, 9th ed.; Pearson Education Limited: Harlow, UK, 2011; p. 18.
ISBN 0137035152.
216. Venkatesh, V.; Bala, H. Technology acceptance model 3 and a research agenda on interventions. Decis. Sci.
2008, 39, 273–315. [CrossRef]
217. Venkatesh, V.; Davis, F.D. A theoretical extension of the technology acceptance model: Four longitudinal
field studies. Manag. Sci. 2000, 46, 186–204. [CrossRef]
218. Taylor, S.; Todd, P.A. Understanding information technology usage: A test of competing models. Inf. Syst. Res.
1995, 6, 144–176. [CrossRef]
219. Faulkner, L. Beyond the five-user assumption: Benefits of increased sample sizes in usability testing.
Behav. Res. Methods Instrum. Comput. 2003, 35, 379–383. [CrossRef]
220. Turner, C.W.; Lewis, J.R.; Nielsen, J. Determining usability test sample size. Int. Encycl. Ergon. Hum. Factors
2006, 3, 3084–3088.
221. Zaman, H.; Robinson, P.; Petrou, M.; Olivier, P.; Shih, T.; Velastin, S.; Nystrom, I. (Eds.) Visual Informatics: Sustaining Research and Innovations; LNCS; Springer: Berlin/Heidelberg, Germany, 2011.
222. Hadi, A.; Daud, W.M.F.W.; Ibrahim, N.H. The development of history educational game as a revision tool for
Malaysia school education. In Proceedings of the International Visual Informatics Conference, Selangor, Malaysia, 9–11 November 2011; Springer: Berlin/Heidelberg, Germany, 2011.
223. Marian, A.M.; Haziemeh, F.A. On-Line Mobile Staff Directory Service: Implementation for the Irbid University College (IUC). Ubiquitous Comput. Commun. J. 2011, 6, 25–33.
224. Brinkman, W.-P.; Haakma, R.; Bouwhuis, D. The theoretical foundation and validity of a component-based
usability questionnaire. Behav. Inf. Technol. 2009, 28, 121–137. [CrossRef]
225. Mikroyannidis, A.; Connolly, T. Case Study 3: Exploring open educational resources for informal learning.
In Responsive Open Learning Environments; Springer: Cham, Switzerland, 2015; pp. 135–158.
226. Shanmugam, M.; Yah Jusoh, Y.; Jabar, M.A. Measuring Continuance Participation in Online Communities.
J. Theor. Appl. Inf. Technol. 2017, 95, 3513–3522.
227. Straub, D.; Boudreau, M.C.; Gefen, D. Validation guidelines for IS positivist research. Commun. Assoc. Inf. Syst.
2004, 13, 24. [CrossRef]
228. Lynn, M.R. Determination and quantification of content validity. Nurs. Res. 1986, 35, 382–385. [CrossRef]
229. Dobratz, M.C. The life closure scale: Additional psychometric testing of a tool to measure psychological
adaptation in death and dying. Res. Nurs. Health 2004, 27, 52–62. [CrossRef]
230. Davis, L.L. Instrument review: Getting the most from a panel of experts. Appl. Nurs. Res. 1992, 5, 194–197.
[CrossRef]
231. Polit, D.F.; Beck, C.T. The content validity index: Are you sure you know what’s being reported? Critique
and recommendations. Res. Nurs. Health 2006, 29, 489–497. [CrossRef] [PubMed]
232. Qasem, Y.A.M.; Asadi, S.; Abdullah, R.; Yah, Y.; Atan, R.; Al-Sharafi, M.A.; Yassin, A.A. A Multi-Analytical
Approach to Predict the Determinants of Cloud Computing Adoption in Higher Education Institutions.
Appl. Sci. 2020, 10. [CrossRef]
233. Coolican, H. Research Methods and Statistics in Psychology; Psychology Press: New York, NY, USA, 2017.
234. Briggs, S.R.; Cheek, J.M. The role of factor analysis in the development and evaluation of personality scales.
J. Personal. 1986, 54, 106–148. [CrossRef]
235. Tabachnick, B.G.; Fidell, L.S. Principal components and factor analysis. In Using Multivariate Statistics, 4th ed.; Allyn & Bacon: Boston, MA, USA, 2001; pp. 582–633.
236. Gliem, J.A.; Gliem, R.R. Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. In Proceedings of the 2003 Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, Columbus, OH, USA, 8–10 October 2003.
237. Mallery, P.; George, D. SPSS for Windows Step by Step: A Simple Guide and Reference; Allyn & Bacon: Boston,
MA, USA, 2003.
238. Nunnally, J.C. Psychometric Theory, 3rd ed.; Tata McGraw-Hill Education: New York, NY, USA, 1994.
239. Bagozzi, R.P.; Yi, Y. On the evaluation of structural equation models. J. Acad. Mark. Sci. 1988, 16, 74–94.
[CrossRef]
240. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement
error. J. Mark. Res. 1981, 18, 39–50. [CrossRef]
241. Gefen, D.; Straub, D.; Boudreau, M.-C. Structural Equation Modeling and Regression: Guidelines for Research
Practice. Commun. Assoc. Inf. Syst. 2000, 4, 7. [CrossRef]
242. Chin, W.W.; Marcoulides, G. The partial least squares approach to structural equation modeling. Mod. Methods
Bus. Res. 1998, 295, 295–336.
243. Diamantopoulos, A.; Siguaw, J.A. Formative versus reflective indicators in organizational measure
development: A comparison and empirical illustration. Br. J. Manag. 2006, 17, 263–282. [CrossRef]
244. Hair, J.F.; Ringle, C.M.; Sarstedt, M. Partial least squares structural equation modeling: Rigorous applications,
better results and higher acceptance. Long Range Plan. 2013, 46, 1–12. [CrossRef]
245. Cenfetelli, R.T.; Bassellier, G. Interpretation of formative measurement in information systems research.
MIS Q. 2009, 33, 689–707. [CrossRef]
246. Yoo, Y.; Henfridsson, O.; Lyytinen, K. Research commentary—The new organizing logic of digital innovation:
An agenda for information systems research. Inf. Syst. Res. 2010, 21, 724–735. [CrossRef]
247. Nylén, D.; Holmström, J. Digital innovation strategy: A framework for diagnosing and improving digital
product and service innovation. Bus. Horiz. 2015, 58, 57–67. [CrossRef]
248. Maksimovic, M. Green Internet of Things (G-IoT) at engineering education institution: The classroom of
tomorrow. Green Internet Things 2017, 16, 270–273.
249. Fortino, G.; Rovella, A.; Russo, W.; Savaglio, C. Towards cyberphysical digital libraries: Integrating IoT
smart objects into digital libraries. In Management of Cyber Physical Objects in the Future Internet of Things;
Springer: Berlin/Heidelberg, Germany, 2016; pp. 135–156.
250. Picciano, A.G. The evolution of big data and learning analytics in American higher education. J. Asynchronous
Learn. Netw. 2012, 16, 9–20. [CrossRef]
251. Ifinedo, P. An empirical analysis of factors influencing Internet/e-business technologies adoption by SMEs in
Canada. Int. J. Inf. Technol. Decis. Mak. 2011, 10, 731–766. [CrossRef]
252. Talib, A.M.; Atan, R.; Abdullah, R.; Murad, M.A.A. Security framework of cloud data storage based on
multi agent system architecture—A pilot study. In Proceedings of the 2012 International Conference on
Information Retrieval and Knowledge Management, CAMP’12, Kuala Lumpur, Malaysia, 13–15 March 2012.
253. Adrian, C.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Factors influencing to the implementation success of big data
analytics: A systematic literature review. In Proceedings of the International Conference on Research and
Innovation in Information Systems, ICRIIS, Langkawi, Malaysia, 16–17 July 2017.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).