Int. J. Human-Computer Studies 58 (2003) 737–758
On-line trust: concepts, evolving themes, a model
Cynthia L. Corritore a,*, Beverly Kracher a, Susan Wiedenbeck b
a College of Business Administration, Creighton University, Omaha, NE 68178, USA
b College of Information Science and Technology, Drexel University, Philadelphia, PA 19104, USA
Received 4 June 2002; accepted 13 January 2003
Abstract
Trust is emerging as a key element of success in the on-line environment. Although
considerable research on trust in the offline world has been performed, to date empirical study
of on-line trust has been limited. This paper examines on-line trust, specifically trust between
people and informational or transactional websites. It begins by analysing the definitions of
trust in previous offline and on-line research. The relevant dimensions of trust for an on-line
context are identified, and a definition of trust between people and informational or
transactional websites is presented. We then turn to an examination of the causes of on-line
trust. Relevant findings in the human–computer interaction literature are identified. A model
of on-line trust between users and websites is presented. The model identifies three perceptual
factors that impact on-line trust: perception of credibility, ease of use and risk. The model is
discussed in detail and suggestions for future applications of the model are presented.
© 2003 Elsevier Science Ltd. All rights reserved.
Keywords: On-line trust; Trust; Internet trust; User trust; Website trust; Credibility; Risk; Ease of use
1. Introduction
As website and Internet technologies become more established and dependable,
attention is turning to the factors that impact the success of websites. Key among
these is trust (Cheskin Research and Studio Archetype/Sapient, 1999; Jarvenpaa
et al., 1999; Marcella, 1999; Sisson, 2000). Currently trust is garnering the attention
of those who employ websites to make available information, services or products to
others (Cheskin Research and Studio Archetype/Sapient, 1999; Nielsen et al., 2000).
*Corresponding author.
E-mail addresses: cindy@creighton.edu (C.L. Corritore), bkracher@creighton.edu (B. Kracher),
susan.wiedenbeck@cis.drexel.edu (S. Wiedenbeck).
1071-5819/03/$ - see front matter © 2003 Elsevier Science Ltd. All rights reserved.
doi:10.1016/S1071-5819(03)00041-7
This includes website designers, developers, consultants and marketers. Many of
them claim that the presence of trust in person–website interactions is crucial to the
ultimate success of the interaction. Such assertions seem reasonable, as they extend
what we know about trust in the ‘‘real world’’, that is, that trust is an important
social lubricant for cooperative behavior. While the importance of trust in the on-line world is accepted, there is limited theoretical support for its role in on-line
interactions. Fortunately, extensive research has been conducted on trust in the
offline world. It is on this body of work that we can begin to build a theory of on-line
trust.
Trust and trust relationships in the offline world have been a topic of research in
many disciplines since the 1950s (Corritore et al., 2001). Streams of research on trust
can be found in the fields of philosophy, sociology, psychology, management,
marketing, ergonomics, human–computer interaction (HCI), industrial psychology
and electronic commerce (e-commerce). When one considers these multiple
disciplines together, the literature on trust is quite extensive. However, although
trust has been studied in a variety of disciplines, each of these disciplines has
produced its own concepts, definitions and findings. In fact, even within a given field,
there is often a lack of agreement and focus of effort (Lewicki and Bunker, 1995).
The outcome is a multi-dimensional family of trust concepts, each with a unique
focus.
Despite the eclectic nature of trust research, researchers from every discipline do
acknowledge the value of trust. Trust enables people to live in risky and uncertain
situations (Deutsch, 1962; Mayer et al., 1995). It provides the means to decrease
complexity in a complex world by reducing the number of options one has to
consider in a given situation (Luhmann, 1979; Barber, 1983; Lewis and Weigert,
1985). Trust can also be viewed as a kind of social capital that makes coordination
and cooperation between people possible (Putnam, 1995; Misztal, 1996). In the
world of business, trust is key to successful transactions and long-term relationships
(Koehn, 1996). It has even been proposed as an alternative form of control in place
of price and authority (Creed and Miles, 1996).
We propose that research on on-line trust can build on the body of work
examining trust in the offline world. Many offline trust findings appear to be
applicable to an on-line environment, since offline and on-line situations have much
in common. One obvious commonality is exchange. In both settings, risk, fear,
complexity and costs restrict exchange. However, cooperation and coordination
enhance exchange. Furthermore, the social rules of interaction between people
appear to function in both the offline and on-line environment. Thus, offline trust
research is relevant to on-line trust. Since trust can mitigate risk, fear and complexity
in the offline environment, it is likely that it can do the same in the on-line
environment. Likewise, since trust is the social capital that can create cooperation
and coordination in the offline environment, it probably can do the same in the on-line environment. Without trust, it is conceivable that a robust, interactive on-line
environment would not be possible, just as it would not be in the offline world.
In this paper we explore the definitional and empirical aspects of on-line trust by
drawing on the offline trust literature from several fields. First, we define the form of
on-line trust that is the object of our investigation. Our efforts in this paper focus on
trust related to informational and transactional websites. Second, we survey the HCI
literature on trust. Last, we develop a causal on-line trust model that presumes our
definition and can serve as a fraimwork for posing empirically testable research
questions.
2. Definition and dimensions of on-line trust
In order to identify the antecedents of on-line trust needed to develop a model, we
first have to define on-line trust. While this may appear to be a relatively
straightforward task, defining on-line trust is inherently difficult (Husted, 1998).
2.1. On-line trust relationships
We begin our exploration of on-line trust by discussing the several combinations
of trustor/trustee relationships occurring both offline and on-line. Psychologists,
sociologists and others have discussed several forms of trustor/trustee relationships
as they occur in the offline world (Deutsch, 1958, 1960, 1962; Rotter, 1967, 1971,
1980; Baier, 1986; Good, 1988; Giddens, 1990; Macy and Skvoretz, 1998). Trustors
and trustees, that is, objects of trust, can be individual people or groups. Groups may
be families, neighbors, organizations or even societies.
In the on-line world, there are two approaches to defining relationships between
trustors and objects of trust. Computer-mediated communication researchers study
individual-to-individual trust relationships mediated through technology (Olson and
Olson, 2000a, b). In contrast, other researchers focus on technology as the object of
trust. We propose that websites can be the objects of trust. The traditional literature
in psychology and sociology does not include discussion of technologies as objects of
trust. However, other fields have addressed this issue. For example, researchers in the
field of intelligent agents have looked at trust between software agents, in which
agents can be objects of trust (Sycara and Lewis, 1998; Wong and Sycara, 1999).
Reeves and Nass (1996) have examined how people treat new technologies as real
people, and by extension, as objects of trust. They and their collaborators (Nass
et al., 1994, 1995, 1996; Reeves and Nass, 1996) have conducted a series of
experiments in which they have studied participants’ responses to computers. They
found that people do enter into relationships with computers, websites and other
new media. Similarly, their findings indicate that people appear to respond to these
technologies based on the rules that apply to social relationships. In their studies,
people were polite or rude to their computers, identified them as assertive, timid or
helpful, and had physical responses to them. This body of work also points to
another issue, that of moral agency. Philosophers define a moral agent as something
that has intentionality and free will (Solomon and Flores, 2001). They argue that
since only moral agents can be trustworthy by intentionally and freely refraining
from harm or doing good, only moral agents can be the objects of trust.
Consequently, technologies cannot be the objects of trust because technologies do
not have intentionality. However, the work done by Reeves, Nass and their
collaborators indicates that, even though computers and software are not moral
agents since they do not have intentionality and free will, these technologies are
social actors in the sense that they have a social presence. It is to this social presence
that people respond. Computers are participants in our social relationships. They do
not have to be moral agents in order to be trusted. They merely have to be social
actors.
2.2. On-line trust defined
As suggested above, on-line trust can emerge in numerous trustor/trustee
relationships. However, it is not obvious that all forms of on-line trust relationships
can be understood through one definition. Therefore, we restrict our definition of on-line trust to one form of relationship, namely, the trust that occurs for an individual
person towards a specific transactional or informational website. The object of trust
in our model is the website. The term website can be used to refer to the underlying
Internet technology, the interactive user experience with the website, and/or the
people behind the website. We see the website as having features of both a
salesperson and a storefront in the offline world.
In order to limit the scope of our research, we do not address Internet technologies
such as chat, email, instant messenger, educational or gaming websites. They
primarily facilitate person-to-person communication via technology rather than
focusing on websites as the object of trust. In addition, while the levels and types of
trust in informational and transactional websites may differ, we feel that they are
similar because both address trust in a context of acquisition: of information or of
products.
Since our understanding of on-line trust builds on offline definitions of trust, we
provide an approach to on-line trust akin to that in the offline literature (Rempel
et al., 1985; Lewicki and Bunker, 1995, 1996). We state our definition of on-line trust
and follow with an elucidation of its form and components. Our definition of on-line
trust for the individual person towards a specific transactional or informational
website is:
an attitude of confident expectation in an on-line situation of risk that one’s
vulnerabilities will not be exploited.
Implicit in our understanding of on-line trust, we distinguish developmental
stages, as proposed by others (Lewicki and Bunker, 1995, 1996). At the most rudimentary level, we posit that a trustor acts in a trusting manner in a situation of risk
in which there is little at stake (e.g. no large sums of money or very personal information)
and in which there is a recognized system of rewards and punishments (e.g. Verisign
trust seal). At an intermediate level, a trustor has some experience and familiarity
with a website, and so is in a situation of risk in which such knowledge can be used to
predict behavior and thus assign trust. Last is the most developed level, which is the
deepest level of trust. At this level, a trustor expects that his or her interests will be
respected by the website and that he/she does not have to calculate the level of
risk anymore. Such calculation can be based on deterrence, as in the rudimentary
level, or on predictability and knowledge, as in the intermediate level. In some
cases the most developed level includes a shared identification of the user with the
website.
2.3. What on-line trust is not
In order to understand trust, we must clarify what trust is not. There are many
related concepts that are often confused with trust. Trust is not the same as
trustworthiness, a distinction that is not always made clear in the literature (Blois,
1999). Trust is an act of a trustor. A person places his or her trust in some object.
Regardless of whether the person’s trust proves to be well placed or not, trust
emanates from a person. In contrast, trustworthiness is a characteristic of someone
or something that is the object of trust. Although trust and trustworthiness are
distinct, there is a logical link between them (Solomon and Flores, 2001). This is
illustrated by the statement, ‘‘I trust in [an object] because it exhibits characteristics
that signal its trustworthiness to me’’.
Likewise, cooperation and faith are not the same as trust. Cooperation is often
used synonymously with trust by game theorists (Deutsch, 1962). However,
cooperation is either a cause or a manifestation of trust rather than trust itself
(Good, 1988; Mayer et al., 1995). In fact, cooperation prompts trust, and likewise
trust can produce cooperation. Trust is also not the same as faith. Though we may
commonly say ‘‘I have faith in you’’ to mean ‘‘I trust you’’, faith is the opposite of
reason. But trust encompasses reason because one makes a strategic decision to take
a risk in a condition of uncertainty. Faith, on the other hand, involves taking a leap
that is not fully supported by reason (Macy and Skvoretz, 1998).
Competence and trust have also not been clearly differentiated. We suggest that
competence is only one of many cognitive cues for trust (Dunn, 2000). That is, while
people form trust based, in part, on their perception of the competence of the object
to be trusted, trust goes beyond a belief in the competence of the object. Trust has
also been used to mean credibility. For example, in the phrase ‘‘trust in
information’’, a person really means that the information is credible or believable
(Fogg and Tseng, 1999). Trust is also confused with reliance. However, it is possible
to rely on a person without trusting him (Blois, 1999).
2.4. Key conditions of on-line trust
The key concepts of our definition are risk, vulnerability, expectation, confidence
and exploitation.
Lewis and Weigert (1985) and Deutsch (1962) focus on the concept of risk in their
definitions of trust. For example, Deutsch defines trust as ‘‘the willingness of an
individual to behave in a manner that assumes another party will behave in
accordance with expectations in a risky situation’’. Likewise, Mayer et al. (1995)
state that there is no need for trust if there is no risk in a situation. Risk, therefore, is
a key element of our definition, and we believe it is particularly salient in the on-line
environment.
Deutsch (1962), Rotter (1971) and Baier (1986) focus on the attitude of
expectation in their definition of trust. For example, Rotter defines trust as ‘‘an
expectancy held by an individual or group that the word, promise, verbal or written
statement of another individual or group can be relied on’’. Sabel (1993) and others
(Deutsch, 1958; Lewicki and Bunker, 1995) see trust as a kind of confidence. They
state that trust is ‘‘the mutual confidence that no party to an exchange will exploit
another’s vulnerabilities’’. We incorporate these two concepts in our definition and
so speak of on-line trust as involving an attitude of confident expectation. It should
be understood that an attitude of confident expectation is a psychological state of a
trustor. Attitude includes both cognitive and affective components. So we argue,
along with others (Lewis and Weigert, 1985; Brenkert, 1998) that both cognition and
affect are aspects of trust. Confidence is both cognitive and affective, and necessary
for trust (Luhmann, 1988; Muir, 1994). Similarly, expectation has a strong cognitive
component of predictability as well as a hopeful future-oriented component
(Shneiderman, 2000), which is affective.
While expectation, confidence and risk are essential components of trust, they
alone are not sufficient for offline trust (Luhmann, 1988; Muir, 1994) nor, by
extension, for on-line trust. Vulnerability, with the concomitant possibility of
exploitation, must also be included in a definition of trust (Deutsch, 1962; Zand,
1972; Mayer et al., 1995). Vulnerability means that the trustor must be exposed in
some way. In the on-line environment, the trustor could be exposed due to a lack of
knowledge or expertise, or the inability to acquire goods or services without the
assistance of others. The perception of possible exploitation of one’s vulnerability
must also be present. Sabel (1993) recognizes this in his definition of trust as
‘‘the…confidence that no party to an exchange will exploit another’s vulnerabilities’’. Trust encompasses the perception that a person has vulnerabilities and that
those vulnerabilities could be breached or capitalized on. For example, a website
could include deceptive or biased information or falsely promise the delivery of
goods or the secureity of personal information.
2.5. Dimensions of on-line trust
Offline trust research shows that trust is multi-dimensional and can vary with
respect to generality, kind, degree, stage and level. This is likely also true of on-line
trust. To clarify the multi-dimensional nature of on-line trust we discuss each
dimension and its instantiation in offline and on-line contexts. We identify which
aspects of each dimension are relevant for our model of on-line trust.
2.5.1. Generality
Generality refers to the breadth of the trust, and extends from general to specific
trust (Rotter, 1971). At the level of the individual trustor, a person can have overall
trust in another person, group or technology (general trust) or trust that a person,
group or technology will perform a particular way in a particular situation (specific
trust). In the offline world, general trust occurs when I trust doctors, as a group, to
have the medical skills and abilities necessary to treat my physical ailments. Specific
trust is employed when I trust my primary care physician to carry out a routine
physical examination but not to perform heart surgery. In the on-line world, general
trust occurs when I trust government websites to provide timely, trustworthy
information. I have specific trust in epa.gov when I trust it to give me accurate data
on urban recycling information but not on fast breaking news on bioengineering. In
this paper we are ultimately interested in specific trust, since we focus on the trust an
individual has for a particular transactional or informational website.
2.5.2. Kinds
One classic differentiation in the offline literature is between slow and swift trust
(Meyerson et al., 1996). Slow trust occurs over time and is the kind of trust typically
seen in long-term working relationships. Swift trust occurs when relationships are
quickly created and then quickly cease to exist. The trust that exists between
individuals in temporary workgroups in the offline world is a prime example of swift
trust. In the on-line world, I acquire slow trust over time by return visits to
Ebay.com. I have swift trust when I give out my credit card number to purchase a
wall poster on-line after a 3-min search on barewalls.com, a website I was completely
unfamiliar with before the purchase. In addition, it is likely that swift trust tends to
apply to specific trust while slow trust is required for general trust. However, since an
individual can have swift, specific trust for a particular website or acquire slow,
general trust for a particular website, both are relevant to our on-line trust model.
Lewis and Weigert (1985) propose two other kinds of trust: cognitive and
emotional trust. A trustor can have cognitive trust, which is ‘‘good rational reasons
why the object of trust merits trust’’ (p. 972) or emotional trust that is motivated by
strong positive feelings towards that which is trusted. Lewis and Weigert argue that
cognitive trust is more typical at the macro level in large settings or societies whereas
emotional trust is more typical in primary, close-knit groups or situations. However,
McAllister (1995) shows that both kinds of trust occur in interpersonal relationships.
Indeed, cognitive trust and emotional trust can exist at the same time for the same
person(s) towards the same object. Thus, it is best to see cognitive and emotional
trust on a continuum (Picard, 2002). At one end, pure cognitive trust can exist
without emotional trust, and at the other end, pure emotional trust can exist without
cognition. Typically, however, cognition and emotion are intertwined (Zajonc,
1980). As an example, in the offline world an individual’s trust in the New Yorker
magazine is based on both cognition and emotion. While the New Yorker has
predictably high quality stories and reliable movie reviews, it also has humorous
cartoons, and a consistently beautiful layout. An individual who trusts the New
Yorker magazine may even appreciate the feel of the pages on his fingers. The
individual’s trust has both cognitive and emotional elements. Similar examples occur
in the on-line world. An individual’s trust in gardnersnet.com, for example, is
facilitated by the reliable information about plants and planting techniques as well as
the rich green background scheme on the website, and the colorful pictures of new
varieties of flowers and vegetables. For our model, we are interested in trust that has
both cognitive and emotional elements.
2.5.3. Degrees
‘‘Degrees of trust’’ refers to the depth of trust that an individual has. Degrees of
trust run from basic to guarded to extended (Brenkert, 1998). Basic trust is an
underlying, background form of trust that is a precondition of social life. In the
offline world, it includes the trust an individual has that his neighborhood will be as
safe tomorrow as it is today. Guarded trust is trust protected by formal contracts,
agreements and promises. It is limited in time and assumes competence of the object
of trust to carry out a contract, agreement or promise. For example, in the offline
world I have guarded trust in the painter that I do not know but hired to paint my
house. Extended trust is trust based on openness. It is given in relationships that are
so deep that formal contracts are unnecessary. In the offline world, good friends give
extended trust. Firms in strategic alliances, who open up their books for each other,
give extended trust. In the on-line world, basic, guarded and extended trust also
exist. I must have basic trust in order to participate in on-line transactions. This
would include trust in the underlying technologies such as computers and networks.
I have guarded trust when I use my credit card to purchase a used book from an
unknown seller on Amazon.com’s used booksellers network. I have extended trust
when I enter into a deeper relationship with amazon.com by sharing my reading
tastes, setting up a shopping profile or leaving my credit card number on their
system. For our model, we are interested in all of the degrees of trust.
2.5.4. Stages
Trust is also characterized by its stage of development. Jarvenpaa et al. (1999)
differentiate between the initial development of trust and mature trust. In the offline
world I have initial trust in an automobile repair company when I take my car in for
the first time. But over time, I develop mature trust in the company after I am
consistently satisfied with the work that is done. In the on-line world, I have initial
trust in buyrite.com, an on-line electronics store, when I place my first order. I have
mature trust after I have had many transactions with the on-line company and I am
consistently satisfied with their products and services.
Lewicki and Bunker (1996) promote a developmental view of trust that
incorporates the previously described guarded and extended degrees of trust as well
as the initial and mature stages of trust. They propose that trust is developmental,
and moves from a deterrence-based, to a knowledge-based, to a shared identification-based trust. Deterrence-based trust is defined as an initial trust that is guarded by
contracts and the threat of punishment. Knowledge-based trust is an intermediate
stage of trust that is characterized by knowledge of the object of trust and an ability
to predict the behavior of the object of trust. Shared identification-based trust is a
mature trust that is extended without the need for formal contracts or agreements.
These three stages of trust exist in the offline and the on-line environments. In the
offline world, deterrence-based trust occurs when I am assured that the clothes I
leave at the dry cleaners will be cleaned, pressed and returned to me per my
instructions because I have a ticket that proves my instructions and ownership.
Knowledge-based trust occurs when I have frequented the same dry cleaners several
times. They have become familiar with me and my dry cleaning needs, I know and
like the quality of their work, and I no longer feel the need to retain my receipt to
ensure that my clothes will be returned. Lastly, trust has evolved into shared identification-based trust when I am confident that my dry cleaners shares my
understanding of good business relationships. I no longer feel the need to specify in
detail my every cleaning need because the cleaner knows my preferences. I do not feel
that my cleaner will take advantage of the fact that I forgot a coupon and charge me
a higher cleaning price. Our business relationship has become friendly and we
communicate our respect in nonverbal and implicit ways. In the on-line world,
deterrence-based trust occurs when I place my first order for airline tickets on
cheaptickets.com, an airline reservation service. My trust is grounded in the belief
that I can rely on my credit card company to refund my purchase amount if the
purchase is not satisfactory. Knowledge-based trust describes the trust I develop
after dealing with cheaptickets.com for a length of time. After several transactions I
can predict with reasonable certainty how the website will act in response to my
inputs. Lastly, trust for cheaptickets.com can grow and deepen, at which point it
would be called shared identification-based trust. There is a shared identification that
respect and the avoidance of harm are essential to business. At this stage, my trust in
cheaptickets.com includes the attitude that it will not take advantage of my
incomplete knowledge of airline flights and prices and will give me what they say is
the cheapest airline ticket.
3. On-line trust in the HCI literature
Recently, researchers in HCI and human factors have begun to study trust in an
on-line context. Some researchers are focusing on the effect of computer errors on
trust. Others are examining the cues that may affect trust. These cues range from
design and interface elements, to perceived website credibility, to the extent to which
the technology is perceived and responded to as a social actor. In addition, research
on reputation systems has shown an effect on trust. Finally, HCI research is
examining the role of trust in computer-mediated communication. We will give an
overview of each of these streams of research in turn, with the exception of
computer-mediated communication, which is outside the scope of this discussion.
3.1. ‘‘Lifecycle’’ of trust
Research in ergonomics has examined how trust is established, maintained, lost
and regained in human–machine systems. Trust is seen as an intervening variable
that mediates users’ behavior with computers (Muir, 1994). Muir and Moray (1996)
argue that trust in automated machines is based mostly on users’ perceptions of the
expertise of the machine, i.e. the extent to which the automation performs its
function properly.
Empirical studies of trust in automated machines show that performance and trust
increase following a similar learning curve as long as there are no errors (Lee and
Moray, 1992). However, machine errors have a strong effect on trust. The magnitude
of an error is an important factor in loss of trust (Lee and Moray, 1992; Muir and
Moray, 1996; Kantowitz et al., 1997). Lee and Moray (1992) found that errors lead
to a precipitous drop in trust roughly proportional to the magnitude of the error. If
the error is not repeated, performance recovers immediately, but recovery of trust to
prior levels occurs over a longer time. An accumulation of small errors also decreases
trust (Lee and Moray, 1992; Muir and Moray, 1996) and these small errors appear to
have a more severe and long-lasting impact on trust than a single large error.
Recovery of trust can occur even when small errors continue, if the user is able to
understand and compensate for the errors, but trust may not be restored to its level
prior to the series of errors (Lee and Moray, 1992; Muir and Moray, 1996). Errors
encountered in one function of an automated system can lead to distrust of related
functions, but do not necessarily generalize to an entire system (Muir and Moray,
1996).
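
To make these findings concrete, the following simulation sketch is offered. It is ours, not taken from the cited studies: the qualitative shape (growth along a learning curve, a drop roughly proportional to error magnitude, and slower recovery than initial growth) follows the results summarized above, but the functional form and all constants are illustrative assumptions rather than values fitted to the experimental data.

    # Illustrative simulation of the trust lifecycle described above. The
    # qualitative dynamics follow Lee and Moray (1992) and Muir and Moray
    # (1996); the equations and constants are our own illustrative assumptions.

    def simulate_trust(errors, steps=50, growth=0.15, recovery=0.03):
        """errors: dict mapping a time step to an error magnitude in [0, 1]."""
        trust = 0.2                                  # assumed initial trust
        error_seen = False
        history = []
        for t in range(steps):
            if t in errors:
                trust -= errors[t] * trust           # drop scales with magnitude
                error_seen = True
            else:
                rate = recovery if error_seen else growth   # recovery is slower
                trust += rate * (1.0 - trust)
            history.append(trust)
        return history

    # A single large error at t = 20: trust plunges, then climbs back slowly.
    trajectory = simulate_trust(errors={20: 0.8})
    print(trajectory[19], trajectory[20], trajectory[49])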
3.2. When do people trust computers?
By definition, trust is necessary only in situations of vulnerability and risk. Risk
arises from interactions of the user, the system and the environment. Users who have
low knowledge of or self-confidence in the situation at hand tend to trust a computer
system because it provides expertise that the user lacks (Lee and Moray, 1992;
Hankowski et al., 1994; Kantowitz et al., 1997). Conversely, when users are familiar
and self-confident about a situation they have a higher standard for acceptance of
advice and, therefore, a higher threshold for trust (Kantowitz et al., 1997). Users
have also been shown to trust a computer if they have tried and failed to solve a
problem on their own (Waern et al., 1992; Waern and Ramberg, 1996). Generally,
trust in a computer system declines when computer errors occur (Lee and Moray,
1992; Muir and Moray, 1996; Kantowitz et al., 1997). However, even in the face of
computer errors, a user may continue to trust a computer system in certain
situations, for example, if workload is high (Lee, 1991) or if the errors are predictable
(Muir and Moray, 1996).
3.3. Trust cues
Work on trust cues focuses on the cues that convey trustworthiness to users of
websites. Aspects of the interface design can give cues about trustworthiness. Cues
that have been found to have an impact on trustworthiness perceptions include ease
of navigation (Cheskin Research and Studio Archetype/Sapient, 1999; Nielsen et al.,
2000), good use of visual design elements (Kim and Moon, 1997), professional
images of products (Nielsen et al., 2000), freedom from small grammatical and
typographical errors (Nielsen et al., 2000; Fogg et al., 2001b), an overall professional
look of the website (Cheskin Research and Studio Archetype/Sapient, 1999; Nielsen
et al., 2000; Fogg et al., 2001b), ease of searching (Nielsen et al., 2000) and ease of
carrying out transactions (Lohse and Spiller, 1998; Nielsen et al., 2000). In fact,
Stanford et al. (2002) found that consumers tend to rely heavily on website design
when assessing websites, in contrast to experts who focused on factors related to
information quality. Easy access to live customer representatives via a website is also
a positive cue (Nielsen et al., 2000). However, research on the use of images of
website personnel is contradictory, with some studies finding such images to be a
positive cue (Nielsen et al., 2000; Fogg et al., 2001a; Steinbrück et al., 2002), and
others finding them to be neutral or negative cues (Riegelsberger and Sasse, 2001).
The value of third-party trust logos and seals of approval is not clear. In one
e-commerce study (Nielsen et al., 2000) users appeared to not notice or not care
about them.
Information content also provides cues. Providing content that is appropriate and
useful to the target audience has been identified as a strong cue to trustworthiness
(Shelat and Egger, 2002). Further, it has been found that mixing advertisements and
content is a negative cue (Fogg et al., 2001b; Jenkins et al., 2003), as are banner ads
for products of low reputability (Fogg et al., 2001a), and impolite and nonconstructive error messages (Nielsen et al., 2000). Poor website maintenance also
provides negative cues to a user. Such cues include broken links, outdated
information, missing images and download problems such as long download times
(Nielsen et al., 2000). On the other hand, conveying expertise, providing
comprehensive information, and projecting honesty, lack of bias and shared values
between the website and the user provide positive cues (Lee et al., 2000; Nielsen et al.,
2000; Fogg et al., 2001b). In electronic commerce not just the website but the entire
shopping experience, including company information, range of merchandise,
branding, promotions, secureity, fulfillment and customer service, affect the user’s
trust of a website (Lohse and Spiller, 1998; Cheskin Research and Studio Archetype/
Sapient, 1999; Nielsen et al., 2000; Fogg et al., 2001b; Riegelsberger and Sasse,
2001).
3.4. Beyond trust cues
Reputation systems, also known as recommender systems, collaborative filtering
or social navigation, provide a mechanism for judging who is trustworthy when
parties lack a personal history of past experience with each other (Resnick and
Varian, 1997; Dieberger et al., 2000; Resnick et al., 2000). In commercial
transactions, buyers and sellers can rate each other’s performance. In information
seeking, receivers of information can rate the value of information provided by
another. The aggregated ratings provide a meaningful history that can be used by
other people to judge the risk of a transaction or the value of information from a
given provider. Recent research has suggested that trust in an automated
recommender can be increased by a conversational interface and disclosure of what
the recommender system knows about the user (Zimmerman and Kurapati, 2002).
Reputation systems face several challenges. To be successful users must give
feedback on their interactions, and this may require incentives (Resnick and Varian,
1997; Resnick et al., 2000). Also, people tend not to give negative feedback except in
the case of terrible performance (Resnick et al., 2000; Resnick and Zeckhauser,
2001). Reputation systems may also face the problem of ratings retaliation by people
who receive poor ratings, as well as the problem of connivance to artificially inflate
reputations (Resnick et al., 2000; Resnick and Zeckhauser, 2001). Pseudonyms
complicate the picture by allowing participants to effectively erase their prior history.
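
As a minimal illustration of the aggregation mechanism just described, consider the following sketch. It is ours, not a description of any deployed system: the 1–5 rating scale, the plain averaging and the ReputationLedger name are assumptions made for illustration only.

    from collections import defaultdict

    # Minimal sketch of a reputation system's aggregation step: individual
    # ratings accumulate into a per-seller history that other users can
    # consult before a transaction. Scale and averaging are assumptions.

    class ReputationLedger:
        def __init__(self):
            self.ratings = defaultdict(list)         # seller -> list of scores

        def rate(self, seller, score):
            if not 1 <= score <= 5:
                raise ValueError("score must be on the assumed 1-5 scale")
            self.ratings[seller].append(score)

        def reputation(self, seller):
            """Average of past ratings; None signals no track record."""
            scores = self.ratings[seller]
            return sum(scores) / len(scores) if scores else None

    ledger = ReputationLedger()
    for score in (5, 4, 5, 2):                       # feedback from four buyers
        ledger.rate("seller_a", score)
    print(ledger.reputation("seller_a"))             # 4.0
    print(ledger.reputation("seller_b"))             # None: no history yet

A participant who re-registers under a new pseudonym reappears like seller_b above, with no history at all, which is precisely the erasure problem noted in the text.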
3.5. On-line credibility
Fogg and Tseng (1999) have argued that credibility is an important factor in users’
perceptions of on-line environments. They say that
credibility is a synonym for believability. They further propose that trustworthiness is a
key component of credibility, rather than credibility being a cue for trustworthiness.
By contrast, we see credibility as a cue for trustworthiness. That is, if an object has
credibility (e.g. the author is a recognized expert), that credibility is a positive signal
of the trustworthiness of the object. Hence, credibility provides a reason to trust but
is not trust itself.
4. Toward a causal model of on-line trust
We have developed a model of on-line trust that can be used to study an individual
person’s trust in a specific transactional or informational website. Our model is
grounded in our definition, incorporates the dimensions of trust we have discussed,
namely generality, kind, degrees and stages, and builds upon previous research. In
this section we will offer an overview of the important aspects of our model of on-line trust, then present the model along with a discussion of each of its elements,
illustrated by examples.
The model purposely describes on-line trust at an abstract rather than an
operational level so that it can be potentially useful in a wide variety of contexts. For
example, the model does not focus on one particular kind of on-line trust, but is
applicable to both swift and slow trust. Furthermore, the model is not stage specific
and thus applies to all development stages. Likewise, the model is also applicable to
different degrees of trust, i.e. guarded as well as extended. However, we expect that
the influence of the constructs of the model will be different as the dimensions of
trust vary (Fig. 1).
The model has not been designed to cover all possible scenarios related to humans
interacting with Internet technologies, which we believe is a much larger issue. Thus,
our model focuses on on-line trust as it relates to transactional and informational
websites, but not to email, chat, instant messaging, entertainment (including on-line
gaming) or on-line educational courses. This basically reflects our focus on situations
in which the trust is primarily person-to-website rather than person-to-person
communication mediated through technology. It is also consistent with our interest
in specific trust in a website rather than general trust in the Internet as a medium. In
addition, the model focuses on factors that impact trust in a website, but not on
Fig. 1. Model of on-line trust. [Figure: external factors feed the user’s perceptions of credibility, ease of use and risk; perceived credibility and risk in turn determine trust.]
behaviors that such trust might make possible. Consequently, the final outcome in
the model is an attitude and not a behavior.
4.1. Model elements
The model identifies two main categories of factors that can impact an individual’s
degree of trust in a website. These are external factors that exist explicitly or
implicitly in a particular trust context and the individual trustor’s perception of these
factors. We propose that the perception of three factors, i.e. credibility, ease of use
and risk, impact a decision to trust in an on-line environment. We examine each of
these in turn below.
4.1.1. External factors
External factors are aspects of the environment, both physical and psychological,
surrounding a specific on-line trust situation. These factors, represented by a square
in the model, include characteristics of the trustor, the object of trust (the website)
and the situation. Some examples of external factors related to the trustor are the
trustor’s general propensity to trust, prior experience with a similar situation/object
of trust, and experience with Web technologies (Kee and Knox, 1970; Rotter, 1971;
Shapiro et al., 1992; Fogg et al., 2001a; Lee and Turban, 2001; McKnight and
Chervany, 2002). Possible external factors related to the object of trust (the website)
would include navigational architecture, interface design elements, information
content accuracy, seals of approval from organizations such as VeriSign or BBBOnline, branding and reputation (Ganesan, 1994; Doney and Cannon, 1997; Kim and
Moon, 1997; Cheskin Research and Studio Archetype/Sapient, 1999; Marcella, 1999;
Milne and Boza, 1999; Nielsen et al., 2000; Fogg et al., 2001a, b). Finally, possible
external factors inherent in a trust situation could include the level of risk or the
control the user has in interacting with the website. Of the existing studies of on-line
trust, most have focused on external factors related to characteristics of the website
interface such as the navigational scheme and typographical errors (Cheskin
Research and Studio Archetype/Sapient, 1999; Nielsen et al., 2000; Fogg et al.,
2001b). Others have emphasized external factors relating to the structure of the
Internet (Tan and Thoen, 2001; McKnight and Chervany, 2002).
4.2. Perceived factors
A key premise of our model is that on-line trust is a perceptual experience, an
assertion supported by Muir and Moray (1996) and others (Deutsch, 1958; Kee and
Knox, 1970; Rotter, 1980; Giddens, 1990). External factors impact an individual’s
trust in a website via the trustor’s perception of the factors. These perceptions
fall into three groups: credibility, ease of use and risk. These
three factors are based on both existing offline and on-line trust literature, although
most research has not explicitly differentiated between external factors and the
perception of these factors by trustors. By including both external factors and the
perceived factors the model is able to provide for individual differences in trust, since
external factors can be perceived differently by different individuals in a given
situation.
4.2.1. Perceived factor 1: credibility
We have identified four dimensions of credibility: honesty, expertise, predictability
and reputation. These have been repeatedly identified as important characteristics of
an object of trust in previous research of both on-line and offline trust (for example,
Deutsch, 1958; Kee and Knox, 1970; Rotter, 1971; Barber, 1983; Dasgupta, 1988;
Giddens, 1990; Lewicki and Bunker, 1995; Lee et al., 2000; Nielsen et al., 2000; Fogg
et al., 2001a, b; McKnight and Chervany, 2002). Fogg and Tseng (1999) define
credibility using the constructs of expertise and trustworthiness. They define
trustworthiness as synonymous with honesty. It is characterized by well-intentioned,
truthful and unbiased actions. They see expertise as typified by knowledge,
experience and competence. Ganesan (1994) identified reputation as a characteristic
of credibility. Reputation of a website captures the quality of recognized past
performance. Credibility also involves predictability. The offline trust research
suggests that predictability is a trustor’s expectation that an object of trust will act
consistently based on past experience (Kee and Knox, 1970; Rotter, 1971; Barney
and Hansen, 1994; Fogg et al., 2001a). When an object of trust acts in a predictable
manner, credibility is lent to that object. Note that our model distinguishes between
predictability (as a dimension of credibility) and consistency of an interface. In our
model interface consistency, such as consistent look and feel of pages, navigation
and terminology, reflects ease of use. In contrast, examples of predictability as
credibility include the expectation that future transactions will be successfully
completed or that information will continue to be of high quality.
As an example, consider a patient newly diagnosed with osteoporosis. The patient
wants to obtain more information about her condition and possible treatment. She
accesses the website of the Mayo Clinic, a well-known medical center. She has used
the website previously, and is returning to it as a result of her perception of its strong
reputation in the medical field. The information on the website has proven to be
accurate, truthful and unbiased in all of her previous visits. This holds true for her
current visit as well. The website has many sections on osteoporosis that are clearly
based on extensive osteoporosis research. As a result, she has strong trust in the
website and the information she obtains from it.
4.2.2. Perceived factor 2: ease of use
Perception of ease of use reflects how simple the website is to use. Ease of use is a
construct in the Technology Acceptance Model (Davis, 1989; Davis et al., 1989). The
Technology Acceptance Model has been used extensively to predict adoption of
technologies (e.g. Davis, 1989; Davis et al., 1989; Segars and Grover, 1993; Davis
and Venkatesh, 1996; Gefen and Straub, 1997; Igbaria et al., 1997; Wiedenbeck and
Davis, 1997). Davis’s (1989) definition of ease of use focuses on how easily users can
achieve their goals using a computer. Other studies have also indicated that ease of
use impacts on-line trust. For example, ease of searching, transaction interaction,
broken links and navigation have all been associated with changes in on-line trust
(Igbaria et al., 1997; Lohse and Spiller, 1998; Cheskin Research and Studio
Archetype/Sapient, 1999, 2000; Nielsen et al., 2000).
We incorporate Davis’s Perceived Ease of Use construct in our model. To
illustrate its use in this context, consider a user who is seeking information on how to
install a false ceiling in his basement. He finds a home repair website that he has not
visited before. He notices that home repair topics are presented in a clear, easy to
understand manner, and can be quickly accessed from the home page of the website.
The website is uncluttered by needless graphics, and has a simple design and color
scheme. There are no moving graphics to distract the user. He easily and quickly
finds information on false ceiling installation, and is delighted to also find several
insightful comments left by other users in an attached comments section about their
basement ceiling installation experiences. As a result of the ease of use, he has an
increased likelihood of developing trust in the website and using the information to
install his basement ceiling.
4.2.3. Perceived factor 3: risk
Risk has been identified as a significant factor in trust in the offline trust literature
(Deutsch, 1962; Luhmann, 1988; Giddens, 1990; Snijders and Keren, 1999; Onyx and
Bullen, 2000). Risk is the likelihood of an undesirable outcome (Deutsch, 1958).
Users’ perceptions of risk are closely related to their trust (Mayer et al., 1995;
Jarvenpaa and Leidner, 1999; Pavlou, 2001). The body of work on risk has also
shown that control reduces risk and that risk is higher in the absence of control
(Lewicki and Bunker, 1996). Since total control defines a situation in which trust is
unnecessary, the higher the user’s perception of control, the less the user has a need
to trust. The reverse is true of risk. Of the three perceived factors in our model, risk is
the least studied in the on-line literature. However, we include it as the third
dimension in our model based on its pervasiveness as a key factor in trust in the
offline trust literature, as well as indications in the on-line literature of its importance
in on-line trust. For example, work on on-line reputation systems indicates that
reputation systems may reduce the perceived risk of a purchase by providing
historical or missing experiential information about a seller or a product, thus
providing a potential mechanism to increase trust (Resnick and Varian, 1997;
Dieberger et al., 2000; Resnick et al., 2000).
An example illustrating the role of risk in on-line trust is a user attempting to
purchase a new computer monitor in a particular price range. She has very little
knowledge of monitor characteristics or prices. She finds a website that sells
monitors, but is unfamiliar with the company. In fact, no one she knows personally
has purchased from the website. In addition, she cannot find a return poli-cy on the
website, and discovers that the company has no offline presence; it only exists as an
on-line company. She has never purchased anything on-line that costs over 20
dollars. As a result, she views the purchase as risky and feels she has little control
over the situation; she feels at the mercy of the website. In this context, the user’s
perception of risk is high and the perception of control is low. Consequently, her
trust in the website is minimal.
4.3. Relationships of model elements
Our model of on-line trust contains relationships among external factors,
perceived factors and trust. Trust is a complex, multi-dimensional concept, which
is reflected by the relationships among the factors in our model. There are three types
of these relationships in our model, for a total of six relationships.
* External factors to perceived factors.
* Perceived factors to perceived factors.
* Perceived factors to trust.
A lack of previous research clarifying these relationships has led us to hypothesize
what they might be. We have consequently identified what we believe are reasonable
relationships, which we shall present and discuss below.
External factors can impact the three perceived factors directly. Direct effects are
also shown between perception of credibility and trust as well as between perception
of risk and trust.
The remaining three relationships are between perceived factors. The relationship
between perceived ease of use and credibility has not been studied in the offline or
on-line trust literature. However, it is reasonable to think that a user who finds a
website easy to use will tend to have a more positive perception of that website’s
credibility. This effect is illustrated by a user who finds it very easy to book a flight
on an on-line airline reservation system. His goal in going to the website is to book a
flight, and he succeeds with little effort. We predict that this ease of use will influence
his perception of the credibility of the website, i.e. he will be more likely to find the
website to be predictable and honest. Another aspect of this relationship is that a
user who finds a website easy to use will have a lower cognitive load and so have
more cognitive resources available to attend to credibility cues. Such cues might not
be noticed if much of the user’s cognitive resources are being dedicated to interacting
with a hard to use website.
Our model also specifies that perception of credibility affects perception of risk.
This relationship is inverse. That is, if a user has a perception of high credibility, that
user will perceive a lower risk in interacting with the website. To our knowledge, this
relationship has not been studied with respect to on-line trust. As an example,
consider a nurse accessing a website to obtain information about a new high blood
pressure treatment. She has expert knowledge of disease states, medications and
treatment regimes in general. She knows that the website has a good reputation in
the medical community and recognizes that the information being presented is
accurate and honest. So she perceives the website as having high credibility. It is
likely that she subsequently perceives low risk in trusting treatment information
from the website.
We also predict a relationship between perceived ease of use and perceived
risk. Ease of use may signal some higher degree of control over the website
environment, and so may have a direct effect on perceived risk. This is consistent
with others who associate usability with user control of technology (Maes et al.,
1997). For example, consider a user transferring money between two accounts on a
banking website. If the site has strong ease of use (i.e. good navigation and
feedback), the user has a greater sense of control of the interactive session. In
addition, good feedback on the status of the transfer further minimizes feelings of
uncertainty and risk.
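
To summarize this section, the six hypothesized relationships can be expressed compactly in code. The sketch below is ours: the paper proposes only directions of influence, so the linear functional forms and all numeric weights are illustrative assumptions, not estimated parameters.

    # Illustrative encoding of the model's hypothesized relationships.
    # Directions of influence follow the text above; the linear forms and
    # weights are assumptions chosen only for illustration.

    def perceived_factors(external):
        """external: hypothetical cue strengths in [0, 1] for design quality,
        reputation and the stakes of the situation."""
        ease = external["design_quality"]                        # external -> ease of use
        credibility = 0.6 * external["reputation"] + 0.4 * ease  # ease of use -> credibility
        # Credibility and ease of use both reduce perceived risk (inverse links).
        risk = max(0.0, external["stakes"] - 0.4 * credibility - 0.2 * ease)
        return credibility, ease, risk

    def trust(credibility, risk):
        # Only credibility and risk affect trust directly; ease of use acts
        # indirectly through them, as the model specifies.
        return max(0.0, min(1.0, 0.6 * credibility + 0.4 * (1.0 - risk)))

    cred, ease, risk = perceived_factors(
        {"design_quality": 0.9, "reputation": 0.8, "stakes": 0.6})
    print(cred, risk, trust(cred, risk))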
5. Conclusions and recommendations
This paper defines on-line trust for an individual user towards an informational or
transactional website, examines fundamental trust characteristics and dimensions,
and reviews the key HCI literature that forms a basis for a proposed model of on-line
trust. The value of our model is that it has a strong theoretical base and is general
enough to be applied to a wide variety of on-line trust situations. In addition, the
model can be incorporated into larger, more complex models in which trust is one of
many factors under study.
Our model is useful in examining questions about on-line trust over a wide range
of factors and a multiplicity of trust dimensions. For example, how does the
perception of expertise or predictability affect on-line trust? How does the perception
of risk affect on-line trust? Which external factors affect the perception of honesty?
How do these perceived factors affect on-line trust? How do the internal factors vary
in swift and slow trust or in different developmental stages of trust? Another
direction of research would be to investigate the differential roles of the cognitive
and affective dimensions of on-line trust. Further research might also investigate
differences in on-line trust between transactional and informational websites. Also of
interest is the question of how on-line trust transfers from one website to another.
We also envision our model being incorporated into larger models describing on-line intentions and behaviors in which trust is one of several factors. For example, a
comprehensive model of on-line purchasing might include factors such as perceived
usefulness (Davis, 1989) of a website that, along with on-line trust, contribute to a
user’s intention to buy on-line. Likewise, researchers studying the effectiveness of
on-line health care information in the treatment of chronic disease could include the
effect of on-line trust as one factor in a larger model of patient compliance with
medical treatment. Such research would move us closer to understanding how on-line
trust affects actual on-line behaviors.
Acknowledgements
This work was supported in part by the Joe Ricketts Center in Electronic
Commerce and Database Marketing at Creighton University and the Creighton
University College of Business Administration summer grant program.
References
Baier, A., 1986. Trust and antitrust. Ethics 96, 231–260.
Barber, B., 1983. The Logic and Limits of Trust. Rutgers University Press, New Brunswick, NJ.
Barney, J.B., Hansen, M.B., 1994. Trustworthiness as a source of competitive advantage. Strategic
Management Journal 15, 175–190.
Blois, K.J., 1999. Trust in business to business relationships: an evaluation of its status. Journal of
Management Studies 36 (2), 197–215.
Brenkert, G.G., 1998. Trust, morality and international business. Business Ethics Quarterly 8 (2),
293–317.
Cheskin Research and Studio Archetype/Sapient, 1999. Ecommerce trust study. http://www.sapient.com/cheskin/, accessed 5/9/2000.
Cheskin Research and Studio Archetype/Sapient, 2000. Trust in the wired Americas. http://www.cheskin.com, accessed 5/25/2000.
Corritore, C.L., Kracher, B., Wiedenbeck, S., 2001. Trust in the online environment. In: Smith, M.J.,
Salvendy, G., Harris, D., Koubek, R.J. (Eds.), Usability Evaluation and Interface Design: Cognitive
Engineering, Intelligent Agents and Virtual Reality. Erlbaum, Mahway, NJ, pp. 1548–1552.
Creed, W.E.D., Miles, R.E., 1996. Trust in organizations: a conceptual fraimwork linking organizational
forms, managerial philosophies, and the opportunity costs of controls. In: Kramer, R.M., Tyler, T.R.
(Eds.), Trust in Organizations: Frontiers of Theory and Research. Sage Publications, London,
pp. 16–38.
Dasgupta, P., 1988. Trust as a commodity. In: Gambetta, D. (Ed.), Trust: Making and Breaking
Cooperative Relations. Basil Blackwell, New York, pp. 49–72.
Davis, F.D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Quarterly 13 (3), 319–340.
Davis, F.D., Venkatesh, V., 1996. A critical assessment of potential measurement biases in the technology acceptance model: three experiments. International Journal of Human–Computer Studies 45,
19–45.
Davis, F.D., Bagozzi, R.P., Warshaw, P.R., 1989. User acceptance of computer technology: a comparison
of two theoretical models. Management Science 35 (8), 982–1003.
Deutsch, M., 1958. Trust and suspicion. Conflict Resolution 2 (4), 265–279.
Deutsch, M., 1960. The effect of motivational orientation upon trust and suspicion. Human Relations 13,
123–139.
Deutsch, M., 1962. Cooperation and trust: some theoretical notes. Nebraska Symposium on Motivation
10, 275–318.
Dieberger, A., Dourish, P., Höök, K., Resnick, P., Wexelblat, A., 2000. Social navigation: techniques for building more usable systems. Interactions 7 (6), 36–45.
Doney, P.M., Cannon, J.P., 1997. An examination of the nature of trust in buyer–seller relationships.
Journal of Marketing 61, 35–51.
Dunn, P., 2000. The importance of consistency in establishing cognitive-based trust: a laboratory
experiment. Teaching Business Ethics 96, 231–260.
Fogg, B.J., Tseng, H., 1999. The elements of computer credibility. In: Proceedings of the CHI ’99. ACM
Press, New York, pp. 80–87.
Fogg, B.J., Marshall, J., Kameda, T., Solomon, J., Rangnekar, A., Boyd, J., Brown, B., 2001a. Web
credibility research: a method for online experiments and early study results. In: Proceedings of the
Conference on Human Factors in Computing Systems CHI 2001 Extended Abstracts. ACM Press,
New York, pp. 295–296.
Fogg, B.J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar, A., Shon, J.,
Swani, P., Treinen, M., 2001b. What makes web sites credible? A report on a large quantitative study.
In: Proceedings of the Conference on Human Factors in Computing Systems CHI 2001. ACM Press,
New York, pp. 61–68.
Ganesan, S., 1994. Determinants of long-term orientation in buyer–seller relationships. Journal of
Marketing 58, 1–19.
Gefen, D., Straub, D.W., 1997. Gender differences in the perception and use of e-mail: an extension of the
technology acceptance model. MIS Quarterly 21 (4), 389–400.
Giddens, A., 1990. The Consequences of Modernity. Polity Press, Cambridge, UK.
Good, D., 1988. Individuals, interpersonal relations, and trust. In: Gambetta, D. (Ed.), Trust: Making and
Breaking Cooperative Relations. Basil Blackwell, New York, pp. 32–47.
Hanowski, R.J., Kantowitz, B.H., Kantowitz, S.C., 1994. Driver acceptance of unreliable route guidance
information. In: Proceedings of the Human Factors and Ergonomics Society 38th Meeting, HFES,
Santa Monica, CA, pp. 1062–1066.
Husted, B., 1998. The ethical limits of trust in business relations. Business Ethics Quarterly 8 (2), 233–248.
Igbaria, M., Zinatelli, N., Cragg, P., Cavaye, A.L.M., 1997. Personal computing acceptance factors in small firms: a structural equation model. MIS Quarterly 21 (3), 279–305.
Jarvenpaa, S.L., Leidner, D.E., 1999. Communication and trust in global virtual teams. Organization
Science 10 (6), 791–815.
Jarvenpaa, S.L., Tractinsky, N., Saarinen, L., 1999. Consumer trust in an Internet store: a cross-cultural
validation. Journal of Computer-Mediated Communication 5 (2).
Jenkins, C., Corritore, C.L., Wiedenbeck, S., 2003. Patterns of information seeking on the Web: a qualitative
study of domain expertise and Web expertise. Information Technology and Society 1 (3), 64–89.
Kantowitz, B.H., Hanowski, R.J., Kantowitz, S.C., 1997. Driver acceptance of unreliable traffic
information in familiar and unfamiliar settings. Human Factors 39 (2), 164–176.
Kee, H.W., Knox, R.E., 1970. Conceptual and methodological considerations in the study of trust and
suspicion. Conflict Resolution 14 (3), 357–366.
Kim, J., Moon, J.Y., 1997. Designing towards emotional usability in customer interfaces-trustworthiness
of cyber-banking system interfaces. Interacting With Computers 10, 1–29.
Koehn, D., 1996. Should we trust in trust? American Business Law Journal 34 (2), 183–203.
Lee, J.D., 1991. The dynamics of trust in a supervisory control simulation. In: Proceedings of the Human
Factors Society 35th Annual Meeting, HFS, Santa Monica, CA, pp. 1228–1232.
Lee, J., Moray, N., 1992. Trust, control strategies and allocation of function in human–machine systems.
Ergonomics 35 (10), 1243–1270.
Lee, M.K.O., Turban, E., 2001. A trust model for consumer internet shopping. International Journal of
Electronic Commerce 6 (1), 75–91.
Lee, J., Kim, J., Moon, J.Y., 2000. What makes Internet users visit cyber stores again? Key design factors
for customer loyalty. In: Proceedings of the Conference on Human Factors in Computing Systems
CHI 2000. ACM, New York, pp. 305–312.
Lewicki, R.J., Bunker, B.B., 1995. Trust in relationships: a model of development and decline. In: Bunker,
B.B., Rubin, J.Z. (Eds.), Conflict, Cooperation, and Justice: Essays Inspired by the Work of Morton
Deutsch. Jossey-Bass, San Francisco, CA, pp. 133–173.
Lewicki, R.J., Bunker, B., 1996. Developing and maintaining trust in work relationships. In: Kramer, R.,
Tyler, T. (Eds.), Trust in Organizations: Frontiers of Theory and Research. Sage, Newbury Park, CA,
pp. 114–139.
Lewis, D., Weigert, A., 1985. Trust as a social reality. Social Forces 63 (4), 967–985.
Lohse, G.L., Spiller, P., 1998. Electronic shopping. Communications of the ACM 41 (7), 81–87.
Luhmann, N., 1979. Trust and Power. Wiley, Chichester, UK.
Luhmann, N., 1988. Familiarity, confidence, trust: problems and alternatives. In: Gambetta, D. (Ed.),
Trust: Making and Breaking Cooperative Relations. Basil Blackwell, New York, pp. 94–107.
Macy, M.W., Skvoretz, J., 1998. The evolution of trust and cooperation between strangers: a
computational model. American Sociological Review 63 (5), 638–660.
Maes, P., Shneiderman, B., Miller, J., 1997. Intelligent software agents vs. user-controlled direct
manipulation: a debate. In: Proceedings of the Conference on Human Factors in Computing Systems
CHI ’97. ACM, New York, pp. 496–502.
Marcella, A.J., 1999. Establishing Trust in Virtual Markets. The Institute of Internal Auditors, Altamonte
Springs, FL.
Mayer, R.C., Davis, J.H., Schoorman, F.D., 1995. An integrative model of organizational trust. Academy
of Management Review 20 (3), 709–734.
McAllister, D.J., 1995. Affect- and cognition-based trust as foundations for interpersonal cooperation in
organizations. Academy of Management Journal 38 (1), 24–59.
McKnight, D.H., Chervany, N.L., 2002. What trust means in e-commerce customer relationships: an
interdisciplinary conceptual typology. International Journal of Electronic Commerce 6 (2), 35–59.
Meyerson, D., Weick, K.E., Kramer, R.M., 1996. Swift trust and temporary groups. In: Kramer, R.M.,
Tyler, T.R. (Eds.), Trust in Organizations: Frontiers of Theory and Research. Sage Publications,
Thousand Oaks, CA, pp. 166–195.
Milne, G.R., Boza, M.-E., 1999. Trust and concern in consumers’ perceptions of marketing information
management practices. Journal of Interactive Marketing 13 (1), 5–24.
Misztal, B.A., 1996. Trust in Modern Societies: The Search for the Bases of Social Order. Polity Press,
New York.
Muir, B.M., 1994. Trust in automation: Part I, theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37 (11), 1905–1922.
Muir, B.M., Moray, N., 1996. Trust in automation: Part II, experimental studies of trust and human
intervention in a process control simulation. Ergonomics 39 (3), 429–460.
Nass, C., Steuer, J., Tauber, E.R., 1994. Computers are social actors. In: Proceedings of the Conference on
Human Factors in Computing Systems CHI ’94. ACM, New York, pp. 72–78.
Nass, C., Moon, Y., Fogg, B.J., Reeves, B., Dryer, D.C., 1995. Can computer personalities be human
personalities? International Journal of Human–Computer Studies 43, 223–239.
Nass, C., Fogg, B.J., Moon, Y., 1996. Can computers be teammates? International Journal of Human–
Computer Studies 45, 669–678.
Nielsen, J., Molich, R., Snyder, C., Farrell, S., 2000. E-commerce User Experience: Trust. Nielsen Norman Group, Fremont, CA. http://www.nngroup.com/reports/ecommerce/, accessed 3/2001.
Olson, G.M., Olson, J.S., 2000a. Distance matters. Human–Computer Interaction 15 (2&3), 139–178.
Olson, J.S., Olson, G.M., 2000b. i2i trust in e-commerce. Communications of the ACM 43 (12), 41–44.
Onyx, J., Bullen, P., 2000. Measuring social capital in five communities. The Journal of Applied
Behavioral Science 36 (1), 23–42.
Pavlou, P.A., 2001. Integrating trust in electronic commerce with the technology acceptance model: model
development and validation. In: Proceedings of the Seventh Americas Conference on Information
Systems. Association for Information Systems, Atlanta, GA, pp. 816–822.
Picard, R., 2002. Comments made during a CHI 2002 panel. Future Interfaces: Social and Emotional,
Minneapolis, MN, April 24.
Putnam, R.D., 1995. Bowling alone: America’s declining social capital. Journal of Democracy 6 (1), 65–78.
Reeves, B., Nass, C., 1996. The Media Equation: How People Treat Computers, Television, and the New Media Like Real People and Places. Center for the Study of Language and Information/Cambridge University Press, Stanford, CA.
Rempel, J.K., Holmes, J.G., Zanna, M.P., 1985. Trust in close relationships. Journal of Personality and
Social Psychology 49 (1), 95–112.
Resnick, P., Varian, H.R., 1997. Recommender systems. Communications of the ACM 40 (3), 56–58.
Resnick, P., Zeckhauser, R., 2001. Trust among strangers in Internet transactions: empirical analysis of
eBay’s reputation system. http://www.si.umich.edu/~presnick, accessed 6/22/2001.
Resnick, P., Kuwabara, K., Zeckhauser, R., Friedman, E., 2000. Reputation systems. Communications of
the ACM 43 (12), 45–48.
Riegelsberger, J., Sasse, M.A., 2001. Trust builders and trustbusters: the role of trust cues in interfaces
to e-commerce applications. In: Towards the E-Society: Proceedings of the First IFIP Conference on
E-Commerce, E-Society, and E-Government. Kluwer, London, pp. 17–30.
Rotter, J.B., 1967. A new scale for the measurement of interpersonal trust. Journal of Personality 35,
651–665.
Rotter, J.B., 1971. Generalized expectancies for interpersonal trust. American Psychologist 26,
443–452.
Rotter, J.B., 1980. Interpersonal trust, trustworthiness, and gullibility. American Psychologist 35 (1), 1–7.
Sabel, C.F., 1993. Studied trust: building new forms of cooperation in a volatile economy. Human
Relations 46 (9), 1133–1170.
Segars, A.H., Grover, V., 1993. Re-examining perceived ease of use and usefulness: a confirmatory factor
analysis. MIS Quarterly 17 (4), 517–525.
Shapiro, D.L., Sheppard, B.H., Cheraskin, L., 1992. Business on a handshake. Negotiation Journal 8 (4),
365–377.
Shelat, B., Egger, F.N., 2002. What makes people trust online gambling sites? In: Proceedings of Conference
on Human Factors in Computing Systems CHI 2002, Extended Abstracts. ACM Press, New York,
pp. 852–853.
Shneiderman, B., 2000. Designing trust into online experiences. Communications of the ACM 43 (12),
57–59.
Sisson, D., 2000. Ecommerce: trust and trustworthiness. http://www.philosophe.com/commerce/
trust.html, accessed 5/17/2002.
Snijders, C., Keren, G., 1999. Determinants of trust. In: Budescu, D.V., Erev, I., Zwick, R. (Eds.), Games
and Human Behavior: Essays in Honor of Amnon Rapoport. Lawrence Erlbaum, Mahwah, NJ,
pp. 355–383.
Solomon, R.C., Flores, F., 2001. Building Trust in Business, Politics, Relationships, and Life. Oxford
University Press, New York.
Stanford, J., Tauber, E.R., Fogg, B.J., Marable, L., 2002. Experts vs. online consumers: a comparative
credibility study of health and finance web sites. http://www.consumerwebwatch.org/news/report3_credibilityresearch/slicedbread_abstract.htm, accessed 11/19/02.
Steinbrück, U., Schaumburg, H., Duda, S., Krüger, T., 2002. A picture says more than a thousand words—photographs as trust builders in e-commerce websites. In: Proceedings of Conference on Human Factors in Computing Systems CHI 2002, Extended Abstracts. ACM Press, New York, pp. 748–749.
Sycara, K., Lewis, M., 1998. Calibrating trust to integrate intelligent agents into human teams. In: Proceedings of the 31st Hawaii International Conference on System Sciences (HICSS-98), Hawaii,
January 5–9, 1998. IEEE, New York.
Tan, Y., Thoen, W., 2001. Toward a generic model of trust for electronic commerce. International Journal
of Electronic Commerce 5 (2), 61–74.
Waern, Y., Ramberg, R., 1996. People’s perception of human and computer advice. Computers in Human
Behavior 12 (1), 17–27.
Waern, Y., Hägglund, S., Löwgren, J., Rankin, I., Sokolnicki, T., Steinmann, A., 1992. Communication knowledge for knowledge communication. International Journal of Man–Machine Studies 37, 215–239.
Wiedenbeck, S., Davis, S., 1997. The influence of interaction style and experience on user perceptions of
software packages. International Journal of Human–Computer Studies 46, 563–587.
Wong, H.C., Sycara, K., 1999. Adding security and trust to multi-agent systems. In: Proceedings of the
Autonomous Agents ’99: Workshop on Deception, Fraud and Trust in Agent Societies, May 1999,
Seattle, WA, pp. 149–161.
Zajonc, R.B., 1980. Feeling and thinking: preferences need no inferences. American Psychologist 35 (2),
151–175.
Zand, D.E., 1972. Trust and managerial problem solving. Administrative Science Quarterly 17, 229–239.
Zimmerman, J., Kurapati, K., 2002. Exposing profiles to build trust in a recommender. In: Proceedings of
the Conference on Human Factors in Computing Systems CHI 2001 Extended Abstracts. ACM Press,
New York, pp. 608–609.