Rational Trust Modeling
Mehrdad Nojoumian
Department of Computer & Electrical Engineering and Computer Science
Florida Atlantic University, Boca Raton, Florida, USA
mnojoumian@fau.edu
Abstract. Trust models are widely used in various computer science disciplines.
The main purpose of a trust model is to continuously measure the trustworthiness of
a set of entities based on their behaviors. In this article, the novel notion of ratio-
nal trust modeling is introduced by bridging trust management and game theory.
Note that trust models/reputation systems have been used in game theory (e.g.,
repeated games) for a long time; however, game theory has not been utilized in
the process of trust model construction, and this is where the novelty of our approach
comes from. In our proposed setting, the designer of a trust model assumes that
the players who intend to utilize the model are rational/selfish, i.e., they decide
to become trustworthy or untrustworthy based on the utility that they can gain.
In other words, the players are incentivized (or penalized) by the model itself
to act properly. The problem of trust management can then be approached by
game theoretical analyses and solution concepts such as Nash equilibrium. Al-
though rationality might be built into some existing trust models, we intend to
formalize the notion of rational trust modeling from the designer's perspective.
This approach will result in two fascinating outcomes. First of all, the designer
of a trust model can incentivize trustworthiness in the first place by incorporat-
ing proper parameters into the trust function, which can be later utilized among
selfish players in strategic trust-based interactions (e.g., e-commerce scenarios).
Furthermore, using a rational trust model, we can prevent many well-known at-
tacks on trust models. These two prominent properties also help us to predict
the behavior of the players in subsequent steps by game theoretical analyses.
1 Introduction
The main purpose of a trust model is to continuously measure the trustworthiness of a
set of entities (e.g., servers, sellers, agents, nodes, robots, players, etc.) based on their
behaviors. Indeed, scientists across various disciplines have conducted research on trust
over decades and produced fascinating discoveries; however, not only is there a huge
gap among the findings of these research communities, but these discoveries have also
not been properly formalized to provide a better understanding of the notion of trust, and
consequently, practical computational models of trust. We therefore intend to look at
the problem of trust modeling from an interdisciplinary perspective that is more realistic
and closer to human comprehension of trust.
From a social science perspective, trust is the willingness of a person to become
vulnerable to the actions of another person irrespective of the ability to control those
actions [1]. However, in the computer science community, trust is defined as a personal
expectation that a player has with respect to the future behavior of another party, i.e.,
a personal quantity measured to help the players in their future dyadic encounters. On
the other hand, reputation is the perception that a player has with respect to another
player's intention, i.e., a social quantity computed based on the actions of a given player
and observations made by other parties in an electronic community that consists of
interacting parties such as people or businesses [2].
From another perspective [3], trust is made up of underlying beliefs and it is a
function based on the values of these beliefs. Similarly, reputation is a social notion
of trust. In our lives, we each maintain a set of reputation values for people we know.
Furthermore, when we decide to establish an interaction with a new person, we may
ask other people to provide recommendations regarding the new party. Based on the
information we gather, we form an opinion about the reputation of the new person. This
decentralized method of reputation measurement is called a referral chain. Trust can
also be derived from local and/or social evidence. In the former case, trust is built
through direct observations of a player whereas, in the latter case, it is built through
information from other parties. It is worth mentioning that a player can gain or lose her
reputation not only because of her cooperation/defection in a specific setting but also
based on the ability to produce accurate referrals.
Generally speaking, the goal of reputation systems is to collect, distribute and aggre-
gate feedback about participants' past behavior. These systems address the development
of reputation by recording the behavior of the parties, e.g., in e-commerce, the model
of reputation is constructed from a buying agent's positive or negative past experiences
with the goal of predicting how satisfied a buying agent will be in future interactions
with a selling agent. The ultimate goal is to help the players decide whom to trust and
to detect dishonest or compromised parties in a system [4]. There exist many fasci-
nating applications of trust models and reputation systems in various engineering and
computer science disciplines.
To the best of our knowledge, there is no literature on rational trust modeling, that
is, using game theory during the construction of a trust model. However, trust models
are widely used in various scientific and engineering disciplines such as electronic com-
merce [5,6,7,8,9,10], computer security and rational cryptography [11,12,13,14,15,16],
vehicular ad-hoc networks [17,18], social/semantic web [19,20], multiagent systems
[21,22,23], robotics/autonomous systems [24,25], game theory and economics [26,27].
In our setting, the designer of a trust model assumes that the players who intend to
utilize the model are rational/selfish meaning that they cooperate to become trustworthy
or defect otherwise based on the utility (to be defined by the trust model) that they can
gain, which is a reasonable and standard assumption. In other words, the players are
incentivized (or penalized) by the model itself to act properly. The problem of trust
modeling can then be approached by strategic games among the players using utility
functions and solution concepts such as Nash equilibrium.
Although rationality might be built into some existing trust models, we formalize
the notion of rational trust modeling from the model designer's perspective. This ap-
proach results in two fascinating outcomes. First of all, the designer of a trust model can
incentivize trustworthiness in the first place by incorporating proper parameters into the
trust function, which can be later utilized among selfish players in strategic trust-based
interactions (e.g., e-commerce scenarios between sellers and buyers). Furthermore, us-
ing a rational trust model, we can prevent many well-known attacks on trust models, as
we describe later. These two prominent properties also help us to predict the behavior of
the players in subsequent steps by game theoretical analyses.
Suppose there exist two sample trust functions. The first function f_1(T_i^{p-1}, α_i) receives
the previous trust value T_i^{p-1} and the current action α_i of a seller S_i (i.e., cooperation
or defection) as two inputs to compute the updated trust value T_i^p for the next round.
However, the second function f_2(T_i^{p-1}, α_i, ℓ_i) has an extra input value known as the
seller's lifetime, denoted by ℓ_i. Using the second trust function, a seller with a longer
lifetime will be rewarded (or penalized) more (or less) than a seller with a shorter life-
time, assuming that the other two inputs (i.e., the current trust value and the action) are the
same. In this scenario, reward means gaining a higher trust value and becoming more
trustworthy, and penalty means the opposite. In other words, if two sellers S_i and S_j both
cooperate (α_i = α_j = C) and their current trust values are equal (T_i^{p-1} = T_j^{p-1}) but their
lifetime parameters are different, for instance ℓ_i > ℓ_j, then the seller with the higher lifetime
parameter, S_i, gains a higher trust value for the next round, i.e., T_i^p > T_j^p. This may help
S_i to sell more items and accumulate more revenue because buyers always prefer to buy
from trustworthy sellers, i.e., sellers with a higher trust value.
Now consider a situation in which the sellers can sell defective versions of an item
with more revenue or non-defective versions of the same item with less revenue. If
we utilize the first sample trust function f1 , it might be tempting for a seller to sell
defective items because he can gain more utility. Furthermore, the seller can return to
the community with a new identity (a.k.a. the re-entry attack) after selling defective items
and accumulating a large revenue. However, if we use the second sample trust function
f_2, it is no longer in a seller's best interest to sell defective items because, if he returns
to the community with a new identity, his lifetime parameter becomes zero and he
loses all the credit that he has accumulated over time. As a result, he loses his future
customers and a huge potential revenue, i.e., buyers always prefer a seller with a longer
lifetime over a seller who is a newcomer. The second trust function thus not only incentivizes
trustworthiness but also prevents the re-entry attack.
Note that this is just an example of rational trust modeling for the sake of clar-
ification. The second sample function here utilizes an extra parameter ℓ_i in order to
incentivize trustworthiness and prevent the re-entry attack. In fact, different parameters
can be incorporated into trust functions based on the context (whether it is a scenario
in e-commerce or cybersecurity, and so on), and consequently, different attacks can be
prevented, as discussed in Section 3.
(Figure: the trust value of a player ranges over [−1, +1]; cooperation increases the trust value and defection decreases it.)
1. Behavioral: how the model performs among a large enough number of players by
running a number of standard tests, i.e., executing a sequence of cooperation
and defection (or no-participation) actions for each player; for instance, how fast the
model can detect defective behavior by creating a reasonable trust margin between
cooperative and non-cooperative parties (a minimal simulation sketch follows this list).
2. Adversarial: how vulnerable the trust model is to different attacks or any kind of
corruption by a player or a coalition of malicious parties. Seven well-known attacks
on trust models are listed below. The first five attacks are known as single-agent
attacks and the last two are known as multi-agent or coalition attacks [29].
(a) Sybil: forging identities or creating multiple false accounts by one player.
(b) Lag: cooperating for some time to gain a high trust value and then cheating.
(c) Re-Entry: corrupted players return to the scheme with new identities.
(d) Imbalance: cooperating on cheap transactions; defecting on expensive ones.
(e) Multi-Tactic: any combination of aforementioned attacks.
(f) Ballot-Stuffing: fake transactions among colluders to gain a high trust value.
(g) Bad-Mouthing: submitting negative reviews about non-coalition members.
3. Operational: how well the future states of trust can be predicted with a relatively
accurate approximation based on possible action(s) of the players (prediction can
help us to prevent some well-known attacks), and how well cooperation can be
incentivized by the model in the first place.
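To make the behavioral criterion above concrete, the following minimal sketch (not part of the original model; the bounded linear update rule and all parameter values are illustrative assumptions) simulates an always-cooperative player, an always-defecting player, and a lag attacker, and reports the resulting trust margin.

```python
# Illustrative behavioral test harness: how quickly does a toy trust update
# separate cooperative from non-cooperative players?  The update rule below is
# an assumed bounded linear reward/penalty, not the paper's trust function.

def update_trust(trust, cooperated, alpha=0.1):
    """Add alpha on cooperation, subtract alpha on defection, clamp to [-1, +1]."""
    trust += alpha if cooperated else -alpha
    return max(-1.0, min(1.0, trust))

def run_sequence(actions, trust=0.0):
    """Apply a sequence of actions (True = cooperate, False = defect)."""
    for a in actions:
        trust = update_trust(trust, a)
    return trust

rounds = 20
honest  = run_sequence([True] * rounds)            # always cooperates
cheater = run_sequence([False] * rounds)           # always defects
lagger  = run_sequence([True] * 15 + [False] * 5)  # cooperates, then cheats (lag attack)

print(f"honest : {honest:+.2f}")
print(f"cheater: {cheater:+.2f}")
print(f"lagger : {lagger:+.2f}")
print(f"trust margin (honest - cheater): {honest - cheater:.2f}")
```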
In the next section, we clarify what considerations should be taken into account by the
designer in order to construct a proper trust model that resists various attacks
and also encourages trustworthiness in the first place.
1. Cooperation: selling the non-defective version of the item for $3 to different buyers.
2. Defection: selling the defective version of the item for $2 to different buyers.
Assuming that the buyers are not aware of the existence of the defective version of
the item, they may prefer to buy from the seller who offers the lowest price. This is
a pretty natural and standard assumption. As a result, the seller who offers the lowest
price has the highest chance to sell the item, and consequently, he can gain more utility.
An appropriate pay-off function/matrix can be constructed for this seller's dilemma
based on the probability of being selected by a buyer, since there is a correlation between
the offered price and this probability, as shown in Figure 2. In other words, if both sellers
offer the same price ($2 or $3), they have an equal chance of being selected by a buyer;
otherwise, the seller who offers the lower price ($2) will be selected with probability 1.
Figure 2. The seller's dilemma: each cell lists the probabilities that S1 and S2, respectively, are selected by a buyer.

                              S2: C (non-defective, $3)    S2: D (defective, $2)
  S1: C (non-defective, $3)         0.5 , 0.5                      0 , 1
  S1: D (defective, $2)               1 , 0                      0.5 , 0.5
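As a sketch of how the selection probabilities in Figure 2 can be computed mechanically (an illustration only; the $2/$3 prices and the even tie-breaking rule are taken from the example above), a buyer who is unaware of defects simply picks the cheaper offer:

```python
# Selection probabilities for the seller's dilemma of Figure 2: a buyer who is
# unaware of the defective version picks the cheaper offer; ties split 50/50.

PRICE = {"C": 3, "D": 2}   # C = non-defective for $3, D = defective for $2

def selection_prob(action_1, action_2):
    """Return (p1, p2): each seller's probability of being selected by a buyer."""
    price_1, price_2 = PRICE[action_1], PRICE[action_2]
    if price_1 < price_2:
        return 1.0, 0.0
    if price_1 > price_2:
        return 0.0, 1.0
    return 0.5, 0.5

for a1 in ("C", "D"):
    row = ["%.1f , %.1f" % selection_prob(a1, a2) for a2 in ("C", "D")]
    print(a1, "|", " | ".join(row))
# Prints the 2x2 matrix of Figure 2: (0.5, 0.5), (0, 1) / (1, 0), (0.5, 0.5).
```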
It is not hard to show that, by using function f_2 rather than function f_1, defection is
no longer a Nash equilibrium in the seller's dilemma, as we illustrate in Section 2.3. When
we assume the sellers are rational/selfish and decide based on their utility functions,
we can then design a proper trust function, similar to f_2, to incentivize cooperation in the
first place. Furthermore, we can deal with a wide range of attacks, as we mentioned
earlier. Finally, at any point, the behavior of a seller can be predicted by estimating his
pay-off through the trust and utility functions.
In the following equations, the first function f_1 does not depend on the seller's
lifetime ℓ_i; however, the second function f_2 has an extra term that is defined by the lifetime
ℓ_i and a constant β. In both functions, the fraction |α_i|/α_i is +1 when the action α_i is positive
(cooperation) and −1 when it is negative (defection), so a seller is rewarded or penalized by α
according to his action. We can assume that βℓ_i is in the same range as α, depending on the
player's lifetime. Also, the lifetime term is always positive, meaning that no matter whether a player
cooperates or defects, he is always rewarded by βℓ_i. We stress that the parameter ℓ_i in function
f_2 is just an example of how a rational trust function can be designed. The designer
can simply consider various parameters (that denote different concepts) as additive or
multiplicative factors based on the context in which the trust model is supposed to be
utilized. We discuss this issue later in Section 3 in detail.
$$ f_1:\; T_i^{p} = T_i^{p-1} + \alpha\,\frac{|\alpha_i|}{\alpha_i} \tag{3} $$

$$ f_2:\; T_i^{p} = T_i^{p-1} + \alpha\,\frac{|\alpha_i|}{\alpha_i} + \beta\,\ell_i \tag{4} $$
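A direct transcription of these two sample functions is sketched below; the action α_i is encoded as +1 for cooperation and −1 for defection so that |α_i|/α_i = ±1, and the values of α, β, and the lifetimes are illustrative placeholders rather than values prescribed by the model.

```python
# Sample trust functions (3) and (4): f2 adds a lifetime-dependent reward term.
# A complete model would also keep the trust value inside [-1, +1].

def f1(prev_trust, action, alpha=0.1):
    """T_i^p = T_i^{p-1} + alpha * |a_i| / a_i, with action a_i in {+1, -1}."""
    return prev_trust + alpha * abs(action) / action

def f2(prev_trust, action, lifetime, alpha=0.1, beta=0.05):
    """T_i^p = T_i^{p-1} + alpha * |a_i| / a_i + beta * l_i."""
    return f1(prev_trust, action, alpha) + beta * lifetime

# Two sellers with equal trust values and the same action but different lifetimes:
t_i = f2(prev_trust=0.4, action=+1, lifetime=6)   # long-lived seller
t_j = f2(prev_trust=0.4, action=+1, lifetime=1)   # newcomer
print(t_i, t_j)   # 0.8 vs 0.55: the longer-lived seller ends up more trusted
```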
The first function f_1 rewards or penalizes the sellers based on their actions, inde-
pendently of their lifetimes. This makes function f_1 vulnerable to different attacks such as
the re-entry attack, because a malicious seller can always come back to the scheme with
a new identity and then start rebuilding his reputation for another malicious activity.
It is possible to make the sign-up procedure costly, but this is outside the scope of this paper.
On the other hand, the second trust function f_2 has an extra term that is defined
by the seller's lifetime ℓ_i. This term is adjusted by the constant β as an additional reward or
punishment factor in the trust function. In other words, the seller's current lifetime ℓ_i,
together with a constant (in the case of cooperation/defection), determines the extra
reward/punishment factor. As a result, it is not in the best interest of a seller to reset his
lifetime indicator ℓ_i to zero for the sake of a short-term utility. This lifetime indicator can
increase the seller's trustworthiness, and consequently, his long-term utility over time.
Let us assume our sample utility function is further extended as follows, where Ω is a
constant; for instance, Ω can be $100:
$$ u_i = \Omega\,\frac{T_i^{p} + 1}{2}, \qquad \text{where } 0 \le \frac{T_i^{p}+1}{2} \le 1 \text{ and } -1 \le T_i^{p} \le +1 \tag{5} $$
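In code, the mapping of Eqn. (5) is a one-liner; Ω = $100 follows the example in the text, while everything else here is illustrative:

```python
# Eqn. (5): the trust value in [-1, +1] is mapped to a selection probability in
# [0, 1], and the expected utility grows linearly with that probability.

OMEGA = 100.0   # constant from the text, e.g., $100

def utility(trust):
    """u_i = Omega * (T_i^p + 1) / 2, valid for trust values in [-1, +1]."""
    assert -1.0 <= trust <= 1.0
    return OMEGA * (trust + 1.0) / 2.0

print(utility(-1.0), utility(0.0), utility(+1.0))   # 0.0  50.0  100.0
```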
The utility function simply indicates that a seller with a higher trust value (which
depends on his lifetime indicator as well) can gain more utility because he has a higher
chance of being selected by the buyers. In other words, Eqn. (5) maps the current trust value
T_i^p to a value between zero and one, which can also be interpreted as the probability of
being selected by the buyers. For the sake of simplicity, suppose T_i^{p-1} is canceled out
in both f_1 and f_2 as a common term. The overall utility U_i^{f_1} is shown below when f_1 is
used. Note that u_i computes the utility of a seller in the case of cooperation or defection,
whereas U_i also takes into account the external utility or future loss that a seller may
gain or lose, for instance, more savings through selling the defective version of an item
instead of its non-defective version.
$$ U_i^{f_1} = \begin{cases} \Omega\,\dfrac{+\alpha+1}{2} & \text{using } f_1 \text{ when } S_i \text{ cooperates } (\alpha_i > 0) \\[2mm] \Omega\,\dfrac{-\alpha+1}{2} + \Delta & \text{using } f_1 \text{ when } S_i \text{ defects } (\alpha_i < 0) \end{cases} $$

where Δ is the external utility that the seller obtains by selling the defective item.
As shown in U_i^{f_1}, function f_1 rewards/penalizes sellers in each period by the factor Ωα/2.
Accordingly, we can assume that the external utility Δ that the seller obtains by selling the
defective item is slightly more than the utility that the seller may lose by defection;
otherwise, he wouldn't defect. That is, Δ = Ωα/2 + |−Ωα/2| + ε = Ωα + ε; note that the seller
not only loses a potential reward Ωα/2 but is also penalized by the factor Ωα/2.
As a result, Ω(−α+1)/2 + Δ = Ω(−α+1)/2 + (Ωα + ε) = Ω(α+1)/2 + ε. Therefore, defection is always
a Nash equilibrium when f_1 is used, as shown in Table 1. We can assume the seller cheats
for a number of rounds until he is labeled as an untrustworthy seller. At this point, he leaves and
returns with a new identity with the same initial trust value as newcomers, i.e., the re-entry
attack. Our analysis remains the same even if cheating is repeated for multiple rounds.
Table 1. The seller's dilemma when trust function f_1 is used.

                       S2: Cooperation                     S2: Defection
  S1: Cooperation      Ω(α+1)/2 , Ω(α+1)/2                 Ω(α+1)/2 , Ω(α+1)/2 + ε
  S1: Defection        Ω(α+1)/2 + ε , Ω(α+1)/2             Ω(α+1)/2 + ε , Ω(α+1)/2 + ε
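The sketch below reproduces Table 1 numerically and checks which action profiles are Nash equilibria; Ω, α, and ε are arbitrary sample values, and the pay-off of each seller depends only on his own action, exactly as in the table.

```python
# Seller's dilemma under f1: cooperation pays Omega*(alpha+1)/2 and defection
# pays Omega*(alpha+1)/2 + eps, so defection strictly dominates cooperation.

OMEGA, ALPHA, EPS = 100.0, 0.1, 1.0   # illustrative values

def payoff_f1(action):
    coop = OMEGA * (ALPHA + 1) / 2
    return coop if action == "C" else coop + EPS

ACTIONS = ("C", "D")
table = {(a1, a2): (payoff_f1(a1), payoff_f1(a2)) for a1 in ACTIONS for a2 in ACTIONS}

def is_nash(profile):
    """No seller should gain by a unilateral deviation."""
    a1, a2 = profile
    u1, u2 = table[profile]
    return all(table[(d, a2)][0] <= u1 for d in ACTIONS) and \
           all(table[(a1, d)][1] <= u2 for d in ACTIONS)

for profile, payoffs in table.items():
    print(profile, payoffs, "<- Nash equilibrium" if is_nash(profile) else "")
# Only (D, D) is printed as a Nash equilibrium, matching Table 1.
```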
Without loss of generality, suppose the seller defects, leaves, and then comes back
with a new identity. As a result, the lifetime index ℓ_i becomes zero. Let us assume this
index is increased by the following arithmetic progression to reach where it was:
0, (1/5)ℓ_i, (2/5)ℓ_i, (3/5)ℓ_i, (4/5)ℓ_i, ℓ_i. In reality, it takes a while for a seller to accumulate this credit
based on our definition, i.e., years of existence and number of transactions. Therefore,

$$ \frac{\Omega\beta}{2}\Big[(\ell_i - 0) + \big(\ell_i - \tfrac{1}{5}\ell_i\big) + \big(\ell_i - \tfrac{2}{5}\ell_i\big) + \big(\ell_i - \tfrac{3}{5}\ell_i\big) + \big(\ell_i - \tfrac{4}{5}\ell_i\big) + (\ell_i - \ell_i)\Big] $$
$$ = \frac{\Omega\beta}{2}\Big[\ell_i + \tfrac{4}{5}\ell_i + \tfrac{3}{5}\ell_i + \tfrac{2}{5}\ell_i + \tfrac{1}{5}\ell_i + 0\Big] = \frac{3}{2}\,\Omega\beta\ell_i $$
E.g., (ℓ_i − (1/5)ℓ_i) denotes that the lifetime could be ℓ_i, or even more, but it is now (1/5)ℓ_i, meaning
that the seller is losing (4/5)ℓ_i, and so on. We now simplify U_i^{f_2} in the case of defection as follows:
$$ U_i^{f_2}:\; \Omega\,\frac{(-\alpha + \beta\ell_i) + 1}{2} + \Delta $$
$$ = \Omega\,\frac{(-\alpha + \beta\ell_i) + 1}{2} + \overbrace{\Omega\alpha + \epsilon - \tfrac{3}{2}\Omega\beta\ell_i}^{\Delta} \;=\; \Omega\,\frac{(+\alpha + \beta\ell_i) + 1}{2} + \epsilon - \tfrac{3}{2}\Omega\beta\ell_i $$
This is a simple but interesting result. It shows that, as long as (3/2)Ωβℓ_i > ε, cooperation
is always a Nash equilibrium when f_2 is used, as shown in Table 2. In other words, as long as the future
loss is greater than the short-term gain through defection, it is not in the best interest of
the seller to cheat and commit the re-entry attack; that is, the seller may gain a small
short-term utility by cheating, however, he loses a larger long-term utility because it
takes a while to reach ℓ_i from 0. The analysis remains the same if the seller cheats for a number of
rounds before committing the re-entry attack, as long as the future loss is greater
than the short-term gain. In fact, the role of the parameter ℓ_i is to make the future loss costly.
Table 2. The seller's dilemma when trust function f_2 is used; ω denotes the cooperation pay-off Ω((α + βℓ_i) + 1)/2.

                       S2: Cooperation                        S2: Defection
  S1: Cooperation      ω , ω                                  ω , ω + ε − (3/2)Ωβℓ_i
  S1: Defection        ω + ε − (3/2)Ωβℓ_i , ω                 ω + ε − (3/2)Ωβℓ_i , ω + ε − (3/2)Ωβℓ_i
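The same check under f_2, including the future loss (3/2)·Ω·β·ℓ_i recomputed from the lifetime progression above, shows that cooperation becomes the equilibrium once that loss exceeds ε; all numeric values are again illustrative.

```python
# Seller's dilemma under f2: defecting (and later re-entering) yields the
# short-term gain eps but costs the future loss (3/2)*Omega*beta*l_i while the
# lifetime parameter climbs back from zero.  Values are illustrative.

OMEGA, ALPHA, BETA, EPS, LIFETIME = 100.0, 0.1, 0.05, 1.0, 6.0

def future_loss():
    """(Omega*beta/2) times the lifetime shortfalls 1, 4/5, ..., 1/5, 0 of l_i."""
    shortfalls = [LIFETIME - (k / 5.0) * LIFETIME for k in range(6)]
    return (OMEGA * BETA / 2.0) * sum(shortfalls)      # = (3/2)*Omega*beta*l_i

def payoff_f2(action):
    coop = OMEGA * ((ALPHA + BETA * LIFETIME) + 1) / 2
    return coop if action == "C" else coop + EPS - future_loss()

ACTIONS = ("C", "D")
table = {(a1, a2): (payoff_f2(a1), payoff_f2(a2)) for a1 in ACTIONS for a2 in ACTIONS}

def is_nash(profile):
    a1, a2 = profile
    u1, u2 = table[profile]
    return all(table[(d, a2)][0] <= u1 for d in ACTIONS) and \
           all(table[(a1, d)][1] <= u2 for d in ACTIONS)

print("future loss =", future_loss())   # 45.0, far larger than eps = 1.0
for profile, payoffs in table.items():
    print(profile, payoffs, "<- Nash equilibrium" if is_nash(profile) else "")
# Only (C, C) is printed as a Nash equilibrium, matching Table 2.
```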
Table 3. Sample Parameters: deal with single-agent attacks by rational trust modeling.
4 Concluding Remarks
In this paper, the novel notion of rational trust modeling was introduced by bridging
trust management and game theory. In our proposed setting, the designer of a trust
model assumes that the players who intend to utilize the model are rational/selfish, i.e.,
they decide to become trustworthy or untrustworthy based on the utility that they can
gain. In other words, the players are incentivized (or penalized) by the model itself to
act properly. The problem of trust management can then be approached by strategic
games among the players using utility functions and solution concepts such as Nash equilibrium.
Our approach resulted in two fascinating outcomes. First of all, the designer of a
trust model can incentivize trustworthiness in the first place by incorporating proper
parameters into the trust function. Furthermore, using a rational trust model, we can
prevent many well-known attacks on trust models. These prominent properties also
help us to predict the behavior of the players in subsequent steps by game theoretical
analyses. As our final remark, we would like to emphasize that our rational trust modeling
approach can be extended to any mathematical modeling where some sort of utility
and/or rationality is involved.
References
1. Roger C. Mayer, James H. Davis, and F. David Schoorman. An integrative model of organiza-
tional trust. Academy of Management Review, 20(3):709–734, 1995.
2. Lik Mui, Mojdeh Mohtashemi, and Ari Halberstadt. Notions of reputation in multi-agent
systems: a review. In 1st ACM Int Joint Conf on Autonomous Agents and Multiagent Systems,
AAMAS '02, pages 280–287, 2002.
3. C. Castelfranchi and R. Falcone. Principles of trust for MAS: Cognitive anatomy, social
importance, and quantification. In 3rd International Conference on Multi Agent Systems,
pages 72–79. IEEE, 1998.
4. P. Resnick, K. Kuwabara, R. Zeckhauser, and E. Friedman. Reputation systems: facilitating
trust in internet interactions. Communications of the ACM, 43(12):45–48, 2000.
5. Jie Zhang, Robin Cohen, and Kate Larson. Combining trust modeling and mechanism design
for promoting honesty in e-marketplaces. Computational Intelligence, 28(4):549–578, 2012.
6. Xin Liu, Anwitaman Datta, Hui Fang, and Jie Zhang. Detecting imprudence of reliable
sellers in online auction sites. In 11th IEEE International Conference on Trust, Security and
Privacy in Computing and Communications, TrustCom '12, pages 246–253, 2012.
7. Lizi Zhang, Siwei Jiang, Jie Zhang, and Wee Keong Ng. Robustness of trust models and
combinations for handling unfair ratings. In 6th IFIP Int Conference on Trust Management,
volume 374, pages 36–51. Springer, 2012.
8. Joshua Gorner, Jie Zhang, and Robin Cohen. Improving the use of advisor networks for
multi-agent trust modelling. In 9th IEEE Annual Conference on Privacy, Security and Trust,
PST '11, pages 71–78, 2011.
9. Mehrdad Nojoumian and Timothy C. Lethbridge. A new approach for the trust calculation
in social networks. In E-business and Telecom Networks: 3rd International Conference on
E-Business, volume 9 of CCIS, pages 64–77. Springer, 2008.
10. Audun Jøsang, Roslan Ismail, and Colin Boyd. A survey of trust and reputation systems for
online service provision. Decision Support Systems, 43(2):618–644, 2007.
11. Mehrdad Nojoumian. Novel Secret Sharing and Commitment Schemes for Cryptographic
Applications. PhD thesis, Department of Computer Science, University of Waterloo, Canada, 2012.
12. Mehrdad Nojoumian and Douglas R. Stinson. Socio-rational secret sharing as a new direc-
tion in rational cryptography. In 3rd International Conference on Decision and Game Theory
for Security (GameSec), volume 7638 of LNCS, pages 18–37. Springer, 2012.
13. Mehrdad Nojoumian and Douglas R. Stinson. Social secret sharing in cloud computing using
a new trust function. In 10th IEEE Annual International Conference on Privacy, Security
and Trust (PST), pages 161–167, 2012.
14. Carol J. Fung, Jie Zhang, and Raouf Boutaba. Effective acquaintance management based on
Bayesian learning for distributed intrusion detection networks. IEEE Trans on Network &
Service Management, 9(3):320–332, 2012.
15. Mehrdad Nojoumian, Douglas R. Stinson, and Morgan Grainger. Unconditionally secure
social secret sharing scheme. IET Information Security (IFS), Special Issue on Multi-Agent
and Distributed Information Security, 4(4):202–211, 2010.
16. Mehrdad Nojoumian and Douglas R. Stinson. Brief announcement: Secret sharing based
on the social behaviors of players. In 29th ACM Symposium on Principles of Distributed
Computing (PODC), pages 239–240, 2010.
17. Q. Li, A. Malip, K. Martin, S. Ng, and J. Zhang. A reputation-based announcement scheme
for VANETs. IEEE Trans on Vehicular Technology, 61:4095–4108, 2012.
18. J. Zhang. A survey on trust management for VANETs. In 25th IEEE International Conf. on
Advanced Information Networking and Applications, AINA '11, pages 105–112, 2011.
19. J. Gorner, J. Zhang, and R. Cohen. Improving trust modelling through the limit of advisor
network size and use of referrals. E-Commerce Research & Applications, 2012.
20. L. Zhang, H. Fang, W.K. Ng, and J. Zhang. IntRank: Interaction ranking-based trustwor-
thy friend recommendation. In 10th IEEE International Conference on Trust, Security and
Privacy in Computing and Communications, TrustCom '11, pages 266–273, 2011.
21. Yonghong Wang and Munindar P. Singh. Evidence-based trust: A mathematical model
geared for multiagent systems. ACM Trans on Auto & Adaptive Sys., 5(4):1–28, 2010.
22. Yonghong Wang and Munindar P. Singh. Formal trust model for multiagent systems. In 20th
International Joint Conference on Artificial Intelligence, IJCAI '07, pages 1551–1556, 2007.
23. Yonghong Wang and Munindar P. Singh. Trust representation and aggregation in a dis-
tributed agent system. In 21st National Conf. on AI, AAAI '06, pages 1425–1430, 2006.
24. Matthew Aitken, Nisar Ahmed, Dale Lawrence, Brian Argrow, and Eric Frew. Assurances
and machine self-confidence for enhanced trust in autonomous systems. In RSS 2016 Work-
shop on Social Trust in Autonomous Systems, 2016.
25. Scott R. Winter, Stephen Rice, Rian Mehta, Ismael Cremer, Katie M. Reid, Timothy G. Rosser,
and Julie C. Moore. Indian and American consumer perceptions of cockpit configuration
policy. Journal of Air Transport Management, 42:226–231, 2015.
26. G.J. Mailath and L. Samuelson. Repeated games and reputations: long-run relationships.
Oxford University Press, USA, 2006.
27. L. Mui. Computational models of trust and reputation: Agents, evolutionary games, and
social networks. PhD thesis, Massachusetts Institute of Technology, 2002.
28. Mehrdad Nojoumian. Trust, influence and reputation management based on human reason-
ing. In 4th AAAI Workshop on Incentives and Trust in E-Communities, pages 21–24, 2015.
29. Reid Kerr. Addressing the Issues of Coalitions & Collusion in Multiagent Systems. PhD
thesis, University of Waterloo, 2013.