Concept: Trust in Multi-Agent Systems
The concept of trust in a multi-agent system describes an agent's expectation of another agent's behaviour. It is often quantified as a trust value.
Main Description

Mui et al. (see Bibliography for Trust and Reputation in Multi-Agent Systems) define trust as "a subjective expectation an agent has about another's future behaviour based on the history of their encounters". Trust depends on experience and is subject to change over time. When two people meet, their attitude towards each other is influenced by previous encounters in a similar context. They may have a positive or a negative default attitude towards their counterpart and may thus be more or less inclined to cooperate, a phenomenon termed basic trust or initial trust. This initial trust is gradually replaced by experiential trust accumulated from the evaluations of interactions with the new counterpart. If no interactions take place between two individuals for some time, trust slowly deteriorates and has to be rebuilt. Often, initial trust can be supplemented by organisational measures such as reputation, where the lack of experience with a new interaction partner is compensated for by the knowledge of other, more experienced members of the system.
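
To make this dynamic concrete, the following sketch shows one possible way an agent could combine initial trust, experiential trust, and decay. The class name, parameters, and update rules are illustrative assumptions for this description, not a model prescribed by the source:

    import math

    class TrustModel:
        """Illustrative sketch only: experiential trust gradually replaces
        initial trust, and trust reverts towards the initial value when no
        interactions occur. Names and parameters are assumptions."""

        def __init__(self, initial_trust=0.5, learning_rate=0.2, decay_rate=0.01):
            self.initial_trust = initial_trust  # basic trust before any encounters
            self.trust = initial_trust
            self.learning_rate = learning_rate  # weight of each new experience
            self.decay_rate = decay_rate        # deterioration per unit of idle time

        def record_interaction(self, outcome):
            """Blend the evaluation of a new encounter (0.0 = bad, 1.0 = good)
            into the accumulated experiential trust."""
            self.trust += self.learning_rate * (outcome - self.trust)

        def decay(self, idle_time):
            """Without interactions, trust slowly deteriorates, i.e. drifts
            back towards the initial (basic) trust level."""
            w = math.exp(-self.decay_rate * idle_time)
            self.trust = w * self.trust + (1.0 - w) * self.initial_trust

A higher learning rate makes experiential trust dominate sooner; a higher decay rate models relationships that must be rebuilt more quickly after periods without contact.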

Trust is usually quantified by a trust value. This value is calculated by a trust model from the experiences an agent has gathered. In systems in which interactions are infrequent, a reputation value, calculated from the experiences of other agents, can be used instead.
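
One simple way to derive such a reputation value is to aggregate the reports of other agents, weighted by how much the requesting agent trusts each reporter. The function below is a hypothetical helper for illustration; the source does not prescribe a particular aggregation scheme:

    def reputation(reports):
        """Aggregate a reputation value from other agents' experiences.
        `reports` is a list of (trust_in_reporter, reported_value) pairs;
        reports from more trusted agents carry more weight.
        Illustrative only, not a scheme from the source."""
        total_weight = sum(w for w, _ in reports)
        if total_weight == 0:
            return None  # no usable reports; fall back to initial trust
        return sum(w * v for w, v in reports) / total_weight

For example, reputation([(0.9, 0.8), (0.3, 0.2)]) weights the well-trusted reporter's positive experience far more heavily than the poorly-trusted reporter's negative one.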

A more general view is that trust is a means of measuring uncertainty. The behaviour of another agent is uncertain, especially if no interaction has taken place yet. A trust or reputation value can reduce this uncertainty by providing a numerical estimate of the potential utility of an interaction. This notion is not limited to interactions between agents but extends to other sources of uncertainty, such as predictions or sensor readings.
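
As a worked example of this reading, a trust value can be treated as an estimate of the probability that an interaction succeeds and plugged into an expected-utility calculation. The decision rule below is an assumption for illustration, not a formula from the source:

    def expected_utility(trust_value, gain, loss):
        """Treat the trust value as the estimated probability of a
        successful interaction, turning behavioural uncertainty into
        an expected utility. Illustrative decision rule only."""
        return trust_value * gain - (1.0 - trust_value) * loss

    # Interact only if the expected utility is positive, e.g.:
    # expected_utility(0.7, gain=10.0, loss=5.0) == 5.5  -> cooperate
    # expected_utility(0.2, gain=10.0, loss=5.0) == -2.0 -> decline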

(Description adapted from Steghöfer, J.-P., Kiefhaber, R., Leichtenstern, K., Bernard, Y., Klejnowski, L., Reif, W., Ungerer, T., André, E., Hähner, J. & Müller-Schloer, C. (2010), Trustworthy organic computing systems: Challenges and perspectives, in B. Xie, J. Branke, S. Sadjadi, D. Zhang & X. Zhou, eds, ‘Autonomic and Trusted Computing’, Vol. 6407 of Lecture Notes in Computer Science, Springer Berlin / Heidelberg, pp. 62–76. The original source also includes references to the concepts mentioned in the description.)
