Getting Started
First of all, designers and developers should get a firm grasp of the concepts of trust and reputation in the context of multi-agent systems or Organic Computing systems. Refer to the reading material provided with this practice. Also make sure to understand which requirements warrant the introduction of a trust model and what infrastructure it requires. Collecting the data needed to feed a trust model and evaluating experiences as described in the trust lifecycle introduce an overhead into the system. Trust and reputation should therefore only be used in situations where their benefits outweigh the additional cost and complexity.
Once you have established that trust will be a part of your system, familiarise yourself with the trust lifecycle. It is also helpful to understand commonly used trust models and their semantics; the supporting material can assist in this regard. Using trust and reputation will strongly influence how agents make decisions in a number of situations, so a firm grasp of the consequences of introducing trust into a system is mandatory.
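To make the trust lifecycle concrete, the following sketch shows one common way to aggregate direct experiences into a trust value: an exponential moving average over interaction outcomes. This is an illustrative example only; the class and parameter names (TrustModel, alpha) are assumptions, not part of this practice, and real trust models often use richer semantics.

```python
class TrustModel:
    """Minimal sketch: trust as an exponential moving average of
    experience ratings in [0, 1]. Not a complete trust model."""

    def __init__(self, alpha=0.3, initial=0.5):
        self.alpha = alpha      # weight of the newest experience
        self.initial = initial  # neutral prior for unknown agents
        self.values = {}        # agent id -> current trust value

    def record_experience(self, agent_id, outcome):
        """outcome in [0, 1]: 1.0 means a fully satisfactory interaction."""
        old = self.values.get(agent_id, self.initial)
        self.values[agent_id] = (1 - self.alpha) * old + self.alpha * outcome

    def trust(self, agent_id):
        """Return the current trust value, or the neutral prior."""
        return self.values.get(agent_id, self.initial)
```

The recency weight alpha captures one of the semantic choices mentioned above: a large alpha makes the agent forgiving but volatile, a small alpha makes trust slow to build and slow to lose.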
Common Pitfalls
Testing trust-based interactions can be quite difficult. In particular, it is often very hard to foresee how sources of uncertainty will actually behave in a deployed system. It is therefore necessary to make as few assumptions as possible during system validation. As interactions can also take place between several agents at the same time, sufficiently large test scenarios have to be defined.
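One way to avoid baking assumptions about uncertainty into validation is to exercise the trust mechanism against a whole range of behaviour profiles rather than a single expected one. The sketch below, with assumed names and parameters (simulate, alpha, the EMA update), checks only a weak, assumption-free property: trust estimates should track the true reliability ordering of the simulated agents.

```python
import random

def simulate(reliability, n=500, alpha=0.1, seed=1):
    """Feed n random outcomes from an agent with the given true
    reliability into a moving-average trust value and return the
    resulting estimate. Purely an illustrative validation harness."""
    rng = random.Random(seed)
    trust = 0.5
    for _ in range(n):
        outcome = 1.0 if rng.random() < reliability else 0.0
        trust = (1 - alpha) * trust + alpha * outcome
    return trust

# Validate against several behaviour profiles, not just the benign one;
# assert only the ordering, which holds regardless of the noise model.
estimates = [simulate(r) for r in (0.1, 0.5, 0.9)]
assert estimates == sorted(estimates)
```

Scaling n and the number of profiles up yields the "sufficiently large test scenarios" called for above; the ordering check deliberately avoids asserting exact values that would depend on the very uncertainty being tested.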
As trust models are often tailored to a specific purpose, designers need to bear in mind that interactions and system
goals can change in long-lived self-organising systems, even if current requirements do not explicitly state this. The
agents and the observation infrastructure should thus be designed in such a way that new interactions can easily be
used in the trust lifecycle if necessary.
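One hypothetical way to keep the observation infrastructure open to new interactions is to decouple interaction types from the trust lifecycle through a registry of rating functions. All names here (ExperienceRegistry, the interaction types) are assumptions for illustration, not part of this practice.

```python
class ExperienceRegistry:
    """Sketch: maps interaction types to functions that rate an
    observation as an outcome in [0, 1] for the trust lifecycle."""

    def __init__(self):
        self._raters = {}  # interaction type -> rating function

    def register(self, interaction_type, rater):
        """rater maps an observation dict to an outcome in [0, 1]."""
        self._raters[interaction_type] = rater

    def rate(self, interaction_type, observation):
        if interaction_type not in self._raters:
            raise KeyError(f"no rater for {interaction_type!r}")
        return self._raters[interaction_type](observation)

registry = ExperienceRegistry()
registry.register("file_transfer",
                  lambda obs: 1.0 if obs["completed"] else 0.0)
# A later-added interaction type feeds the same lifecycle unchanged:
registry.register("compute_job",
                  lambda obs: max(0.0, 1.0 - obs["deadline_overrun"]))
```

With this separation, introducing a new interaction only requires registering a rating function; the experience evaluation and trust update stages stay untouched.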