Concept: Trust in Organic Computing Systems
In Organic Computing systems, trust has a broader meaning than in multi-agent systems and encompasses a number of facets.
Main Description

The definition of trust in Organic Computing (OC) systems differs from that used in multi-agent systems (MAS), since OC systems have a number of properties that are not commonly found in MAS. Trust in OC systems therefore encompasses a wider range of aspects that contribute to the overall trustworthiness of the system.

Organic Computing Systems

Organic Computing systems are highly dynamic, are composed of a possibly vast number of adaptable components, and operate in an ever-changing environment. To cope with these circumstances, OC systems employ self-organisation mechanisms which yield a number of highly desirable properties, e.g., the ability to self-heal, to self-adapt, or to self-configure. However, classical techniques for the analysis and design of software systems are no longer suitable for systems of such complex structure. Novel aspects that could not be observed in other systems, such as emergent properties, and the extreme dynamics of OC systems require a new way to think about such systems as well as the development of new mechanisms to describe, measure and harness these properties.
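
As an illustration only (not taken from the cited paper), the following minimal Python sketch shows a single iteration of a self-adaptive feedback loop in the spirit of OC observer/controller designs; the class `SystemUnderObservation` and its thresholds are hypothetical assumptions:

```python
from dataclasses import dataclass

@dataclass
class SystemUnderObservation:
    """Hypothetical productive system: exposes metrics and accepts reconfiguration."""
    load: float = 0.5

    def metrics(self) -> dict:
        return {"load": self.load}

    def reconfigure(self, params: dict) -> None:
        # e.g., spawn or retire components, rebalance work (self-configuration)
        self.load = params.get("target_load", self.load)

def observer(system: SystemUnderObservation) -> dict:
    """Observe: aggregate raw metrics into a situation description."""
    m = system.metrics()
    return {"overloaded": m["load"] > 0.8, "load": m["load"]}

def controller(situation: dict, system: SystemUnderObservation) -> None:
    """Control: trigger self-adaptation when the observed situation demands it."""
    if situation["overloaded"]:
        system.reconfigure({"target_load": 0.5})  # return to nominal load

# One iteration of the loop; in a running system this would execute continuously.
sys_ = SystemUnderObservation(load=0.9)
controller(observer(sys_), sys_)
```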

Trust in Organic Computing Systems

Trust is a multi-faceted concept that incorporates all constituting entities and users of a system and thus enables cooperation in systems of distributed entities. It allows the entities to gauge the confidence they place in their interaction partners in a given context, and it evolves with the experiences of the entities over time. It comprises the following facets (an illustrative sketch of such a facet-based trust record follows the list):

  • Functional correctness: The quality of a system to adhere to its functional specification under the condition that no unexpected disturbances occur in the system’s environment.
  • Safety: The quality of a system to be free of the possibility of entering a state, or creating an output, that may cause harm to its users, to the system itself or parts of it, or to its environment.
  • Security: The absence of possibilities to compromise the system in ways that disclose private information, change or delete data without authorization, or unlawfully assume the authority to act on behalf of others in the system.
  • Reliability: The quality of a system to remain available for a specified period of time, even under disturbances or partial failure, as measured quantitatively by means of guaranteed availability, mean time between failures, or stochastically defined performance guarantees.
  • Credibility: The belief in the ability and willingness of a cooperation partner to participate in an interaction in a desirable manner. Also, the ability of a system to communicate with a user consistently and transparently.
  • Usability: The quality of a system to provide an interface to the user that can be used efficiently, effectively and satisfactorily, and that in particular takes user control, transparency and privacy into account.
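
For illustration only (not part of the cited definitions), the facets could be recorded as a simple vector per system or component and aggregated, e.g., with a weighted mean. All names, weights and figures in this Python sketch are assumptions, including the derivation of the reliability value from availability:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustFacets:
    """Illustrative container for the six facets, each normalised to [0, 1]."""
    functional_correctness: float
    safety: float
    security: float
    reliability: float
    credibility: float
    usability: float

    def overall(self, weights: Optional[dict] = None) -> float:
        """Weighted mean of the facets; equal weights by default (an assumption,
        not a prescription of the cited paper)."""
        facets = vars(self)
        weights = weights or {name: 1.0 for name in facets}
        total = sum(weights.values())
        return sum(weights[name] * value for name, value in facets.items()) / total

# Example: a reliability value derived from availability = MTBF / (MTBF + MTTR).
mtbf, mttr = 950.0, 50.0              # hours, purely illustrative figures
availability = mtbf / (mtbf + mttr)   # 0.95
record = TrustFacets(0.9, 0.95, 0.8, availability, 0.7, 0.85)
print(round(record.overall(), 3))
```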

Many of the aspects of the term trust as used in the literature on multi-agent systems and artificial societies are subsumed under credibility. Trust models and mechanisms are used to identify malicious or selfish agents and to exclude them from interactions, or to enact special measures when forced to interact with them. This includes “liars”: agents that claim to provide a certain quality of service but do not deliver, or that try to deceive other agents or the user in similar ways.
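
As a minimal sketch of such an experience-based credibility mechanism (not the method of the cited paper), the following Python code maintains an exponential moving average of interaction ratings per partner and excludes partners that fall below a threshold; the learning rate, prior and threshold are assumptions:

```python
class CredibilityModel:
    """Minimal experience-based credibility: exponential moving average per partner."""

    def __init__(self, learning_rate: float = 0.2, exclusion_threshold: float = 0.3):
        self.learning_rate = learning_rate
        self.exclusion_threshold = exclusion_threshold
        self.scores: dict[str, float] = {}   # partner id -> credibility in [0, 1]

    def record_experience(self, partner: str, rating: float) -> None:
        """Update credibility with a new interaction rating in [0, 1]."""
        old = self.scores.get(partner, 0.5)  # neutral prior for unknown partners
        self.scores[partner] = (1 - self.learning_rate) * old + self.learning_rate * rating

    def is_excluded(self, partner: str) -> bool:
        """Exclude partners whose credibility dropped below the threshold,
        e.g., 'liars' that repeatedly fail to deliver the promised quality of service."""
        return self.scores.get(partner, 0.5) < self.exclusion_threshold

model = CredibilityModel()
for _ in range(5):
    model.record_experience("agent_42", 0.1)  # repeatedly poor service
print(model.is_excluded("agent_42"))          # True once the score decays below 0.3
```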

(Taken from Steghöfer, J.-P., Kiefhaber, R., Leichtenstern, K., Bernard, Y., Klejnowski, L., Reif, W., Ungerer, T., André, E., Hähner, J. & Müller-Schloer, C. (2010), Trustworthy organic computing systems: Challenges and perspectives, in B. Xie, J. Branke, S. Sadjadi, D. Zhang & X. Zhou, eds, ‘Autonomic and Trusted Computing’, Vol. 6407 of Lecture Notes in Computer Science, Springer Berlin / Heidelberg, pp. 62–76.)