Comparing and Evaluating Agent-oriented Software Engineering Approaches


It is difficult to compare AOSE processes for a number of reasons. First of all, many of them are not universally applicable in the sense that they are focused on a specific class of systems, which makes it impossible to use the same case study in a comparison. Furthermore, if processes are based on specific meta-models, assumptions about the agent architecture, the implementation framework, or the resulting system design can differ dramatically. Since the outcome and the complexity of the processes depend heavily on the design, it becomes hard to make qualitative and quantitative statements. Finally, processes are always to some degree a matter of taste and their successful execution a matter of experience. Even the selection of evaluation criteria can be subjective and depend on the focus of the evaluation. For instance, Garcia et al. (2008) state that many comparisons do not take organisational aspects into account, a fact that seems natural considering that organisational issues have not been at the forefront of the respective investigations.

Many authors thus resort to comparing external properties such as the notation and structure of the processes or tool support (e.g., Al-Hashel et al., 2007). While these are important factors, it is doubtful that they allow comprehensive statements about the applicability and expressiveness of a software engineering process. A number of attempts have therefore been made to create objective evaluation frameworks that focus on the execution of the processes and use criteria rooted in their internals. Ideally, such frameworks are applied while a case study is executed in parallel with different processes (as done, e.g., by Abdelaziz et al. (2007) and DeLoach et al. (2009)). Even in such cases, however, the results of the comparison are anecdotal evidence at best and remain deeply influenced by the investigators' experience with the individual processes and their personal preferences. Furthermore, since AOSE methodologies tend to be rather specific, it is difficult to perform comparative studies by developing the same system: all applied methodologies would have to match the requirements of the system, e.g., with regard to the selected implementation platform or the meta-model used.

A comparative case study would, however, be the ideal evaluation: if it were possible to start a number of parallel development processes in which teams create the same system based on the same requirements, it would at least be possible to determine how well a methodology works in that specific context. The time and money required to complete the project would be the final evaluation criterion. Of course, such a comparison is infeasible, since the application of a process always depends strongly on the knowledge of the software engineers participating in the project, and the success of a software development effort is hugely dependent on human factors (Nah et al., 2001; Cockburn and Highsmith, 2001). Since it is impossible to compose three teams with the same knowledge, dedication, and leadership, a truly comparative study cannot be conducted this way. Efforts that have been made in this direction have either had one team develop the same case study with several methodologies, thus neglecting learning effects and prior familiarity (e.g., Al-Hashel et al., 2007), or had different teams develop it, each with significant experience in the methodology applied (most prominently DeLoach et al., 2009).

The most comprehensive comparison of agent-oriented methodologies to date has been conducted by Tran and Low (2005). The authors use a feature analysis approach based on a combination of evaluation criteria defined in previous studies, both for traditional, object-oriented approaches and for agent-oriented methodologies. The framework groups these criteria into four areas.

Notably, to evaluate coverage of the development process, the authors identified 19 commonly used development steps whose necessity was confirmed by a survey conducted among experts. While the overall evaluation suffers from some of the drawbacks pointed out above (e.g., the assessment of ``ease of understanding of process steps''), the feature analysis approach and the very in-depth analysis of the processes let the study stand out in terms of quality and comprehensiveness. The finding, however, is that no single existing approach is ideal: none of them satisfies all criteria of the catalogue. A more detailed look at the evaluation criteria used by Tran and Low (2005) is given here, and an evaluation of existing AOSE processes, especially in relation to PosoMAS, is provided here.
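
To illustrate how such a feature analysis can be tallied, the following minimal sketch aggregates ordinal ratings of methodologies per criterion area. The methodology names, criteria, area labels, and scores are purely illustrative placeholders and do not reproduce the actual criteria or ratings of Tran and Low (2005).

from collections import defaultdict

# Ordinal ratings on a simple scale: 0 = not supported, 1 = partial, 2 = full support.
# All names and values are hypothetical examples, not data from Tran and Low (2005).
ratings = {
    "Methodology A": {"lifecycle coverage": 2, "notation clarity": 1, "tool support": 0},
    "Methodology B": {"lifecycle coverage": 1, "notation clarity": 2, "tool support": 2},
}

# Each criterion is assigned to one area of the evaluation framework (labels are placeholders).
criterion_area = {
    "lifecycle coverage": "process-related",
    "notation clarity": "model-related",
    "tool support": "supportive features",
}

def summarise(ratings, criterion_area):
    """Aggregate each methodology's ratings per criterion area."""
    summary = defaultdict(lambda: defaultdict(int))
    for methodology, scores in ratings.items():
        for criterion, score in scores.items():
            summary[methodology][criterion_area[criterion]] += score
    return summary

for methodology, per_area in summarise(ratings, criterion_area).items():
    print(methodology, dict(per_area))

Such a tally makes gaps per area visible at a glance, but it inherits the subjectivity of the underlying ratings discussed above.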


Copyright

This material is made available under the Creative Commons Attribution-ShareAlike License v3.0.

© Copyright 2013, 2014 by Institute for Software & Systems Engineering, University of Augsburg
Contributors
Contact: Jan-Philipp Steghöfer