Evalica: Supporting Software Evaluation Logistics

First published May 2013, Crosslink® magazine

Supporting the logistics of a software architecture evaluation can be cumbersome and error-prone, especially as the depth of the evaluation and the number of participants grow. The Aerospace software architecture evaluation framework consists of more than one thousand questions that could be part of an evaluation. The software evaluation team must tailor the evaluation questions, assign questions to individual evaluators, capture the answers to those questions, and roll up the results for presentation to stakeholders. While these tasks can be done with ordinary office software (e.g., word processors, spreadsheets) and a shared document repository, these tools offer little or no support for common evaluation tasks such as tracking the evaluation’s status, integrating responses from multiple evaluators, and modifying questions after the evaluation has begun.

To make supporting the logistics of a framework-based software architecture evaluation easier, Aerospace developed a tool called Evalica™. Evalica is a Web-based, database-driven tool that provides a shared, collaborative space where evaluators and other stakeholders can work through the lifecycle of a software evaluation. It supports three common evaluation activities:

Tailoring. All software evaluation questions are loaded into the Evalica database. From there, the questions can be used as-is, modified, or reorganized in Evalica. Questions can be annotated with metadata: guidance to evaluators indicating what to look for when answering a question or how to interpret it in different circumstances. Questions can also be annotated with user-defined tags that can later be used in search queries, allowing users to select subsets of the full question database more easily. In addition, while collaborative tailoring can be done directly in Evalica, users who prefer a spreadsheet interface can export the questions to Microsoft Excel or other spreadsheet software, edit them there, and reimport them into Evalica.
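
To make the tagging and metadata scheme concrete, the minimal Python sketch below shows one way a tagged question record and a tag-based filter might look. The class, field, and tag names are illustrative assumptions, not Evalica's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class Question:
        qid: str                                # unique question identifier
        text: str                               # the question posed to evaluators
        guidance: str = ""                      # metadata: what to look for, how to interpret
        tags: set = field(default_factory=set)  # user-defined tags used in search queries

    def select_by_tags(questions, required_tags):
        """Return the subset of questions that carry every required tag."""
        required = set(required_tags)
        return [q for q in questions if required <= q.tags]

    # Tailoring example: pull an "interfaces" subset out of a larger question bank.
    bank = [
        Question("Q-101", "Are interface control documents under configuration control?",
                 guidance="Look for a documented change-approval process.",
                 tags={"interfaces", "process"}),
        Question("Q-412", "How are fault-detection responsibilities allocated?",
                 tags={"fault-tolerance"}),
    ]
    print([q.qid for q in select_by_tags(bank, {"interfaces"})])  # ['Q-101']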

Capturing Responses. Once a framework of questions is set up, questions are assigned to evaluators. The same question can be assigned to multiple evaluators when multiple responses (perhaps from different perspectives) are desired. Evaluators log into Evalica and respond to the questions as they perform their evaluations. Responses can include answers to the questions, the evidence examined to reach those answers, and qualitative and quantitative assessments of the software architecture based on the answers. For users who want to work offline, response forms can be generated by Evalica, filled out in Microsoft Word or other word-processing software, and reimported into Evalica. Evaluators can track their own progress, and software evaluation administrators can track each evaluator’s progress, or the evaluation as a whole, through reporting screens.
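
A minimal sketch of how question assignment, response capture, and progress tracking could be represented is shown below. The functions and identifiers are assumptions for illustration only, not Evalica's implementation.

    from collections import defaultdict

    assignments = defaultdict(set)   # evaluator name -> set of assigned question ids
    responses = {}                   # (evaluator, question id) -> recorded response

    def assign(evaluator, qid):
        # The same question may be assigned to several evaluators for independent perspectives.
        assignments[evaluator].add(qid)

    def record_response(evaluator, qid, answer, evidence=""):
        # Capture the answer along with the evidence examined to reach it.
        responses[(evaluator, qid)] = {"answer": answer, "evidence": evidence}

    def progress(evaluator):
        # Fraction of an evaluator's assigned questions that have a recorded response.
        assigned = assignments[evaluator]
        answered = sum(1 for qid in assigned if (evaluator, qid) in responses)
        return answered / len(assigned) if assigned else 1.0

    assign("alice", "Q-101"); assign("bob", "Q-101"); assign("alice", "Q-412")
    record_response("alice", "Q-101", "Yes", evidence="ICD change log, revision C")
    print(f"alice: {progress('alice'):.0%}, bob: {progress('bob'):.0%}")  # alice: 50%, bob: 0%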

Rolling Up Results. The answers to detailed questions about the software architecture are an intermediate product of an evaluation, not the final one. Evalica allows users to create roll-up items, such as conclusions, recommendations, and deficiencies, and link them back to the responses that prompted them. Users can export such findings directly into Microsoft PowerPoint.
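
The traceability from roll-up items back to the detailed responses can be pictured as in the sketch below. The item kinds, fields, and printed export are illustrative assumptions rather than Evalica's actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class RollUpItem:
        kind: str                  # e.g., "conclusion", "recommendation", or "deficiency"
        summary: str               # the stakeholder-facing statement
        sources: list = field(default_factory=list)  # (evaluator, question id) links to responses

    finding = RollUpItem(
        kind="deficiency",
        summary="Fault-detection responsibilities are not clearly allocated across components.",
        sources=[("alice", "Q-412"), ("bob", "Q-412")],
    )

    # A roll-up export keeps the links alongside the finding, so a briefing chart
    # can cite the detailed responses that prompted it.
    print(f"{finding.kind.upper()}: {finding.summary}")
    for evaluator, qid in finding.sources:
        print(f"  derived from {qid} (response by {evaluator})")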

Evalica cannot do the hard work of a software evaluation: identifying which questions to ask and answering them. That work requires experienced evaluators and subject matter experts. Evalica can, however, reduce the burden of coordinating all of those evaluators and experts, freeing them to focus on the evaluation itself. Evalica has been used internally at Aerospace to support evaluations and has served as the primary mechanism for managing the software architecture evaluation framework questions. Beyond software architecture, Evalica can also support the logistics of other question-and-answer-based evaluations simply by loading its database with a different set of questions.

Go to main article: Evaluating Software Architectures for National Security Space Systems