Evaluating Software Architectures for National Security Space Systems

A framework for evaluating the software architectures of national security space programs at the front end of development saves costs and minimizes schedule delays.

First published May 2013, Crosslink® magazine

Alan Unell

 

National security space programs rely on dependable and effective software for ground and space systems. Developing the software for these systems presents immense challenges. Ground system software requires millions to tens of millions of lines of code. Spaceflight software requires far fewer lines; however, the complexity of real-time embedded systems and the criticality of the mission add to the software development challenges. Historically, insufficient front-end work on software architecture and design has led to software acquisitions for ground and space systems that have incurred cost overruns of as much as 50–150 percent and corresponding schedule delays that can last for years.

In an effort to address software-related cost and schedule issues and to reemphasize front-end engagement, The Aerospace Corporation has instituted a framework for evaluating its customers’ software architectures. With a framework designed for national security space systems, areas that demand attention can be identified, understood, and rectified earlier in a system’s development lifecycle, thereby minimizing avoidable rework and operational deficiencies later in the process.

Software Architecture Fundamentals

Software architecture refers to a simplified representation, or model, of a software system and encompasses the significant decisions about a system’s organization, including the structural elements and interfaces that constitute the system; the behavior, as shown by the interactions among the structural elements; the composition of the structural and behavioral elements into a larger subsystem; and the architectural style guiding the organization. The software architecture also accounts for a software system’s features, such as usage, functionality, performance, resilience, reuse, comprehensibility, economic and technology trade-offs, and aesthetic concerns.

When implemented correctly, the software architecture can demonstrate a software system’s technical feasibility to its stakeholders. In DOD environments, it allows development and acquisition stakeholders to make more-accurate programmatic and technical decisions at each milestone in the development lifecycle. Key DOD stakeholders may include enterprise representatives, contractors, operators, commercial users, product vendors, and subject matter experts.

A major challenge in software architecting is to identify the mandatory attributes of a software system, which are determined by stakeholder and domain-specific concerns. For example, a software system may require several simultaneous capabilities specific to national security space, such as commanding a vehicle and processing sensor data while being resilient to attacks, scalability to meet peak and future usage, flexibility to incorporate new capabilities, and timeliness and reliability to support the warfighter.

During the preliminary software design phase, the architectural design principles, requirements, constraints, and assumptions provide formal guidance to the software development engineers. In this phase, engineers often discover initial design constraints and assumptions that must be revised so that detailed software designs and implementations will meet target system requirements. Such cases can emerge from changes to existing requirements or from the addition of new requirements. While revisions to architectural designs are ideally made quickly to remove software development roadblocks, they are sometimes made without fully considering or understanding their impact on other areas of the software architecture. Conducting broad or even targeted software architecture evaluations at regular intervals during the preliminary design and implementation phases assures stakeholders that the evolving architectural design will continue to meet all functional and nonfunctional system requirements.

Software Architecture Standards

The Aerospace Corporation has instituted a framework for evaluating its customers’ software architectures. Here is a view of that framework, named Evalica, and its question browser and editor. Users can browse, reorganize, and edit questions in a common repository, sharing the information with other developers.

An IEEE working group developed IEEE Standard 1471-2000 (“Recommended Practice for Architectural Description of Software-Intensive Systems”) with input from industry, academia, and other standards bodies. It provides a conceptual framework for architectural descriptions of software systems. Designed to be independent of other architectural description techniques, this standard establishes content requirements for a given software architecture description. These include identification of the stakeholders and their concerns, the views of the software system from the perspective of related concerns, the viewpoints (templates) used to develop those views, consistency among the views, and the rationale for architectural decisions. (A view is a representation of a set of software system components and the relationships among them.) The standard leaves the choice of views to the software architects, so there is no single prescribed depiction of software architecture.
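
To make these content requirements concrete, the following is a minimal, hypothetical Python sketch of how the elements required by the standard might be represented; the class and field names are illustrative rather than taken from the standard’s formal vocabulary, and the sketch is not part of any Aerospace tool.

```python
from dataclasses import dataclass
from typing import List

# Illustrative (not normative) model of the content elements that
# IEEE 1471-2000 requires in an architectural description.

@dataclass
class Stakeholder:
    name: str                  # e.g., "operator", "contractor"
    concerns: List[str]        # e.g., "availability", "resilience to cyber attack"

@dataclass
class Viewpoint:
    name: str                  # the template that governs a view
    covered_concerns: List[str]
    notation: str              # e.g., "UML component diagram"

@dataclass
class View:
    viewpoint: Viewpoint
    components: List[str]      # software system components shown in this view
    relationships: List[str]   # relationships among those components

@dataclass
class ArchitectureDescription:
    stakeholders: List[Stakeholder]
    views: List[View]
    rationale: str             # why the significant architectural decisions were made

    def uncovered_concerns(self) -> List[str]:
        """Stakeholder concerns that no view addresses."""
        covered = {c for v in self.views for c in v.viewpoint.covered_concerns}
        raised = {c for s in self.stakeholders for c in s.concerns}
        return sorted(raised - covered)
```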

According to the standard, a common language should be established during software design and evaluation in which different program concerns can be expressed, negotiated, and resolved (e.g., a program-tailored DOD architecture framework or unified modeling language). Without such a language, it is difficult to share among the stakeholders and development team the program’s design philosophy, which guides the day-to-day development decisions that influence the quality and utility of the final software product. An established communication medium also facilitates regular software architecture evaluations, which ensure that evolving program and stakeholder needs continue to be satisfied and provide ongoing insight into the potential impact of new or changed requirements and design constraints.

Software Architecture Evaluations

The key factors contributing to software cost and schedule overruns are incomplete mapping of software system requirements to system design attributes and an incomplete understanding of both. Conducting early and regular software architecture evaluations ensures that, early in a program’s lifecycle, the software system design addresses all of the operational requirements and stakeholder needs. It also provides an early reality check of the program plans. Throughout the development lifecycle, understanding the software architecture provides a methodical basis for managing interactions among the structural and behavioral elements, replacing individual elements without breakage, and withstanding attacks on or failures of the system with minimal impact to ongoing operational activities.

Evalica’s question response editor. Responses can include rich text and hyperlinks. By developing a framework for evaluating software architectures designed for national security space systems, areas that demand attention can be identified, understood, and rectified earlier in a systems development lifecycle.

Software assurance is the practice of building quality into software from the start through sound development methods, rather than testing for and retrofitting quality after the fact. Paying attention to compliance and quality assurance early in the software development lifecycle is therefore important. Conducting software architecture evaluations is the best way to ensure that the software system scope is adequately and correctly defined at the front end, avoiding wasted software development effort later in the process.

Software architecture evaluations that focus on thorough requirements analysis and design, early verification and validation, and up-front prototyping and simulation can avoid costly fixes downstream. Such software architectural practices can reduce cost escalations for large critical software systems.

Software architecture evaluations also assess the faithful derivation of the architecture and design from the software system’s requirements and constraints. The software architecture must provide for, or at least not preclude, any of the functional capabilities defined in the software system’s specifications, including capabilities that are anticipated for the future. Since the software architecture contains the blueprints for lower levels of design and implementation, it should describe all the software’s requirements, functional capabilities, internal and external interfaces, significant algorithms, and usage constraints.
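
To illustrate what checking this derivation can mean in practice, the following hypothetical sketch flags specified capabilities that no architectural element provides for; the function, data, and names are illustrative and are not part of the Aerospace framework.

```python
from typing import Dict, List, Set

# Hypothetical traceability check: every specified capability should be
# provided for (or at least not precluded) by some architectural element.

def uncovered_capabilities(
    specified: Set[str],
    element_capabilities: Dict[str, List[str]],
) -> Set[str]:
    """Return specified capabilities that no architectural element claims to support."""
    covered = {cap for caps in element_capabilities.values() for cap in caps}
    return specified - covered

# Illustrative usage with made-up capability and element names.
spec = {"command vehicle", "process sensor data", "replan mission"}
elements = {
    "command subsystem": ["command vehicle"],
    "sensor processing chain": ["process sensor data"],
}
print(uncovered_capabilities(spec, elements))  # -> {'replan mission'}
```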

However, not all of the software system’s requirements must be determined before architectural design. In iterative software development lifecycles, some requirements remain incompletely defined at any given point, and the design need only be specified to the level of detail appropriate for those requirements. As software system uncertainties are removed through prototyping, analysis of available options, and other methods, the software architecture should evolve accordingly in scope and specificity. In this methodology, software architecture evaluations take into account the appropriate level of detail and the evolvable nature of the architecture.

Evaluation Methods

Software developers use a variety of methods to evaluate software architectures. One widely used technique is the Architecture Tradeoff Analysis Method (ATAM), developed by the Software Engineering Institute at Carnegie Mellon University (Pittsburgh). This method assesses the consequences of software architectural decisions as they relate to quality attribute requirements and business goals. It provides a set of steps that help stakeholders ask appropriate questions to discover potentially problematic software architectural areas and use scenario-based assessments early in a software development program to address quality attributes (e.g., modifiability, performance, and availability). The method is aimed at raising awareness of critical issues, localizing and analyzing trade-offs, and focusing on the highest-risk areas.
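
The scenario-based assessments such methods rely on can be illustrated with the hypothetical sketch below, which records a single quality-attribute scenario in the stimulus/response style common to these evaluations; the field names and example values are illustrative only.

```python
from dataclasses import dataclass

# Hypothetical quality-attribute scenario of the kind used in scenario-based
# evaluations such as ATAM; the field names and values are illustrative.

@dataclass
class QualityAttributeScenario:
    attribute: str         # quality attribute being probed
    stimulus: str          # event that arrives at the system
    environment: str       # conditions under which it arrives
    response: str          # desired system behavior
    response_measure: str  # how the response is judged

modifiability_scenario = QualityAttributeScenario(
    attribute="modifiability",
    stimulus="a new sensor data format must be ingested",
    environment="during normal operations, between ground contacts",
    response="format handling is added without changing existing processing chains",
    response_measure="change is confined to one module and regression tests pass",
)
```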

ATAM and similar methods focus primarily on the process of conducting software architecture evaluations and are not targeted at specific software applications. Aerospace has developed its own software architecture evaluation framework, complementary to ATAM, designed for space system development. It provides a set of questions and evaluation guidance tailored to national security space systems software.

The Aerospace Software Architecture Evaluation Framework

Evalica’s modularity and layered architecture dimension. Software architecture refers to a simplified representation or model of a software system’s organization, including the structural elements and interfaces that constitute the system, as well as its behavior and composition, and the architectural style guiding the organization.

The Aerospace Corporation’s software architecture evaluation framework questions are grouped into these top-level categories: architecture fundamentals, architecture documentation, architectural functionality and quality attributes, and architecture development and evolution methodology. Each category is then broken into dimensions that represent areas of concern and evaluation criteria. The dimensions include conventional software quality attributes (e.g., scalability and availability) and concerns specific to national security space programs such as reprogrammability, resilience to cyber attack, and appropriateness of commercial and government off-the-shelf products.

The framework questions are written to evaluate national security space systems software and are defined by three levels. Level one questions are nondomain specific and are applicable to most software systems. They provide a basis for discussions between subject matter experts and software experts to refine the generic questions into level two questions that pertain to national security space domains (e.g., command and control, mission planning). Based on the requirements of the software system being evaluated, the evaluation team can then tailor the level one and level two questions into national security space system-specific (level three) questions.
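
A minimal sketch of how this hierarchy of categories, dimensions, and leveled questions might be organized is shown below; the categories mirror those named above, while the dimension, sample questions, and program names are hypothetical and not drawn from the framework’s actual question repository.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative organization of the framework's question hierarchy; the
# dimension, sample questions, and program names below are hypothetical.

@dataclass
class Question:
    text: str
    level: int  # 1 = generic, 2 = national security space domain, 3 = system-specific

@dataclass
class Dimension:
    name: str                                    # e.g., "scalability"
    questions: List[Question] = field(default_factory=list)

@dataclass
class Category:
    name: str                                    # e.g., "architecture documentation"
    dimensions: List[Dimension] = field(default_factory=list)

scalability = Dimension("scalability", questions=[
    Question("How does the architecture accommodate growth in workload?", level=1),
    Question("How does the ground architecture scale to additional space vehicle "
             "types and higher contact rates?", level=2),
    Question("Can the hypothetical Program X ground segment support its planned "
             "follow-on constellation without redesigning the message bus?", level=3),
])

framework = [
    Category("architecture fundamentals"),
    Category("architecture documentation"),
    Category("architectural functionality and quality attributes", [scalability]),
    Category("architecture development and evolution methodology"),
]
```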

Space system software developers can support the front-end evaluation of software for such systems in a number of ways. One way is by interpreting the criteria for various national security space systems. For example, software developers might determine the scalability of a ground system and its capability to support a variety of different types of space vehicles. (This type of work differs from determining the scalability of an information technology system.)

Software developers can also work to improve the program evaluation of current and next-generation software systems by harnessing decades of Aerospace engineering and scientific experience in building national security space systems, along with the software development expertise that is built into the software architecture evaluation framework. Aerospace’s evaluation framework development team consists of several software architects and engineers with many decades of experience in building or overseeing national security space systems.

The software architecture evaluation framework can be used at different phases or milestones throughout a program’s lifecycle. The manner in which the evaluation framework is applied and the benefits gained from it will vary depending on the particular phase or milestone. For example, during the early project and presystems acquisition phase, the framework’s software architecture design questions help to determine potential constraints on software system concepts. Asking the right questions up front ultimately serves as the foundation for the creation of system-level requirements.

The Aerospace software evaluation framework offers capabilities that enable full evaluations of complete and existing operational software systems. The framework can also be tailored to address the specific level of software architectural design detail that is commonly expected at a particular review milestone (e.g., system, preliminary, and/or critical design review). Using the software architecture evaluation framework at a preliminary design review milestone is invaluable in providing a detailed picture of the contractor’s architectural design and allows for supplemental input into the formal review process.

Applying the Aerospace Evaluation Framework

Several national security space programs have implemented the Aerospace software architecture evaluation framework since its inception in 2010. For example, the software architecture of a ground system currently under development was evaluated using the framework as part of its preliminary design review. The Aerospace framework more thoroughly examined the risks and opportunities that an earlier ATAM-based evaluation of the software system had identified; in scoping the Aerospace evaluation, those ATAM-identified risks were used to select the evaluation criteria. This program illustrated how the Aerospace framework complements general, scenario-based evaluation methods such as ATAM.

In another program evaluation, the Aerospace framework was used to analyze performance issues in a software system for a national security space system. The framework’s methodical analysis of the software system was instrumental in identifying the sources of the performance issues, which would not have been possible using ad hoc software design techniques.

The Aerospace framework has also been used to identify potential software architecture and design areas that needed evaluation during the source selection process.

These examples have helped Aerospace validate the utility of the software architecture evaluation framework for a variety of software architecture and design evaluation purposes. The work has also enhanced the framework itself by adding lessons learned to its database.

Software architecture evaluations have not historically been recognized as useful tools for effective technical oversight of programs. That has changed, and today Aerospace has developed relevant guidance and specific language for drafting requests for proposals and subsequent contracts. Part of this guidance is the recommendation that Aerospace’s customers should include software architecture evaluations as part of the front-end development process.

Acknowledgements

The author would like to thank Paulette Acheson, John Arcos, Sheri Benator, Eric Dashofy, Steven Meyers, Leon Palmer, Eltefaat Shokri, Mario Tinto, and Richard Yee for creating the Aerospace software architecture evaluation framework and for contributing to this article.

Further Reading

Aerospace Report No. ATR-2012 (9010)-12, “Evaluation Software Architectures in Space and Ground Systems” (The Aerospace Corporation, El Segundo, CA, 2012).

R. Banani and T. C. N. Graham, “Methods for Evaluating Software Architecture: A Survey,” Technical Report No. 2008-545, School of Computing, Queen’s University at Kingston, Ontario, Canada (April 14, 2008).

B. Boehm and V. R. Basili, “Software Defect Reduction Top 10 List,” IEEE Computer, Vol. 34, No. 1, pp. 135–137 (January 2001).

P. Clements et al., Documenting Software Architectures: Views and Beyond, Second Edition (Addison-Wesley Professional, Reading, MA, 2010).

J. Garland and R. Anthony, Large-Scale Software Architecture: A Practical Guide Using UML (Wiley, Hoboken, NJ, 2003).

IEEE Standard 1471-2000, “Recommended Practice for Architectural Description of Software-Intensive Systems,” http://standards.ieee.org/findstds/standard/1471-2000.html (October 26, 2012).

P. Kruchten, “Architectural Blueprints—The 4+1 View Model of Software Architecture,” IEEE Software, Vol. 12, No. 6, pp. 42–50 (November 1995).

F. Shull, “Disbanding the Process Police: New Visions for Assuring Compliance,” IEEE Software, Vol. 29, No. 3, pp. 3–6 (May/June 2012).

About the Author

Alan Unell

Senior Project Leader, Software Engineering Subdivision, joined Aerospace in 2008 and led the team that developed the software architecture evaluation framework. His 30 years in the aerospace industry include experience at Hughes Aircraft/Raytheon as a programmer, program manager, chief engineer, and mission assurance director. He has a Ph.D. in mathematics from Northwestern University.

Go to sidebar:  Evalica: Supporting Software Evaluation Logistics