Cyber Protection and Space System Acquisition

Space systems will undoubtedly be targeted for future cyberattacks—but by focusing on capabilities, rather than components, developers can ensure that space assets will deliver critical functionality when it is needed most.

Charles H. Lavine, Daniel Faigin, Michael H. Cole, Jerome Myers, and Deborah Shands

 

The process of acquiring a space system can take almost ten years, and once fielded, a system may remain in operation for well over a decade. This extended timeline gives an adversary years to plan, develop, coordinate, and execute an attack. Creating space systems that can withstand cyberattacks carried out using future technologies is a tremendous challenge. Mission protection must be addressed at each stage of the acquisition process: requirements development, systems engineering, testing, and sustainment. In each of these stages, systems engineers must focus on insulating critical system components so that eventual operators have the tools needed to continue mission-critical operations. The Aerospace Corporation has been applying its expertise in systems engineering and program acquisition to help ensure that future space systems are appropriately protected from the threat of cyberattack.

Developing Requirements

Cybersecurity requirements for space systems derive from three sources: environmental assumptions (what the system environment does for you), anticipated threats (what an attacker might do to you), and policies (what governance organizations require of you).

Environmental assumptions can make securing the system easier. For example, one can assume that the National Security Agency will provide a secure cryptographic algorithm and strong encryption keys. Another common assumption is that the system will be physically protected on a U.S. Air Force base.

Anticipated threats describe attacker goals. For example, one might expect an attacker to attempt to expose sensitive mission data, corrupt mission-critical information, or prevent users from commanding the satellite or payload. Not all threats apply to every system. If a system has no sensitive mission data, for example, then it would not be vulnerable to attacks designed to breach the confidentiality of mission data.

Policy directives from governance organizations identify the controls (high-level statements from which system-specific requirements can be derived) that must be implemented as well as the processes for certifying that the controls have been implemented. Standard controls are defined primarily to ensure that all systems are engineered to withstand common threats. As threats evolve, governance organizations update their control sets and impose stronger requirements.

Historical Perspective

Until the early 2000s, development of information security requirements was typically an ad hoc exercise that resulted in wide variation from one space system to the next. A common security lexicon was not used, and many systems were developed with no security considerations at all. System connectivity was rarely addressed. Later, as system interconnection became more common, security requirements emphasized enterprise security, with reduced focus on individual systems.

In 2002, the Department of Defense released its 8500 series directives, which defined policy and responsibilities regarding information assurance. That same year, the Federal Information Security Management Act (FISMA) was passed, requiring federal agencies to audit and assess the security of their information and information systems.

FISMA advocated a risk-based approach that emphasized compliance with appropriate standards. It sought to foster ongoing awareness of system security status through requirements for regular status reports. It also included an enforcement infrastructure for certification and accreditation.

The introduction of the 8500 series was a vast improvement over past requirements development efforts, and it brought much-needed standardization. Aerospace played a critical role in increasing security awareness and execution by providing training and guidance to space programs in requirements definition and implementation of the FISMA/8500 series governance.

New Governance for Space Systems

New requirements are slated to address many of the pitfalls subsequently identified in the 8500 series, including poor language usage, ambiguities, and a lack of requirements specifically targeting system acquisition. For example, as part of the FISMA implementation effort, the National Institute of Standards and Technology (NIST) released the Special Publication (SP) 800 series of requirements. SP 800-37 replaced the traditional certification and accreditation process with a six-step risk-management framework. The controls for the risk-management framework were presented in SP 800-53, Revision 3, which was developed in consultation with the defense and intelligence communities and reissued by the Committee on National Security Systems (CNSS, the organization responsible for establishing information assurance policies for national security systems) as CNSS Instruction No. 1253. Aerospace is helping to prepare the next revision of SP 800-53, with particular focus on areas related to assurance.

The NIST SP 800-37 risk-management framework changes the traditional task of risk identification and system certification from a static, procedural activity to a more dynamic activity that promotes effective management of information-security risks in the face of increasingly complex threats, vulnerabilities, and mission objectives. The process entails more than simply selecting the baseline standards and applying appropriate extensions; security engineers must tailor and supplement the controls that dictate requirements based on specific system threats and resiliency needs.
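
To make this tailoring step concrete, the following sketch (in Python) shows one way a control set might be assembled: start from an impact-level baseline, add threat-driven supplements, and remove controls justified as not applicable. The control identifiers follow the SP 800-53 naming style, but the baseline contents, threat supplements, and tailoring rules here are illustrative assumptions, not excerpts from the standard.

# A minimal sketch of control selection and tailoring under the
# risk-management framework. Control identifiers follow SP 800-53
# naming style; baseline membership and threat supplements are
# illustrative assumptions.

BASELINES = {
    "low":      {"AC-2", "AU-6", "IA-2"},
    "moderate": {"AC-2", "AU-6", "IA-2", "SC-8", "SI-4"},
    "high":     {"AC-2", "AU-6", "IA-2", "SC-8", "SI-4", "SC-7"},
}

# Hypothetical threat-driven supplements: extra controls applied
# when a specific threat is anticipated for the system.
THREAT_SUPPLEMENTS = {
    "supply_chain": {"SA-12"},
    "insider":      {"AC-6", "AU-9"},
}

def tailor_controls(impact_level, threats, not_applicable=()):
    """Baseline, plus threat supplements, minus justified exclusions."""
    controls = set(BASELINES[impact_level])
    for threat in threats:
        controls |= THREAT_SUPPLEMENTS.get(threat, set())
    return controls - set(not_applicable)

# A moderate-impact system facing supply-chain threats, with one
# baseline control tailored out (a decision that must be documented).
print(sorted(tailor_controls("moderate", ["supply_chain"],
                             not_applicable=["SI-4"])))
# -> ['AC-2', 'AU-6', 'IA-2', 'SA-12', 'SC-8']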

 

Challenges in Requirements Definition

A graphic depicting the security life cycle of the NIST SP 800-37, revision 1, risk-management framework.

Even with the new risk-based approach to requirements definition and compliance monitoring, considerable challenges remain. Two significant challenges are establishing system boundaries and defining security-control allocations. Well-defined boundaries establish the scope of protection for information systems, preventing that scope from becoming either too expansive or too narrow. They help identify mission-critical system components to ensure that the functions they provide are appropriately protected against cyber threats. For example, for space systems, security engineers must decide whether and when to split a ground system into its constituent parts of mission control, satellite operations, ground antenna, and network transport. The classification of data and data flows is another significant factor in establishing boundaries, as is the timeframe in which specific parts of a system become available during the acquisition process. Boundary definition, in turn, influences the categorization of a system, which expresses how serious a loss would be in terms of the confidentiality, integrity, and availability of information.
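
To illustrate how categorization might be computed, the sketch below applies a FIPS 199-style high-water mark across a system's information types, yielding an impact level for each of confidentiality, integrity, and availability. The information types and impact assignments are hypothetical, and CNSS Instruction No. 1253 refines this scheme for national security systems, so actual practice may differ.

# A sketch of security categorization: each information type is
# assigned an impact level for confidentiality, integrity, and
# availability, and the system takes the highest level per
# attribute. All information types and levels are hypothetical.

LEVELS = ["low", "moderate", "high"]

info_types = {
    "telemetry":        ("low",      "high",     "high"),
    "command_uplink":   ("moderate", "high",     "high"),
    "mission_products": ("high",     "moderate", "moderate"),
}

def high_water_mark(values):
    """Return the highest impact level among the given values."""
    return max(values, key=LEVELS.index)

# Axis 0 = confidentiality, 1 = integrity, 2 = availability.
categorization = tuple(
    high_water_mark([impacts[axis] for impacts in info_types.values()])
    for axis in range(3)
)
print("system categorization (C, I, A):", categorization)
# -> ('high', 'high', 'high')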

This new approach is changing the way requirements are allocated for space systems. The requirements-allocation process incorporates both common and system-specific requirements. While there is significant flexibility in selecting requirements from NIST SP 800-53, the protean nature of information-system technology makes it hard to select the right ones. For example, net-centric architectures (e.g., service-oriented architectures, cloud computing) can introduce subsystems that may not be part of the larger system throughout all stages of the life cycle (also known as dynamic subsystems). These may be operated by external entities that are not under the control of the space system operators. Consequently, the inclusion of such a subsystem may require reassessment of the security risks to the space system, using the appropriate controls. Because space systems typically require years to develop, requirements flexibility is essential.

Supply-Chain Considerations

 
Malware may be introduced via virtually any part of the supply chain network, leaving the remainder of that chain (depicted in the green-to-red change) vulnerable to attack. This can occur during many different stages: when the prime contractor purchases software (1, 2); when the subcontractor obtains software from a domestic developer that employs foreign developers (3); or when legacy (reuse) software contains code that was not constructed with a modern emphasis on security (4).


Over the past couple of decades, space systems have been increasingly exposed to threats through the supply chain. The globalization of manufacturing capabilities and the increased reliance upon commodity software and hardware have expanded the opportunities for the malicious modification of the basic building blocks of these systems in a manner that could compromise critical functionality.

The protection of critical technologies from a loss or compromise that would reduce a capability or eliminate a competitive advantage has long been a requirement for all acquisitions. Indeed, the procedures and techniques used to implement those protections have been significant cost drivers on some acquisitions. Still, in general, the risks presented by weaknesses in the supply chain were not considered significant enough to garner much attention. Concerns about the supply chain typically focused on ensuring a sufficient supply of components to sustain a program throughout its life, or on detecting materials, parts, and software that were being counterfeited for profit rather than for malicious purposes. Setting aside the economic impacts of counterfeiting (which could affect future product availability), the main perceived threats from counterfeit components were functionality, reliability, and performance failures. Those types of failures could often be detected during acceptance testing. In circumstances where special care was needed to protect technologies from loss or modification, trusted foundries and software developers were used to design and manufacture the parts, components, and software.


This diagram depicts space-based and other distributed systems connected in the cyberspace domain. Courtesy of Wikimedia Commons.

Recent counterfeiting has become more sophisticated. Advanced techniques have made counterfeiting for profit harder to detect. Those techniques have also made it economically feasible to produce counterfeit components that meet their specifications but also include malicious modifications that would not be detected during normal acceptance testing. Similarly, software or firmware can be maliciously modified in a manner that goes undetected. The modifications can be made directly by individuals who have access to the development processes or by the tools that are used during design, development, or manufacturing. The malicious capabilities could be triggered at a later time by other cooperating components or by environmental factors. The possibilities are seemingly endless. The problem is compounded by the fact that many electronics hardware foundries have moved overseas and most commercial software is created and maintained by international teams of software developers.

The Defense Department has begun publishing guidelines that address supply-chain threats. One such publication is the “Key Practices and Implementation Guide for the DoD Comprehensive National Cybersecurity Initiative 11 Supply Chain Risk Management Pilot Program,” dated February 25, 2010. It includes a description of 32 key practices that cover the entire life of a system. Another relevant document is DoD 5200.39, “Critical Program Information Protection Within the Department of Defense.” An associated document, “Program Protection Plan Outline and Guidance, Version 1.0,” provides guidance concerning the protection plan required by DoD 5200.39; it elaborates upon the supply-chain activities that must be performed before major milestone decisions.

Collectively, these publications underscore the widening perception that supply-chain risk management should be considered an integral part of program protection. The 2008 version of DoD 5200.39 expanded the definition of critical program information, bringing commodity products and broader supply-chain risk-management considerations into the realm of program protection. Prior to that release, the focus of DoD 5200.39 was protecting systems against the loss of traditional program information (though some defense services already had requirements for program protection that went beyond traditional program information—for example, the “critical system resource” referenced by Air Force guidelines).

As these publications explain, effective supply-chain risk management begins with an analysis of mission-critical functions and the components that implement them. This analysis addresses the threats, vulnerabilities, potential countermeasures, and associated risk trade-offs. The idea is to focus on mission impacts, rather than on specific measures of data protection. Logic-bearing components (hardware, firmware, and software) are of special interest because they offer the most opportunity for a malicious attack that might go undetected until after a critical mission capability had been compromised.
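
As a rough illustration of such a criticality analysis, the sketch below maps hypothetical mission-critical functions to the components that implement them and flags the logic-bearing ones for deeper supply-chain scrutiny. The component names and attributes are assumptions made for illustration only.

# Sketch of a criticality analysis: trace mission-critical
# functions to their implementing components and flag the
# logic-bearing ones (hardware, firmware, software) for
# supply-chain scrutiny. All names are hypothetical.

components = {
    "star_tracker_fpga": {"logic_bearing": True},
    "rf_amplifier":      {"logic_bearing": False},
    "flight_software":   {"logic_bearing": True},
    "solar_array":       {"logic_bearing": False},
}

mission_functions = {
    "attitude_determination": ["star_tracker_fpga", "flight_software"],
    "downlink":               ["rf_amplifier", "flight_software"],
}

def components_needing_scrutiny(functions, parts):
    """Logic-bearing components that support any critical function."""
    critical = {c for deps in functions.values() for c in deps}
    return sorted(c for c in critical if parts[c]["logic_bearing"])

print(components_needing_scrutiny(mission_functions, components))
# -> ['flight_software', 'star_tracker_fpga']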

One of the 32 key practices in the Defense Department guide is generically described as “know your suppliers.” Globalization of the supply chain has made this difficult. Acquisition managers tend to know the integrators and first-tier component suppliers well; however, the use of commodity products complicates the task of identifying the second- and third-tier component suppliers. The most significant threats may be at the level of the individual logic-bearing component, such as a field programmable gate array.
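
The visibility problem can be pictured with a toy supplier graph: walking outward from the prime integrator quickly reaches second- and third-tier suppliers that the program office has likely never evaluated. The supplier names in the sketch below are fabricated for illustration.

# Sketch of supplier-tier traversal: starting from the prime
# integrator, walk the supply chain breadth-first and group
# suppliers by tier. All supplier names are hypothetical.

suppliers = {
    "PrimeIntegrator": ["AvionicsCo", "GroundSWCo"],
    "AvionicsCo":      ["FPGAVendor", "ConnectorCo"],
    "GroundSWCo":      ["OSVendor"],
    "FPGAVendor":      ["OffshoreFab"],
}

def suppliers_by_tier(root, graph):
    """Breadth-first walk; returns {tier: [suppliers at that tier]}."""
    tiers, frontier, seen, depth = {}, [root], {root}, 0
    while frontier:
        tiers[depth] = frontier
        frontier = [s for node in frontier
                    for s in graph.get(node, []) if s not in seen]
        seen.update(frontier)
        depth += 1
    return tiers

for tier, names in suppliers_by_tier("PrimeIntegrator", suppliers).items():
    print(f"tier {tier}: {names}")
# The tier 2 and tier 3 entries (e.g., FPGAVendor, OffshoreFab) are
# the suppliers acquisition managers rarely see directly.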

Unfortunately, acquisition managers often lack contractual authority for obtaining sufficient visibility into their supply chain—and even when they can identify suppliers, they generally do not have the resources to investigate them directly. To provide greater insight, the Defense Intelligence Agency’s Threat Analysis Center compiles information about the threat of malicious modification to specific components in the supply chain. Programs may request information about specific products and suppliers, and the center provides details about risks that would be difficult for individual programs to obtain through their own research. This feedback is yet another factor that goes into the risk-management decisions that the program eventually makes.

Certification and accreditation, along with supply-chain risk management, should be performed throughout the program life cycle and revisited prior to every systems engineering review and milestone decision. Early in the design phase, a criticality analysis is unlikely to reveal specific components that need to be investigated in detail, but it will identify critical capabilities that need to be protected. This ensures that techniques for reducing the exposure of those critical capabilities to vulnerabilities introduced through the supply chain are considered as part of the trade space.

Security Assessment for Space Systems Acquisition


Security vulnerability test results. Security engineers collect test results from multiple platforms within the target space system. In this example, automated test results show that Windows XP has been deployed across multiple computers with inconsistent configurations, some of which deviate from DOD security configuration guidance. The engineer must still perform manual test procedures against potential vulnerabilities that are “not reviewed” yet. “Open” refers to discovered but unresolved vulnerabilities.

Security assessment is a part of verification and validation activities, which are themselves part of systems engineering processes. In the domain of information security, these concepts can be framed as: Does the system meet its security requirements, and do those requirements make the system acceptably secure? Systems are typically tested against security requirements both by the system developer and by an independent assessment team; Aerospace often oversees system developer testing and serves on independent assessment teams.

Control validation is the most common form of security testing for space systems. It establishes whether or not the system and its components comply with security controls and configuration mandates. The assessment team generally includes independent agents, but may also include representatives from the development and sustainment contractors and the program office.

An important component of control validation is configuration compliance testing. For defense systems, the Security Technical Implementation Guides published by the Defense Information Systems Agency (DISA) serve as the primary source of configuration guidance. Product- and technology-specific guides provide input to DISA’s vulnerability scanning tools; these tools partially automate compliance testing to aid in the detection of component vulnerabilities.
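
The sketch below suggests how findings from such scans might be rolled up per host. The status labels mirror the figure above (“open” and “not reviewed”), but the record format, rule names, and host names are assumptions rather than the output of any real scanning tool.

# Sketch of rolling up compliance-scan findings across hosts to
# spot inconsistent configurations. Statuses mirror the figure:
# "open" = discovered but unresolved, "not reviewed" = still needs
# manual testing. The data itself is hypothetical.

from collections import Counter

scan_results = [
    {"host": "ops-console-1", "rule": "password-length", "status": "open"},
    {"host": "ops-console-1", "rule": "audit-enabled",   "status": "closed"},
    {"host": "ops-console-2", "rule": "password-length", "status": "closed"},
    {"host": "ops-console-2", "rule": "usb-disabled",    "status": "not reviewed"},
]

def summarize(results):
    """Count finding statuses per host."""
    summary = {}
    for finding in results:
        summary.setdefault(finding["host"], Counter())[finding["status"]] += 1
    return summary

for host, counts in summarize(scan_results).items():
    print(host, dict(counts))
# Nominally identical consoles disagreeing on the same rule (here,
# password-length) is the kind of inconsistency the figure shows.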

Penetration testing sometimes supplements control validation and compliance testing. Penetration testing can prove the existence of a suspected vulnerability, but cannot prove a lack of vulnerability. Therefore, it is used sparingly to confirm details of potential attack paths. During penetration testing, a team of security professionals plays the role of an adversary in a simulated cyberattack. The team often uses software that partially automates attacks and includes scanning tools similar to those used during validation and compliance testing. While penetration testing can be effective at demonstrating the existence and implications of security vulnerabilities, it is also the most invasive and potentially destructive form of security testing and is not a substitute for compliance testing. Because it can disrupt services and corrupt the system, penetration testing on operational systems and critical assets must be carefully planned to minimize undesirable side effects, and most system owners relegate penetration testing to chartered “red teams,” such as those operating under the 24th Air Force.

The results of security testing—including any identified vulnerabilities—contribute to a risk assessment. The purpose of a risk assessment is to highlight noteworthy security flaws in the system and characterize the overall risk to mission success that they may cause. Vulnerabilities identified during tests are weighed against the likelihood that they might be exploited and the impact of such an attack. In determining likelihood and impact, assessors should consider the context for identified vulnerabilities. Based on the security-risk assessment, the program manager authorizes the system’s operation in its current configuration (and accepts the risk of operating it) or denies authorization to operate (indicating that there is an unacceptable level of operational risk and additional mitigations are required). The risk assessment and supporting documentation also enable the notion of reciprocity: security test results and risk assessments provide information that can help officials understand the risks of connecting two systems.
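
One common way to combine likelihood and impact is a simple risk matrix. The sketch below uses an illustrative three-level scale; the matrix cells and the worst-case roll-up are assumptions, not a mandated rating scheme.

# Sketch of a likelihood-by-impact risk matrix applied to
# vulnerabilities found during testing. The three-level scale and
# the matrix cells are illustrative assumptions.

LEVELS = ["low", "moderate", "high"]

RISK_MATRIX = {
    ("low", "low"):           "low",
    ("low", "moderate"):      "low",
    ("low", "high"):          "moderate",
    ("moderate", "low"):      "low",
    ("moderate", "moderate"): "moderate",
    ("moderate", "high"):     "high",
    ("high", "low"):          "moderate",
    ("high", "moderate"):     "high",
    ("high", "high"):         "high",
}

def overall_risk(vulnerabilities):
    """Characterize overall risk as the worst individual rating."""
    ratings = [RISK_MATRIX[(v["likelihood"], v["impact"])]
               for v in vulnerabilities]
    return max(ratings, key=LEVELS.index)

findings = [
    {"name": "weak TLS configuration", "likelihood": "moderate", "impact": "moderate"},
    {"name": "unpatched OS service",   "likelihood": "high",     "impact": "high"},
]
print("overall mission risk:", overall_risk(findings))  # -> high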

Conclusion

Cyberattacks on space assets will certainly occur in the future. The success of a space system may well depend on its ability to provide even degraded mission support in the face of such an attack. Defense agencies have begun to recognize the need for improved mission protection and are developing guidance to drive increased focus on system functionality and operating processes to support it. Aerospace security analysts are involved at every stage of the space system acquisition process and are therefore in a unique position to help identify the necessary controls, mission-critical components, and supply-chain risks that need to be understood to ensure mission functionality. Space systems developed with an appropriate focus on mission protection stand a better chance of retaining mission-critical functionality when it is most needed.

Further Reading

“DoD Advances Supply Chain Risk Management Efforts,” IAnewsletter, Vol. 14, No. 3 (Summer 2011).

“Program Protection Plan Outline and Guidance, Version 1.0” (Office of the Deputy Assistant Secretary of Defense for Systems Engineering, Washington, DC, July 2011).

“Key Practices and Implementation Guide for the DoD Comprehensive National Cybersecurity Initiative 11 Supply Chain Risk Management Pilot Program” (Department of Defense, Washington, DC, Feb. 25, 2010).

“Top 10 Application Security Risks,” OWASP: The Open Web Application Security Project, https://www.owasp.org/index.php/Top_10_2010-Main (as of Dec. 15, 2011).

“Top 25 Most Dangerous Software Errors,” Common Weakness Enumeration, http://cwe.mitre.org/top25 (as of Dec. 15, 2011).


Sidebar: Laser-Scripted Modification of Nanomaterials for Supply-Chain Integrity

Sidebar: Secure Coding

Sidebar: The Risk Management Framework