Meeting National Security Space Needs in the Contested Cyberspace Domain
Growing concern over emerging cyber threats is shifting attention to mission resilience—the ability to operate through new and evolving threats in the cyberspace domain.
During the last two decades, the U.S. government and the private sector have developed a heightened awareness of the challenges to national security emerging from cyberspace. News reports regularly highlight the vulnerability of industrial systems to intrusion, the resultant loss of massive amounts of data, and even the loss of control over industrial processes. These challenges raise questions about the resilience of the functions of the economy and government while under cyberattack, including those functions provided by the national security space community.
As cyberspace becomes an increasingly contested domain, many aspects of national security space are also in flux. National security space has witnessed several periods of transition involving the nature of threats to space systems, the purpose and structure of space missions, the technologies that affect space system construction, and the role of systems in the missions they serve. Today, there are significant transitions occurring in all of these dimensions.
Many of today’s national security space capabilities were first conceived during the Cold War with well-defined and well-studied adversaries, and many of these capabilities (such as missile warning) were developed as isolated, single-mission systems. Today’s environment is dramatically altered and the threats are very different.
The strategic concerns of the Cold War are a relatively small, although still important, component of a much more complex environment today. The emphasis in the space community is now on fusing a wide variety of data sources to achieve information superiority for warfighters and intelligence analysts. This has created unrelenting pressure to connect information systems and to communicate all over the globe, including to users in the field. This connectivity is both an enabler and an Achilles heel: creating pathways for information to get out to authorized users can also help adversaries find pathways into that same information.
Connectivity of systems is not the only source of vulnerability. If it were, then the solution would be simple but painful—disconnect the systems. This has been the response of last resort taken by several defense contractors under cyberattack in the last few years, but it would be a crippling response if it were necessary in the midst of an international conflict.
Another source of vulnerability is the increasing reliance on a wide range of commercially supplied hardware and software components that are manufactured throughout the world and provide ample opportunity for the introduction of malicious hardware and software. Today's space system command and control centers contain a wide range of routers, firewalls, printers, desktops, telephones, video devices, disk farms, computing clusters, databases, Web servers, and other information processing capabilities, components that may, and probably do, originate from indeterminate sources.
The inexorable trend of increased connectivity among national security space systems—with components of uncertain pedigree—amplifies the risks associated with system (and systems of systems) complexity. Increased complexity alone raises the risk of a cyberattack because more attention must be focused on managing the system just to achieve proper functioning, often at the expense of understanding the risks being created and the new avenues for cyber intrusion. Assessing whether increasingly complex systems (and even more complex systems of systems) can function properly under cyberattack becomes correspondingly more difficult.
This complexity and the sheer magnitude of recent national security space systems have also changed the system acquisition process. Space systems are now acquired as separate segments with distinct acquisitions. These separate acquisitions make it harder to fully assess end-to-end behaviors when all of the segments are put into operation, and make it difficult to identify side effects or other unintended behavior under cyberattack. The result is that developers often fail to obtain anything beyond a superficial understanding of the end-to-end system design, which limits their ability to assess the true risks to the system.
New Technology Risks
The increasing pace of introducing new technologies into national security space missions creates another set of challenges in the cyberspace domain. For example, the need to make ground systems and mission processing systems more efficient—in effect, to do more with less—is fueling a desire to migrate terrestrial information technology capabilities to cloud services. Cloud computing allows computer users to tap into servers and storage systems scattered around the country and the world that are tied together by networks. Cloud services are designed to give users better, more reliable, more affordable, and more flexible access to much-needed information technology infrastructures. On the other hand, the most significant barrier to cloud adoption is trust: Will mission data confidentiality, integrity, and availability be better ensured by residing on the cloud? Will mission stakeholders be able to rely on the cloud? Will the cloud be as resilient and robust as the information would be in a more traditionally independent private operational environment? Aerospace is working with its customers to help them understand the vulnerabilities associated with cloud-based services.
Another area of concern is new mobile-user devices including smartphones, iPads, and other tablet computers, which are rapidly becoming integrated into the operational environment. As these new devices enable new concepts of operation, they introduce a dynamically changing need for service from national security space systems, as well as an increase in the need for adaptive, on-demand service provisions. Agile acquisition strategies and rapidly adaptable space asset architectures are becoming increasingly necessary to address the effects of these transformative and rapid technology changes. But these changes, as with migration to cloud environments, raise the specter of new vulnerabilities in national security space systems. Aerospace is conducting research on wireless security effects and countermeasures. In the future, new end-to-end assessment frameworks will be essential for understanding the dynamic system risks and for updating systems to address new threats.
Even the devices and software that are incorporated into national security space systems for the purpose of security represent an added level of complexity that makes managing systems a challenge. Firewalls and other devices that restrict information flow from one security regime to another, authentication and key management systems, access audit systems, and other mechanisms to control and observe possibly hostile access to mission critical information are themselves complex to develop, test, understand, configure, and control during operations. The result is that while some means of cyberattack may be attenuated by these mechanisms, others may be introduced, and the overall attack surface of the systems may become larger, and certainly becomes harder to understand. Furthermore, when systems with distinct mechanisms for implementing security policies are connected in new ways, inconsistencies may arise, introducing new gaps in the defense mechanisms that may be exploited by attackers.
Concern about cyber vulnerabilities has been dramatically growing, commensurate with the number of publicly acknowledged successful penetrations into information systems. Many of these cyberattacks have focused on theft of personal information (such as social security numbers and credit card numbers) used for identity theft and financial gain. The trend rapidly evolved to include cyber intrusions to steal intellectual property from the government and from private industry. In the last 5 to 10 years, such intrusions have become multiyear cyber campaigns across a broad spectrum of government and industry. To defend against these attacks, an entire industry has arisen to provide security to enterprises and individuals who use and depend on the Internet. In a predictable response, cyberattacks have extended to this industry. For example, there have been significant attacks against cryptographic certificate and security providers in an attempt to gain authentication information that will enable future cyberattacks to pass through existing protection barriers.
This growing list of cases certainly represents an alarming trend, and the theft of information is a serious concern for the U.S. government. But this trend does not accurately foretell the kind of threat that will likely materialize during a conflict with a near-peer adversary. In fact, today’s cyber threats and attacks could be viewed as preparation of the (cyber) battlefield. As systems are penetrated to extract information, it is possible that implants are being put in place that could be called upon in times of conflict.
The most concerning threat during a cyber conflict will likely be attacks that disable systems through either overt action (such as denial of service) or covert action (subtle manipulation of data and systems). The latter is particularly worrisome because of the difficulty of identifying the threat, attributing attacks to adversaries, understanding the extent of compromise, and assessing the extent to which trust in the systems has been endangered. No commander wants to engage in a mission with equipment he or she cannot trust. Once systems are compromised during conflict, the impact may go beyond the specifics of the attack. Entire systems may become untrusted, and therefore unused. Deceptive false indicators and warnings can provoke this unfavorable condition, so that trust may be lost even though actual cyber compromise has not been achieved.
Protecting Space Systems
The current offensive/defensive posture in cyberspace is asymmetrical: the offense has a substantial advantage over the defense. Cybersecurity is only as good as its weakest link. Consequently, the defense must protect everywhere, and must execute that protection perfectly. The offense, on the other hand, need only identify and exploit the weakest link of a system. These types of attacks on space systems are not currently coming from everywhere, but they could come from anywhere.
Attacks can be directed at many layers of a system’s operational structure and can cross layers. These include a physical layer with wired and wireless communication media; a hardware layer of network interfaces, routers, antennas, encryption/decryption devices, firewalls, computers, printers and many others; a system software layer with firmware in many of the devices on a network and the operating systems, database management systems, Web servers, virtualized servers, etc.; an application software layer with a broad range of custom-developed and commercial-off-the-shelf software such as e-mail systems, document management systems, and collaboration tools; and a mission layer that comprises the unique software and hardware used to accomplish a particular mission (such as missile warning).
For defense in the cyber domain, each layer must be protected in its own way. Much attention has been focused on protecting the physical and network layers of national security space systems. However, an attacker who introduces malware at higher layers can bypass these layers. Similarly, the best efforts to protect applications can be bypassed by attacks at the physical layer. All of these layers can be bypassed through social engineering, which manipulates the people who operate and use these systems through malicious tactics like spear phishing: targeting people with apparently authentic personal appeals that, when responded to, unleash malware on their system and enterprise.
While the offense has a clear edge over the defense, it is important not to overestimate the capabilities of attackers, which could result in paralysis and an incorrect conclusion that the situation is hopeless. The offense does have a great advantage in being able to generally penetrate systems, exfiltrate data, and perform denial of service attacks. However, achieving specific effects is not as straightforward. An analogy can be made to the contrast between going fishing and catching a specific fish (no pun intended). Designing an attack to target a very specific component of a system—to achieve a specific effect such as altering a command sequence on a satellite—is a very challenging engineering problem. Much of what is happening today consists of relatively broad attacks intended to achieve broad effects.
However, there have been successful attacks to achieve specific effects by advanced persistent threat actors, who have sufficient motivation and resources to develop and conduct precision cyberattacks. For example, several cybersecurity researchers who reverse-engineered components of the widely publicized Stuxnet worm have commented that Stuxnet could only have been developed by a highly skilled team with extensive financial and intelligence resources. Stuxnet attacked supervisory control and data acquisition (SCADA) capabilities governing cyber-physical systems that conduct processes in the real world, and it was reputedly able to damage those systems, disrupting their processes. It is an example of malware whose impact moves beyond cyberspace into the physical world, with potentially deadly consequences. National security space systems are also cyber-physical systems engaged in processes critical to the nation's security, so it is natural and appropriate to be concerned about cyber threats like Stuxnet.
Stuxnet-like attacks are not simple to execute; the attackers are challenged in testing the attacks in a representative environment and understanding the effectiveness of a particular attack after it has been deployed. In this regime of cyber conflict, the defense has significant opportunities to improve its prospects for protection. For example, introducing variability in a particular system may make the design of an attack more challenging. Creating countermeasures that introduce uncertainty for attackers can also be an effective defense, and in some cases, even act as a deterrent.
Still, the challenge of defending national security space systems from Stuxnet-like and other cyberattacks is daunting, especially if the adversary is an advanced persistent threat actor. Recent history has made it clear that these threats cannot be entirely kept out of any system important enough to attack. It is prudent to assume that such adversaries may already be in U.S. space systems, or will eventually be, and therefore the biggest cyber challenge has become what to do once they are in.
According to recent studies by the U.S. Air Force Scientific Advisory Board, the viability and predictability of successful attacks from advanced persistent threat actors mandate that attention be focused on the need for the United States and allied military forces to be able to "fight through and continue to operate" in the presence of attacks on the cyberspace infrastructure. The need for missions to be resilient in the presence of attacks and counterattacks has always been a preoccupation of military strategists and tacticians. However, the difference now is that attacks may be launched and conducted in part or in whole in cyberspace, and many traditional yardsticks by which to measure the resilience of missions (and of the systems they use) are no longer sufficient or even applicable.
Migration from a protection perspective to a resilience perspective requires several key activities. Resilience implies that the functionality of a system will continue despite the challenges that come with an attack. While continuity of missions is a key goal of resilience, continuity at full strength of all aspects of an entire mission is unrealistic—invariably the mission would be somewhat degraded. In this case, one solution might be that some lower-priority tasks have to be discarded—lower performance for certain missions may be acceptable and some “nice to have” sources of data may be discontinued.
Designing for resilience requires a thorough understanding of what the critical cyber components of a system are and how they impact a mission. These could be low-level items such as a database or switch, or a higher-level subsystem, such as command and data handling or a mission planning system. Identifying these elements requires an in-depth understanding of the mission, how it is performed (tactics, techniques, and procedures), the elements of information required to conduct the mission, the interdependencies among those elements, and the cyber components that are necessary to the flow of those elements. In the case of space cyber, analyzing criticality of components requires an intimate knowledge of the satellites, payloads, mission planning software, and the mission effect of the national security space system’s products.
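As a toy illustration of this kind of criticality analysis, the dependency walk below identifies every cyber component that a given mission product transitively relies on. The component names and dependency map are invented for the sketch and are not drawn from any real system:

```python
from collections import deque

# Hypothetical dependency map: each component lists the components it
# relies on. Names are illustrative only, not from any real system.
DEPENDS_ON = {
    "missile_warning_product": ["mission_planner", "telemetry_db"],
    "mission_planner":         ["ground_lan_switch", "telemetry_db"],
    "telemetry_db":            ["ground_lan_switch"],
    "ground_lan_switch":       [],
    "office_email":            ["ground_lan_switch"],
}

def critical_components(mission_outputs):
    """Walk the dependency graph from each mission output and collect
    every component it transitively relies on (including itself)."""
    critical = set()
    queue = deque(mission_outputs)
    while queue:
        node = queue.popleft()
        if node in critical:
            continue
        critical.add(node)
        queue.extend(DEPENDS_ON.get(node, []))
    return critical

crit = critical_components(["missile_warning_product"])
# office_email shares the network but is not mission-critical
assert "office_email" not in crit
assert "ground_lan_switch" in crit
```

The output of such an analysis is what lets resilience tactics (redundancy, isolation) be applied to the components that actually matter, rather than uniformly across the enterprise.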
Aerospace is supporting the Department of Defense in developing policies that extend to these program protection areas. As part of the Mission Assurance Improvement Workshop, Aerospace is working with the government and contractors to develop guidance for acquisition, development, and operations to improve space segment information assurance and mission resilience. Aerospace is also conducting research on the impact to space systems resiliency when trust in critical information is lost in varying degrees as a result of cyberattacks and other threats.
Implicit in mission resilience is that some particular functionality in a system may have to be sacrificed to enhance the continuity of the mission. Limiting the loss of functionality may not always be possible depending on the overall architecture (software and hardware) of a system. Identifying the most critical cyber components enables tactics for resilience to be employed in a cost-effective way, such as introducing redundancy of critical components but not ancillary ones, or architecting systems to allow for separation and isolation of mission functions.
Monolithic systems are quite challenging to secure from cyberattacks because even an attempt to sacrifice some functionalities to save others may not increase security by an appreciable amount. For example, intermixing mission-critical ground segment functions on the same local networks as nonmission-critical functions may not only compromise the security of one function, but also might prevent the implementation of any measures to reconstitute another impaired function. Similarly, the information architecture on spacecraft may depend on a single spacecraft bus to the extent that isolation of compromised payload functions may not be possible, jeopardizing the missions served by the other payloads. The goal is to understand the role of cyber-critical components, allowing for a carefully articulated assurance profile that reflects different degrees of assurance for different elements, rather than one uniform bar that is so high as to be effectively ignored, or so low as to be useless.
In support of national security space customers, Aerospace developed a framework for assessing software architectures to ensure they are being built to meet current and future mission needs. The framework has been extended to include emerging needs for system and mission resilience, especially related to mission resilience in the contested cyberspace domain. This enhanced assessment framework is being applied to ongoing customer programs, and refinements are being introduced based on lessons learned.
One area that is notoriously difficult to secure is conventional Web-based architectures (designed using World Wide Web technologies). To address this challenge, Aerospace is exploring new Web architecture concepts, which are compatible extensions of conventional techniques, and are expected to enable trusted sharing among mutually suspicious networked parties.
One foundational component of mission continuity while under attack is cyber situational awareness. To effectively defend a system there needs to be knowledge that an attack is underway. The words “under attack” evoke thoughts of distributed denial of service attacks coming over a network, but a more accurate definition may be that the system is compromised, and that action by an adversary is having an effect on the system or its information. For example, a system under attack could be one in which data in a system has been altered, or one for which certain command sequences to a satellite have been modified to achieve a desired effect.
Recognizing when such sophisticated attacks are underway is perhaps the greatest challenge of cyber situational awareness. By comparison, recognizing that data is being exfiltrated from a system is a relatively simple task. For example, a rudimentary form of an attack recognition process involves checking the checksum of an executable program to determine if it has been modified. While this primitive check can be easily circumvented, the introduction of a number of simple consistency checks could significantly enhance situational awareness and make it more difficult for compromises to go undetected. However, sometimes understanding the cyber situation proves more challenging. Situational awareness may require the use of multiple sources (trusted to different extents) to identify discrepancies in systems; likewise, warnings and indicators signaling an attack may be underway might require more sophisticated follow-up analyses to confirm the existence and nature of the attack.
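A minimal version of the baseline-and-recheck integrity test described above can be sketched as follows. The use of SHA-256 digests (rather than a simple checksum) and the function names are assumptions of the sketch:

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    """Record known-good digests for a set of files."""
    return {p: sha256_of(p) for p in paths}

def check(known_good):
    """Return the files whose current digest no longer matches the
    known-good baseline, flagging possible tampering."""
    return [p for p, digest in known_good.items()
            if sha256_of(p) != digest]
```

As the text notes, a single check like this is easily circumvented by a capable adversary, but layering many such inexpensive consistency checks across a system raises the cost of keeping a compromise undetected.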
Aerospace has a broad spectrum of research projects underway that are focused on developing techniques and technologies for cyber situational awareness. One project looks at individual satellites and addresses onboard techniques for autonomous threat detection, assessment and recovery, and the design of feasible trusted computing and communication mechanisms on board. A second project focuses on the design of a distributed system-of-systems architecture that enables timely sharing of multiple-source threat/attack data to concurrently generate and update local and global situational awareness pictures and conducts collaborative assessment with tailored information sharing on demand. A third project addresses enterprise-level network anomaly detection, and a fourth explores the use of satellite-based communication to introduce timely trust assessment of routers in a TCP/IP networking architecture.
Resilience in systems also requires the identification and development of countermeasures that can be automatically triggered or put in the hands of system operators. Countermeasures are well understood in the air and maritime domains, but they are not as well understood in the cyber domain. In physical domains, countermeasures are developed to address specific attacks or specific classes of attacks (e.g., heat-seeking surface-to-air missiles). In the cyber domain, countermeasures are rarely focused on specific threats because those threats evolve so rapidly; countermeasures need to be more generic and address broader classes of attacks.
Defensive countermeasures in the cyber domain might involve a simple virus check, or they could be as complex as presenting to the public interface a honeypot or honeynet—a deceptive substitute for the actual system under attack—or modifying the network topology (disconnecting some systems or subnetworks, and reconnecting them only when adequate boundary defenses can be employed). Another possibility involves reconstituting a system on alternate hardware or software, or reconstituting databases from known trusted sources. How to reconstitute systems by automatic or semiautomatic migration of computational and informational objects is an ongoing area of research at Aerospace.
Cyber countermeasures, much like those in the air, terrestrial, and maritime domain, are generally intended to get a system into a configuration that may be degraded in functionality but is more resistant to continued attack. Developing and employing such countermeasures requires a clear understanding of classes of attacks (at different levels), strong knowledge of the critical components of a system that are needed to continue to operate, effective predictive modeling of the potential consequences of employing countermeasures, and decision aid tools for the employment of countermeasures that require human intervention. The choice of which countermeasures to employ may depend on the degree of confidence operators have that the actual cyber situation is well understood, and that the countermeasure will achieve the desired effect.
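The idea that countermeasure selection should scale with operator confidence can be sketched as a simple graduated decision table. The thresholds and actions below are illustrative assumptions for the sketch, not doctrine or any fielded decision aid:

```python
# Illustrative decision aid: map an operator's confidence that the cyber
# situation is understood (0.0-1.0) to a graduated countermeasure.
# Thresholds and actions are assumptions chosen for this sketch.
COUNTERMEASURES = [
    (0.8, "reconstitute affected services from known trusted sources"),
    (0.5, "isolate the suspect subnetwork behind boundary defenses"),
    (0.0, "increase monitoring and run follow-up consistency checks"),
]

def recommend(confidence):
    """Return the most aggressive countermeasure justified by the
    given confidence level (entries are ordered highest threshold first)."""
    for threshold, action in COUNTERMEASURES:
        if confidence >= threshold:
            return action

assert recommend(0.9).startswith("reconstitute")
assert recommend(0.2).startswith("increase monitoring")
```

A real decision aid would fold in predicted mission impact and attack class, but even this shape captures the point in the text: low-confidence situations call for reversible, information-gathering responses rather than disruptive ones.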
This illustrates that an essential component of national security space mission resilience is the vigilant, well-trained operator. While defense of cyber systems will require some autonomous response, human engagement will nearly always be required. Aerospace anticipates that the current organizational distinctions between cyber operations specialists and space system and mission operators will be refined over time to yield more effective and timely responses to adversarial cyber intrusions and attacks. Future national security space systems operators will need significantly greater training in cyber situational awareness, in the understanding and use of countermeasures, and in the ability to use systems with degraded functionality. The Aerospace Institute, the education and training arm of The Aerospace Corporation, is developing a cybersecurity curriculum designed to address some of the needs found at the intersection of space and cyberspace.
Aerospace Report No. TOR-2011(8591)-22, “Space Segment Information Assurance Guidelines for Mission Success” (The Aerospace Corporation, El Segundo, CA, 2011).
D. Alperovitch, “Revealed: Operation Shady RAT. An Investigation of Targeted Intrusions Into More Than 70 Global Companies, Governments, and Nonprofit Organizations During the Last Five Years,” McAfee, http://www.mcafee.com/us/resources/white-papers/wp-operation-shady-rat.pdf (as of Nov. 8, 2011).
W. Hennigan, “Taking iPads Into Battle,” Los Angeles Times, Sept. 25, 2011.
McAfee Labs and McAfee Foundstone Professional Services, “Protecting Your Assets. Lessons Learned from Operation Aurora,” McAfee, http://www.mcafee.com/us/resources/white-papers/wp-protecting-critical-assets.pdf (as of Nov. 8, 2011).
K. Stouffer, J. Falco, and K. Scarfone, Guide to Industrial Control Systems (ICS) Security (National Institute of Standards and Technology, U.S. Department of Commerce, Special Publication 800-82, June 2011).
Technology and Innovation Subcommittee Hearing, “The Next IT Revolution?: Cloud Computing Opportunities and Challenges,” http://science.house.gov/hearing/technology-and-innovation-subcommittee-hearing-cloud-computing (as of Nov. 8, 2011).
United States Air Force Scientific Advisory Board, “Defending and Operating in a Contested Cyber Domain Abstract,” https://www.sab.hq.af.mil/TORs/2008/Abstract_Cyber.pdf (as of Nov. 8, 2011).