Aerospace researchers are pursuing diverse means of endowing space systems with the intelligence, autonomy, and adaptability needed to overcome a range of future threats.
Elizabeth L. Scruggs, John Nilles, Jason P. Andryuk, Kirstie L. Bellman, Josh Train, Philip A. Dafesh, and Brad Wilkins
Emerging technologies have the potential to change the way information assurance and mission resilience are achieved. The Aerospace Corporation has been investigating various technologies that may eventually influence the development of space and cyber systems. Some of these include virtualized computers, protected terrestrial networks, cognitive radios, and biologically inspired mechanisms. These technologies may seem unrelated, but when considered collectively, they suggest the sort of grand synthesis that space and cyber systems will need to achieve in the coming years.
Exploring the Benefits and Risks of Virtualization
Traditionally, computer hardware runs a single operating system, which in turn runs user applications. Virtualization inserts another layer of software, a “hypervisor,” between the hardware and the operating system. The hypervisor implements and manages virtual machines, each of which can host an operating system and its associated applications. The hypervisor allocates the physical hardware to support concurrent operation of multiple virtual machines, and therefore multiple operating systems. Each virtual machine is partitioned from the others to prevent interference.
Virtual machines offer some interesting capabilities. They can be started and stopped on demand, or paused and unloaded. The state of the paused machine can be saved as a snapshot that can later be reloaded and execution resumed. This capability provides a great benefit for forensic operations: if an attack is detected, a snapshot can be created and inspected before any malware has a chance to cover its tracks. Virtual machines have proved invaluable to malware research and analysis because they can be used to create a quarantined area for the study of malware execution. The malware can be observed without risk of spreading or infecting real systems.
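The pause-snapshot-inspect workflow described above can be sketched with a toy model. The classes and state below are purely illustrative, not any real hypervisor API; they show only why a frozen copy of a guest's state preserves evidence that malware later erases in the live machine.

```python
import copy

class VirtualMachine:
    """Toy model of a guest whose state the hypervisor can freeze and copy."""
    def __init__(self, name):
        self.name = name
        self.running = False
        self.memory = {}          # stand-in for guest memory/disk state

    def start(self):
        self.running = True

    def pause(self):
        self.running = False

class Hypervisor:
    """Toy hypervisor: partitions guests and supports forensic snapshots."""
    def __init__(self):
        self.vms = {}

    def create(self, name):
        vm = VirtualMachine(name)
        self.vms[name] = vm
        return vm

    def snapshot(self, name):
        # Pause first so the captured state is consistent, then deep-copy
        # it before malware inside the guest can cover its tracks.
        vm = self.vms[name]
        vm.pause()
        frozen = copy.deepcopy(vm)
        vm.start()
        return frozen

hv = Hypervisor()
vm = hv.create("guest1")
vm.start()
vm.memory["/tmp/dropper"] = "suspicious payload"
evidence = hv.snapshot("guest1")  # forensic copy taken at detection time
vm.memory.clear()                 # attacker wipes tracks in the live guest
print(evidence.memory)            # the snapshot still holds the evidence
```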
While virtualization has many benefits, it also carries some associated risk. For example, what were formerly multiple operating system instances on distinct pieces of hardware become virtual machines on a single physical machine. Thus, a single hardware failure can affect multiple virtual machines.
All software has bugs, and hypervisors are no exception. If an attacker can exploit a bug in the hypervisor, the attacker gains control of all virtual machines hosted on that server. What was formerly the compromise of a single server now extends to many: a single weak virtual machine can put every other machine on the host at risk.
Virtualization is a powerful technology with many applications. It is a fundamental part of cloud computing, providing the flexibility to handle dynamic workloads. But as with any technology, there is a cost in terms of price, performance, and risk. These need to be considered when deciding if virtualization is the correct tool for the job.
Aerospace has been active in the area of virtualization for many years. Initial efforts focused on defining data and system architectures to use as baselines for empirical studies. This work later led to the creation of a virtualization testbed in which to build and test secure virtualized applications. Some Aerospace research has focused on virtualization as a tool and not an end in itself. Researchers are investigating, for example, how functional components could be isolated into virtual machines so that a failure of one would not take down the entire system. In such a setup, malfunctioning components could be restarted with the goal of seamless operation.
Using Satellites to Protect Terrestrial Networks
In addition to research into virtualized machines, Aerospace has been exploring the dynamics and security implications of controlling large networks. The Internet, for example, is not actually a single network, but an aggregation of smaller, autonomously managed networks (known as domains) connected in such a way that a user of one domain can communicate with a user in any other domain. The key to this universal connectivity is that all of the routers—the switching devices that bridge two or more networks—must collaboratively teach each other how best to forward traffic.
The most popular network routing protocols—including the dominant Border Gateway Protocol (BGP), which supports the ubiquitous Internet Protocol (IP)—are considered insecure because no methodology exists for preventing a misconfigured or maliciously motivated agent from generating routing messages that negatively affect the delivery of traffic to and from other network participants. Such misdeliveries can pose serious threats to missions, business operations, and individual privacy. The primary techniques used in these disruptive events include IP address-prefix hijacking and path falsification. Prefix hijacking occurs when a network user maliciously (or accidentally) announces the same set of numerical IP addresses that have already been allocated to another organization, thus redirecting traffic to the malicious user. Path falsification occurs when a router artificially shortens the path length that it sends in its routing message to make itself appear closer to the destination than it really is. Because routing algorithms prefer shorter paths over longer ones, a malicious (or misbehaving) network can pull traffic to itself if it can convince other networks that it is a shortcut to the destination.
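The pull exerted by a falsified short path can be illustrated with a minimal model of route selection. The AS numbers are from the documentation ranges, and reducing BGP's decision process to a single shortest-path metric is a deliberate simplification:

```python
def best_route(announcements):
    """Pick the route with the shortest AS path, as BGP-style selection
    typically does (toy model; real BGP applies many more tie-breakers)."""
    return min(announcements, key=lambda path: len(path))

# Honest announcements for a prefix originated by AS 64500.
honest = [["AS64510", "AS64505", "AS64500"],             # 3 hops to origin
          ["AS64520", "AS64510", "AS64505", "AS64500"]]  # 4 hops to origin
print(best_route(honest))        # the legitimate 3-hop path wins

# A malicious AS falsifies a shorter path to pull traffic toward itself.
falsified = honest + [["AS64666"]]   # claims the destination is 1 hop away
print(best_route(falsified))     # the bogus 1-hop path now wins
```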
Although, in principle, individual networks are free to ignore routing announcements they suspect are fictitious, in practice it is difficult to distinguish legitimate announcements from bogus ones. The problem is that routers and the messages they exchange are elements of a pervasive trusted architecture that is itself not trustworthy. One solution is to provide an independent verification capability that applies to, and is available to, all relevant routers. The challenge comes in designing one that is itself trustworthy and that works quickly enough to be useful.
Trustworthiness can be established through a true out-of-band communication infrastructure and trust-evaluation capability. Timeliness can be maintained by providing a true broadcast capability to all the routers of interest. Overhead assets with direct one-to-many broadcast capabilities provide an immediate candidate solution. Using a broadcast channel has the additional benefit that data cannot be manipulated for one node alone without being manipulated for (and observed by) the entire network.
If a broadcast channel could be used by routers to support the secure distribution of routing information, then the routing vulnerabilities could be eliminated or reduced significantly. For years, the Internet has supported a multicast feature that enables native broadcast to select groups, but the infrastructural support has never matured sufficiently to gain widespread acceptance. Network security researchers at Aerospace realized that a small set of satellite-based broadcast channels could be used to deploy a special out-of-band network for the secure exchange of routing information. Such a broadcast capability would not require the establishment of extensive new infrastructure, but could be hosted on existing satellite communication systems. The bandwidth requirements would be modest, even for the global coverage of the Internet’s border gateways.
In such an architecture, individual networks would send to a centralized collection point information about the destination IP addresses they host and the neighbors they connect to. The information would be verified using a set of authentication, authorization, and collaboration procedures and then rebroadcast to the entire network through a satellite broadcast channel. Subscriber networks would listen to this satellite channel to receive a verified set of routing information and form a local repository of routing information. When new routing messages from neighbors arrive at a particular router, the validity of each message would be checked against the data received from the satellite channel. If it didn’t match, the message would be ignored.
The specifics of the architecture consist of a few collection devices called routing fountains that collect routing information from participating networks through intermediary devices called routing fountain subscribers. Each of the routing fountains and routing fountain subscribers is assigned a unique set of private and public encryption keys so that their identities can be verified. The routing fountain must be provided with the public keys for all of the subscriber nodes, but the subscribers only need to have the public key of the routing fountain. The routing fountain is also provided information from a regional Internet registry regarding which organizations have been allocated what set of IP addresses. Neighboring edge information received at the collection point is corroborated in such a way that if network A claims to be a neighbor of network B, the routing fountain waits until B claims to be a neighbor of network A before forwarding the message.
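The two verification steps described above (checking prefix claims against the registry, and holding neighbor claims until both sides corroborate them) can be sketched as follows. Class and method names are invented for illustration, and a real routing fountain would also sign and broadcast the verified data:

```python
class RoutingFountain:
    """Toy collection point: verifies prefix ownership against a registry
    and corroborates neighbor claims bidirectionally before accepting them."""
    def __init__(self, registry):
        self.registry = registry          # prefix -> legitimately allocated network
        self.pending = set()              # one-sided neighbor claims on hold
        self.verified_edges = set()       # corroborated adjacencies
        self.verified_prefixes = {}

    def claim_prefix(self, network, prefix):
        # Accept a prefix announcement only if the regional Internet
        # registry says this network was actually allocated that prefix.
        if self.registry.get(prefix) == network:
            self.verified_prefixes[prefix] = network
            return True
        return False

    def claim_neighbor(self, a, b):
        # Hold A's claim until B independently claims A as well.
        if (b, a) in self.pending:
            self.pending.discard((b, a))
            self.verified_edges.add(frozenset((a, b)))
            return True
        self.pending.add((a, b))
        return False

registry = {"198.51.100.0/24": "NetB"}
rf = RoutingFountain(registry)
rf.claim_neighbor("NetA", "NetB")                   # one-sided: held as pending
print(rf.claim_neighbor("NetB", "NetA"))            # corroborated -> True
print(rf.claim_prefix("NetB", "198.51.100.0/24"))   # legitimate -> True
print(rf.claim_prefix("NetC", "198.51.100.0/24"))   # hijack attempt -> False
```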
Although this same information could be communicated through a ground-based network, satellites offer several advantages. For example, satellite transmissions can be received by both mobile and fixed networks alike, making them particularly attractive for military applications. A single transmission can reach the entire set of subscribers and can scale to accommodate the dynamics of highly mobile large networks. Because it uses a satellite broadcast channel, the verification data generated from the routing fountain can reach subscribers in a single hop; this greatly reduces the likelihood of the data being manipulated by malicious parties at the network layer. Receivers can be equipped with a minimal set of keys instead of having to possess a key for every other router in the network.
Remarkably, because the topology of the Internet approximates a scale-free graph, significant improvements can be made even when only a small percentage of nodes are protected by this methodology. Aerospace researchers performed extensive simulations on the corporation’s high-performance computing clusters using actual topologies of the existing Internet routing structure (consisting currently of approximately 37,000 routers connected by more than 100,000 links). For these experiments, researchers compared the effectiveness of prefix hijacking in a random attacker-victim pair across the entire Internet topology, and chose different strategies for selecting the nodes that were protected. In the first case, no nodes were protected; in the second, a random selection of 20 percent of the nodes were protected; in the third, the top 1 percent of nodes with the most neighbors were protected; in the last, the top 5 percent of nodes with the most neighbors were protected. The relative effectiveness of the perpetrator was measured by the percentage of nodes that were fooled. The results not only indicate that this approach can diminish the effectiveness of the perpetrator, they also suggest that overall effectiveness depends on which nodes are protected. In this experiment, the best results were achieved by protecting the 5 percent of nodes with the most neighbors, even though the number of protected nodes was significantly less than for the 20 percent randomly selected nodes.
This result highlights that not all nodes have equal influence in a network, and even if only a small number of routers can be equipped with protection, a smart selection strategy can achieve significant results.
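A toy simulation in the spirit of the experiment described above can make the effect concrete. It uses a small preferential-attachment graph in place of the measured Internet topology, and all parameters (graph size, node choices, protection percentages) are illustrative:

```python
import random
from collections import deque

def scale_free_graph(n, m=2, seed=1):
    """Toy preferential-attachment graph: the Internet's domain-level
    topology approximates a scale-free graph of this general shape."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))       # endpoint pool; repetition encodes degree bias
    targets = list(range(m))
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return adj

def fooled_fraction(adj, victim, attacker, protected):
    """A node is fooled when the attacker's bogus announcement reaches it
    in fewer hops than the victim's legitimate one. Protected nodes check
    announcements against the verified broadcast data, so they neither
    accept the bogus route nor forward it."""
    def hops(src, blocked):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist and v not in blocked:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist
    legit = hops(victim, set())
    bogus = hops(attacker, protected)
    fooled = sum(1 for node in adj
                 if node not in protected and node not in (victim, attacker)
                 and bogus.get(node, float("inf")) < legit.get(node, float("inf")))
    return fooled / len(adj)

adj = scale_free_graph(500)
victim, attacker = 400, 450                     # arbitrary low-degree nodes
rng = random.Random(7)
by_degree = sorted(adj, key=lambda node: len(adj[node]), reverse=True)
top5 = set(by_degree[:25])                      # top 5 percent by degree
rand20 = set(rng.sample([n for n in adj if n not in (victim, attacker)], 100))

for label, protected in [("none", set()), ("random 20%", rand20),
                         ("top 5% by degree", top5)]:
    print(label, round(fooled_fraction(adj, victim, attacker, protected), 3))
```

Because protected nodes can only lengthen (never shorten) the attacker's reach, any protection strategy in this model lowers the fooled fraction; which strategy lowers it most depends on which nodes carry the most connectivity.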
A number of open issues remain—for example, how to prevent single points of failure in the protection architecture and how to prevent the architecture itself from being used by network attackers. Aerospace is continuing to explore these areas.
Using Cognitive Radios to Counter Jamming
The U.S. military’s trend toward more combat units composed of smaller teams has generated an unprecedented increase in the number of required communication links.
To meet the increased demand, the military relies on unhardened, commercial satellites. A recent study reported that 21 out of 50 ground-to-commercial-satellite interference incidents could not be accounted for. On the basis of geographic location and transmission type, the authors concluded that intentional jamming was a likely cause. In a sense, jamming represents a denial of the cyber domain and is therefore a form of cyber warfare that targets the physical layer of a communication system.
One approach to mitigating such attacks is to avoid spectral regions where such jammers operate. So-called cognitive radios do exactly that by adjusting their frequency of operation to avoid interference from incumbent users and jammers. When interference or jamming energy is sensed, cognitive radios apply a decision process or set of rules to find a portion of unused spectrum (white space) in which to reestablish communication. This process is the heart of the cognitive radio and takes place without user intervention. Cognitive radios represent an evolutionary step in development from a fixed radio (designed to operate at a fixed frequency allocation and waveform) to a software-defined radio (capable of adapting its communication waveform) to an intelligent device that adapts autonomously in response to a changing radio frequency environment.
In 2006, Aerospace began a multiyear investigation to demonstrate the feasibility of two-way cognitive-radio communication in the presence of jammers and incumbent users. This work involved a set of software-defined radio prototypes developed in-house, capable of adapting their data rates, modulations, coding, and carrier frequencies in real time across a tuning range of more than 1 gigahertz. These research efforts produced a proof-of-concept system that adapted its transmit and receive frequencies to permit full-duplex communication using a cooperative sensing approach. The system employed partial-duty-cycle sensing, whereby interference detection was performed during periodic gaps in the transmission (e.g., sensing for 5 percent of the time and transmitting for the remaining 95 percent).
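The sense-then-retune decision and the partial-duty-cycle schedule can be sketched in a few lines. The thresholds, channel numbers, and power levels below are invented for illustration:

```python
def sense_spectrum(power_dbm, noise_floor_dbm=-100, margin_db=6):
    """Toy energy detector: channels whose measured power sits near the
    noise floor are treated as unused white space."""
    return [ch for ch, p in power_dbm.items()
            if p < noise_floor_dbm + margin_db]

def choose_channel(power_dbm, current):
    """Stay put if the current channel is clean; otherwise hop to the
    quietest white-space channel (None if the whole band is occupied)."""
    white = sense_spectrum(power_dbm)
    if current in white:
        return current
    return min(white, key=lambda ch: power_dbm[ch]) if white else None

def frame_schedule(n_slots=100, sense_slots=5):
    """Partial-duty-cycle sensing: sense during periodic gaps (here 5
    slots out of 100), transmit for the remainder."""
    return ["sense" if i < sense_slots else "tx" for i in range(n_slots)]

measured = {ch: -99 for ch in range(10)}   # quiet band...
measured[3] = -60                          # ...except an incumbent user
measured[7] = -55                          # ...and jammer energy
print(choose_channel(measured, current=3)) # jammed: hop to a quiet channel
```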
This work demonstrated the viability of a cognitive radio sensing and transmission protocol that is very similar to the one eventually adopted by the IEEE 802.22 standard for a wireless regional area network using white space in the TV frequency spectrum.
Subsequent Aerospace research sought to develop a more comprehensive system that varied other signaling dimensions. Initial steps focused on adapting the carrier frequency, while later work broadened to include modulation type and data rate. These efforts led to the development of techniques for autonomous detection and recognition of modulation and data-rate changes using techniques such as cyclostationary detection, constellation matching, and spectral-curve fitting.
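Constellation matching, one of the recognition techniques named above, can be sketched as a nearest-constellation-point classifier. This toy version is restricted to BPSK and QPSK and assumes ideal symbol timing and normalization:

```python
import cmath
import math

# Reference constellations (unit-magnitude points); an illustrative subset.
BPSK = [1, -1]
QPSK = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]

def classify(samples):
    """Toy constellation matching: choose the constellation whose nearest
    points minimize the total distance to the received symbols."""
    def cost(points):
        return sum(min(abs(s - p) for p in points) for s in samples)
    return min([("BPSK", BPSK), ("QPSK", QPSK)], key=lambda c: cost(c[1]))[0]

# Received symbols near two QPSK points, with small amplitude error.
rx = [cmath.exp(1j * math.pi / 4) * 1.02,
      cmath.exp(1j * 3 * math.pi / 4) * 0.97]
print(classify(rx))   # -> "QPSK"
```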
While cognitive radios can improve the ability to defend against jamming attacks, the decision-making process that they use presents opportunities for a new form of cyberattack from cognitive jammers that targets the radio’s ability to make a correct decision. Moreover, the software-defined aspect of cognitive radios makes it feasible for adversaries to induce malicious operation if the radio’s microprocessor software and FPGA firmware are not adequately authenticated or encrypted.
To improve antijam performance and address these weaknesses, Aerospace conducted further research that led to the invention of the Cognitive Antijam Receiver System (CARS). Implementations of CARS were developed in MATLAB and applied to mitigate performance degradation in a direct-sequence spread-spectrum receiver (such as those employed by GPS). These implementations effectively mitigated real-world interference in the GPS L5 band and showed that CARS was significantly more effective at countering jammers than fixed antijam techniques such as frequency excision or pulse blanking. Unlike a conventional cognitive radio, CARS does not attempt to mitigate interference by avoiding it. Instead, CARS analyzes the radio frequency spectrum, including the specific characteristics of the signal and the interference, and adapts the receiver's antijam processing approach to most effectively mitigate that interference. As such, CARS can counter physical-layer cyberattacks even when the jammer cannot be avoided. Moreover, CARS is the only approach known today that can effectively counter future cognitive jammers that target cyber domain weaknesses in communication systems.
A central component of CARS is an adaptive windowing technique for handling hybrid or rapidly changing interference, combined with the application of antijam processing to the complete set of sensed characteristics of the jammer and the signal of interest (i.e., modulation type, frequency extent, time extent, etc.). Using this technique, a processing module first analyzes a received signal and classifies the sample segments based on the existence and type of interference. In this signal analysis, the time and extent of the jammer are determined, and the excision processing is adapted to most effectively mitigate the effects of the jammer while minimizing the effects on the signal of interest.
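The per-segment classify-then-excise idea can be illustrated with a toy pulse-blanking example. The power threshold and segment contents are invented, and a real implementation would operate on sampled RF data with far richer classification:

```python
def classify_segment(samples, power_threshold=4.0):
    """Toy per-segment classifier: flag segments whose average power
    jumps well above the quiescent signal level (pulsed jamming)."""
    avg_power = sum(abs(x) ** 2 for x in samples) / len(samples)
    return "pulsed" if avg_power > power_threshold else "clean"

def adaptive_excision(segments):
    """Adapt the mitigation per segment: blank (zero) only the segments
    classified as jammed, leaving the signal of interest untouched."""
    out = []
    for seg in segments:
        if classify_segment(seg) == "pulsed":
            out.append([0.0] * len(seg))   # pulse blanking on this window
        else:
            out.append(list(seg))          # pass the clean window through
    return out

signal = [[1.0, -1.0, 1.0, 1.0],           # clean segment
          [9.0, -8.5, 9.2, -9.1],          # pulsed jammer present
          [1.0, 1.0, -1.0, -1.0]]          # clean segment
print(adaptive_excision(signal))           # only the middle window is blanked
```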
Aerospace’s efforts have shown that this approach may also be generalized to excision in other dimensions such as a signal’s cyclic Wiener frequency. Excision in this dimension incorporates cyclostationary techniques that distinguish the modulation of the signal of interest from that of the jammer signal. With the addition of Wiener-based excision, CARS demonstrated the excision of even the more problematic matched-spectral jamming (in which the spectrum is overlaid with that of the desired signal, preventing the use of frequency-based excision without excising the signal itself).
A patent is pending for the CARS method. In the meantime, researchers are investigating how the technology might be applied to serve the needs of specific defense programs.
Using Biology to Inspire Cybersecurity Strategies
A primary benefit of cognitive radios is their ability to autonomously monitor their operational environment and adapt accordingly. Aerospace has been investigating how the study of biological systems can lead to greater autonomy and adaptability in cyber systems. More specifically, researchers are examining whether the means by which a biological system recognizes and protects itself can have implications for cybersecurity.
The term “self” involves much more than recognition of what is or is not a part of a system; it also presupposes an ability to generate and act on objectives and to evaluate performance. A degree of self-awareness, or self-reflection, is critical to adaptive systems. Self-reflection means more than self-monitoring, and it does not necessarily imply consciousness. Computational reflection can be defined as an engineered system’s ability to reason about its own resources, capabilities, and limitations in the context of its current operational environment. This extends to the system’s own reasoning, planning, and decision processes. Reflection can range from simple adjustments of parameters or behaviors (for example, altering the step size on a numerical process or the application of rules governing which models are used at different stages in a design process) to sophisticated analyses of the system’s own decision-making processes (for example, noticing when an approach to a problem is not working, or correcting the poor implementation of a plan).
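In the spirit of the step-size example above, a minimal illustration of a process reasoning about the quality of its own output is a quadrature routine that evaluates its own error estimate and refines itself until the result is trustworthy:

```python
def integrate(f, a, b, tol=1e-6):
    """Minimal self-adjusting trapezoid-rule integrator: the routine
    monitors its own convergence and halves its step size whenever its
    current answer is not yet good enough -- a simple form of a process
    evaluating the quality of its own output."""
    def trapezoid(n):
        h = (b - a) / n
        return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

    n = 4
    prev = trapezoid(n)
    while True:
        n *= 2
        cur = trapezoid(n)
        if abs(cur - prev) < tol:   # self-evaluation: is the answer stable?
            return cur
        prev = cur                  # not yet: refine and try again

result = integrate(lambda x: x * x, 0.0, 1.0)
print(result)   # approximately 1/3
```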
Aerospace has developed an approach for achieving computational reflection known as “Wrappings.” In this approach, machine-interpretable descriptions of a system’s resources and algorithms are used to organize and manage the system’s response to problems. Developed in 1988, the method grew out of work in conceptual design environments for space systems that had hundreds of models and computational components.
The approach requires explicit, machine-processable, qualitative information (the Wrappings) about the system components, architecture, and processing elements (the resources). These Wrappings describe not just how to use particular resources, but also whether and when and in what kinds of combinations they should or can be used. The idea is not to wrap a resource for all purposes, but rather to wrap different uses of a complex resource. This allows an incremental strategy. Combinations of resources that operate together are also wrapped, as are the algorithms and programs that support the wrappings processes that lead to the machine-interpretable model of itself (the computational reflection).
Algorithms known as problem managers use the Wrappings descriptions to collect and select appropriate resources. The problem managers are also resources, and they are also wrapped. All interpretation and performance activities are managed by problem managers. They use the Wrappings descriptions to determine which resources to use, how to combine them, and how to organize the system’s computational resources in response to problems posed to it by users (who can be either computing systems or humans).
Problem managers use the metaknowledge in the Wrappings to help organize resource selection, combination, execution, and evaluation. The resources may be part of a cycle of activity called the coordination manager. Some resources request problems from the user (pose problem), others organize the study of the posed problem (study problem), and others collect and integrate the results (assimilate results) into the growing operational context (find context). Study operations are part of the study manager resource, which in its default form is a simple sequence of steps.
The power of computational reflection is that each of these steps is actually a posed problem to which any of a number of resources may be applied in context. That means that in the appropriate user-defined context, other specialized resources may be used instead of these default ones. That design choice allows enormous flexibility in the operation of a Wrappings-based system.
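A highly simplified sketch of the idea: resources carry machine-readable descriptions of when they apply, and a problem manager selects among them given a posed problem and the current context. All names and fields here are invented, and real Wrappings are far richer than this:

```python
# Toy Wrappings sketch: each resource is wrapped with a description of
# what problems it solves and in what context it should be used.
resources = [
    {"name": "coarse_planner", "solves": "plan",
     "context": {"time": "short"},
     "run": lambda problem: f"quick plan for {problem}"},
    {"name": "detailed_planner", "solves": "plan",
     "context": {"time": "ample"},
     "run": lambda problem: f"detailed plan for {problem}"},
]

def problem_manager(problem_type, problem, context):
    """Select a wrapped resource whose description matches the posed
    problem and the current operational context, then apply it."""
    for res in resources:
        if res["solves"] == problem_type and all(
                context.get(k) == v for k, v in res["context"].items()):
            return res["run"](problem)
    raise LookupError("no wrapped resource applies in this context")

# Same posed problem, different contexts -> different resources selected.
print(problem_manager("plan", "route survey", {"time": "short"}))
print(problem_manager("plan", "route survey", {"time": "ample"}))
```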
In this approach, everything gets wrapped: analyses, tools, data, user interfaces, databases, platforms, operating systems, devices—even the problem managers. The wrapped resources and their interactions allow, in essence, a simulation of the system. This allows sophisticated instrumentation and adaptive processing. The ability of the system to analyze and modify its own behavior provides the power and flexibility of resource use. These ideas have proved useful in several application areas, even when implemented and applied in informal and ad hoc ways.
The benefits of system self-reflection are numerous. First, a system can more effectively respond to environmental inputs if it can place itself in a context. It can also reason about its extent and boundaries, which can be essential in evaluating the results of events and actions (and hence in interpreting feedback). With explicit knowledge about its own capabilities, state of knowledge, or deficiencies and injuries, the system can better reason about mitigation strategies or alternative plans. Such self-knowledge can be shared for better coordination with others, including intents and goals, or for reporting state and decisions to operators or designers. Furthermore, reflection can help provide the needed perspective in a complex system that avoids local optimization by subparts and promotes local responsiveness along with the means for more global strategic analysis.
One of biology’s protective strategies is to provide intermediate layers or buffers between events outside and inside a system and between the system’s own commands and behavior. These buffers allow monitoring and adjustment, which facilitates resiliency and adaptation. Resource Wrappings can serve the function of these intermediate layers or buffers in an engineered system. They also enable dynamic, real-time insertion of new sensing, monitoring, and processing capabilities, enhancing a system’s ability to identify threats and to respond in new ways.
Although the Wrappings approach has not yet been applied directly to cybersecurity, Aerospace is using Wrappings in an ongoing experimental testbed involving an adaptive system of robotic cars. This experiment, performed in conjunction with researchers at California Polytechnic State University, is evaluating how robotic cars, equipped with six types of sensors, perform in four different scenarios with the same control infrastructure while adjusting for sensor and component failures. The reflective and adaptive capabilities provided by the Wrappings approach enable the cars to alter their approach to different problems and to modify their sensor use and game-playing strategies.
One lesson from Wrappings and reflection for fault maintenance (and hence, cybersecurity) is that self-monitoring cannot just be monitoring for signs of disruption; rather, the system must also have clear goals (problems to overcome) and clear expectations of what constitutes success or proper functioning. Hence, in the car example, to correctly identify a failure, each car needed to assess both what was occurring—and not occurring—among all of its sensors and reasoning processes; together this information provided the necessary convergent evidence of a failure. Reasoning in a similar fashion about cybersecurity attacks and mitigation strategies may well provide some new fruitful directions.
As these research efforts suggest, adaptability is a vital component of cybersecurity and mission resiliency. The goal is for a system to be even more adaptable than the tools of its attackers. But while the goal may be clear, the best strategies for achieving it are not. Aerospace work in this area is by necessity diverse, largely because the challenges and threats continue to evolve and are not always easy to predict. Moreover, research in a seemingly unrelated area may have surprising applications in the realm of cybersecurity. Success may depend on being able to recognize all possibilities and bring them to their full potential.
The authors would like to thank Joseph Bannister of the Computer Science and Technology Subdivision and Alan Foonberg of the Communications and Network Architecture Subdivision for their contributions to the section on satellite-based network security.
H. Arslan, Cognitive Radio, Software Defined Radio, and Adaptive Wireless Systems (Springer, Dordrecht, The Netherlands, 2007).
K. Bellman, “Self-Conscious Modeling,” IT—Information Technology, Vol. 47, No. 4, pp. 188–194 (Oldenbourg Verlag, Munich, 2005).
J. Burbank, “Security in Cognitive Radio Networks: The Required Evolution in Approaches to Wireless Network Security,” Third International Conference on Cognitive Radio Oriented Wireless Networks and Communications (May 2008).
K. Butler, T. Farley, P. McDaniel, and J. Rexford, “A Survey of BGP Security Issues and Solutions,” Proceedings of the IEEE, Vol. 98, No. 1, pp. 100–122 (Jan. 2010).
“Details Emerge on YouTube Block,” BBC News (Feb. 27, 2008); http://news.bbc.co.uk/2/hi/technology/7266600.stm.
C. Landauer and K. Bellman, “Active Integration Frameworks: The Wrapping Theory,” Proceedings of the 1st IEEE International Conference on Engineering of Complex Computer Systems (IEEE Computer Society, Washington, DC, 1995).
C. Landauer and K. Bellman, “Generic Programming, Partial Evaluation, and a New Programming Paradigm,” Chapter 8 in Software Process Improvement, G. McGuire, ed. (IGI Global, Hershey, PA, 1999).
A. Mody, R. Reddy, T. Kiernan, and T. Brown, “Security in Cognitive Radio Networks: An Example Using the Commercial IEEE 802.22 Standard,” Proceedings of 2009 IEEE MILCOM (Oct. 2009).
H. Rausch, “Jamming Commercial Satellite Communications During Wartime: An Empirical Study,” 4th IEEE International Workshop on Information Assurance (2006).
“Staring into the Gorge: Router Exploits,” Renesys blog (Aug. 19, 2009); www.renesys.com/blog/2009/08/staring-into-the-gorge.shtml.
J. Train, J. Bannister, and C. Raghavendra, “Routing Fountains: Leveraging Wide-Area Broadcast to Improve Mobile Inter-Domain Routing,” Proceedings of 2011 IEEE MILCOM (Baltimore, Nov. 2011).
J. Train, B. Etefia, and H. Green, “Hub and Spoke BGP,” Proceedings of 2010 IEEE Aerospace Conference (Big Sky, MT, Mar. 2010).