Securing Computer and Data Networks
Internet and military data networks are under constant assault by a wide range of cyberattacks. One such attack on the networks of a large U.S. defense contractor demonstrates the magnitude of this problem: Its networks were compromised after adversaries replicated the rotating passwords of RSA SecurID hardware tokens used by its employees. An earlier attack on—and data exfiltration from—the RSA corporate network made the network compromise possible. These sophisticated attacks pose a risk to the networks and computers of The Aerospace Corporation and those of its customers.
In many of these types of cyberattacks, adversaries (and their malicious software, also known as malware) will linger in infected computers or networks for long periods of time. By installing a command and control network connection back to their operations center, the adversaries can monitor the activity of victims and plan future malicious activity. These types of attacks have been aptly named advanced persistent threats. Many research organizations and commercial companies, including The Aerospace Corporation, have been investing resources into threat detection, reverse engineering of malware to understand its effects, and techniques to remediate infected computers and networks.
“At Aerospace we are developing improved network traffic analysis techniques to be used in defense of our computers and networks,” said Bob Lindell, senior project leader in the Computer Science and Technology Subdivision. “Rather than working with synthetic datasets, like many security researchers do, we are developing techniques by analyzing real-world Internet traffic generated by Aerospace. We daily process approximately 100 gigabytes of network traffic and compute a set of discriminators that can be used to differentiate network traffic, such as e-mail, Web transactions, plain text, and cipher text. Our ultimate goal is to refine and use these discriminators to separate malicious cyberattack traffic from normal traffic.”
Lindell is the principal investigator of a research team that is developing methodologies to discover and prevent advanced persistent threats. The team, which includes Joe Bannister and Jim Gillis, also of Computer Science and Technology, and Eric Coe and Nehal Desai of the Computer Systems Research Department, focuses on finding low-profile, stealthy traffic entering and leaving the Aerospace network. Lindell explained that this could be control messages using an interactive backdoor exploited by an adversary to monitor, administer, or stage data files to be processed and later sent to a remote destination. Or, it could be the explicit exfiltration of information itself. “The team does not generally look for gross perturbations of network traffic that might occur during a denial of service because many ways already exist to identify such attacks,” Lindell said. “The fact that these attacks cause users to experience loss of service often suffices to alert system administrators to the problem. Many Aerospace users have been affected by slow network response time during denial-of-service attacks.”
On any given day, Aerospace users make approximately nine million distinct network connections to other computers on the Internet. One idea researchers have is to filter out connections to well-known servers, such as Google, Yahoo, and Facebook, and analyze what remains. However, those servers alone account for thousands to millions of connections, so this filtering still leaves far too much data to analyze effectively. Approximately 25 percent of all network connections communicate with a destination contacted only once each day. Viewed as a probability distribution, this “long tail” in the distribution of unique Internet destinations makes it nearly impossible to find bad connections to obscure destinations.
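The long-tail effect is easy to see in code. The following sketch counts how many of a day’s connections go to a destination that appears exactly once in the log; the hostnames and counts are synthetic, invented purely for illustration:

```python
from collections import Counter

def unique_destination_fraction(connections):
    """Fraction of connections whose destination appears exactly once
    in the day's log -- the "long tail" of obscure destinations."""
    counts = Counter(connections)
    singletons = sum(1 for dest, n in counts.items() if n == 1)
    return singletons / len(connections)

# Synthetic example: a few popular servers plus many one-off destinations.
log = (["google.com"] * 6 + ["facebook.com"] * 4
       + [f"host{i}.example" for i in range(10)])
print(unique_destination_fraction(log))  # 10 singletons out of 20 -> 0.5
```

Even after the two popular servers are filtered out, each remaining connection in this toy log points at a different host, which is exactly what makes obscure malicious destinations hard to single out.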
The Aerospace team realizes the challenge involved in discovering subtle behavioral differences among encrypted traffic that is sent and received from Internet sources. The team developed a set of discriminators that can measure differences between network traffic that is used for secure online purchases (e.g., Amazon) and Skype, a peer-to-peer protocol used for Internet phone calls (VoIP) and file transfers. Skype is an undocumented commercial protocol that uses encryption, but has subtle characteristics that differ from other secure Web traffic. While not permitted on the Aerospace network, Skype does appear in the network traces from time to time. “Given that it rarely appears in the data, and that its network behavior and protocol are not well understood, it was particularly interesting that a subset of discriminators we have been investigating provided some anomaly detection capability for Skype. We believe additional combinations of discriminators will ultimately detect other types of stealthy anomalous traffic in the datasets we are analyzing,” Lindell said.
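The article does not disclose the team’s actual discriminators, but the general idea can be sketched: compute per-flow statistics that separate, say, a VoIP-like flow of small, evenly spaced packets from a bulk HTTPS download. All feature names and the 200-byte threshold below are illustrative assumptions, not the team’s feature set:

```python
import statistics

def flow_discriminators(packet_sizes, interarrival_times):
    """Compute a few illustrative per-flow discriminators of the kind
    used to tell traffic classes apart. The actual Aerospace feature
    set is not public; these statistics are stand-ins."""
    return {
        "mean_size": statistics.mean(packet_sizes),
        "size_stdev": statistics.pstdev(packet_sizes),
        "mean_gap": statistics.mean(interarrival_times),
        "small_pkt_ratio": sum(1 for s in packet_sizes if s < 200)
                           / len(packet_sizes),
    }

# VoIP-like flow: many small, evenly spaced packets.
voip = flow_discriminators([160] * 50, [0.02] * 49)
# Bulk HTTPS-download-like flow: mostly large packets sent in bursts.
web = flow_discriminators([1500] * 40 + [80] * 5, [0.001] * 44)
print(voip["small_pkt_ratio"], web["small_pkt_ratio"])  # VoIP is all small packets
```

Even though both flows may be encrypted, simple shape statistics like these differ sharply between them, which is what makes discriminator-based anomaly detection plausible for protocols such as Skype.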
In the financial industry, fraud detection is a well-known anomaly detection problem. When people try to embezzle money, their modifications to bookkeeping records are intentionally subtle, with the goal of going undetected. One method used to detect this behavior is Benford’s Law, which is an observation that the leading digits from a sampling of numbers derived from real-world sources of data have a nonintuitive distribution. For example, if the heights of buildings in a city are collected, the leading digit of “1” will occur about 30 percent of the time. Naively, one might have expected each of the digits 1 through 9 to appear about 11 percent of the time, as in a uniform distribution. In the real world, Benford’s Law is usually indicative of an underlying distribution that is log-normal. When people intentionally modify bookkeeping records, most are unaware of this nonintuitive distribution of the leading digit of transaction values. These falsified entries tend to skew the distribution away from a log-normal distribution and can be detected efficiently through the use of Benford’s Law.
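Benford’s Law predicts that the leading digit d occurs with probability log10(1 + 1/d), giving about 30.1 percent for 1 down to 4.6 percent for 9. The prediction can be checked against synthetic log-normal data in a few lines of Python (the distribution parameters below are arbitrary choices for illustration):

```python
import math
import random

def benford_expected(d):
    """Benford's Law: P(leading digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

# Log-normally distributed data (like building heights or file sizes)
# should follow Benford's Law closely.
random.seed(1)
samples = [random.lognormvariate(0, 3) for _ in range(100_000)]
freq_of_1 = sum(1 for s in samples if leading_digit(s) == 1) / len(samples)
print(f"digit 1: observed {freq_of_1:.3f}, Benford predicts {benford_expected(1):.3f}")
```

The observed frequency of a leading 1 lands near the predicted 0.301, while a naive uniform guess would put it at about 0.111.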
“At Aerospace, we attempted a novel application of Benford’s Law to analyze network traffic. Based on our analysis, the amount of data downloaded from the Internet, from each inbound network connection, agrees closely with Benford’s Law. Intuitively and in retrospect this is not surprising, but in over 35 years of network experimental research it had never before been documented. Similar to other real-world data, such as the heights of buildings, or the lengths of rivers, the file sizes of information stored in the Internet also display a log-normal distribution. While we continue to refine this approach, a Benford’s Law-based network detector thus far does not appear to be sensitive enough to detect low-profile, stealthy attacks, such as remote backdoors,” Lindell said.
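One plausible way to turn this observation into a detector is a goodness-of-fit test on the leading digits of per-connection download sizes. The sketch below uses a Pearson chi-square statistic; the article does not say which statistical test the team used, and the traffic here is synthetic:

```python
import math
import random

BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def leading_digit(x):
    """First significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi2(byte_counts):
    """Pearson chi-square statistic of the leading-digit distribution of
    per-connection download sizes against Benford's Law. Large values
    suggest traffic that deviates from the usual log-normal mix."""
    n = len(byte_counts)
    observed = [0] * 9
    for b in byte_counts:
        observed[leading_digit(b) - 1] += 1
    return sum((observed[i] - n * BENFORD[i]) ** 2 / (n * BENFORD[i])
               for i in range(9))

random.seed(7)
normal = [random.lognormvariate(8, 3) for _ in range(20_000)]  # Benford-like sizes
odd = [random.uniform(1, 9.999) for _ in range(20_000)]        # flat leading digits
print(benford_chi2(normal), benford_chi2(odd))  # small vs. very large
```

A flow mix with uniform leading digits produces a statistic orders of magnitude larger than log-normal traffic; the difficulty Lindell describes is that a stealthy backdoor adds so few connections that it barely perturbs this aggregate statistic.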
Beginning this year, the team started exploring how machine learning algorithms can be used to cluster and discriminate between different types of network flows. “Our goal for next year is to further develop this technique to detect unwanted interactive traffic that may be traversing our network path to the Internet. While other researchers are focused on signature-based methods, or are looking at detecting the next worm spread, we will continue to focus our research efforts in the area of detecting the advanced persistent threat of stealthy backdoor exfiltration methods,” Lindell said.
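The article does not name the team’s algorithms or features. As one illustration of clustering network flows, here is a minimal k-means sketch over two invented flow features (mean packet size and connection duration) with synthetic data; it is a toy, not the team’s method:

```python
import random

def kmeans(points, k=2, iters=20):
    """Minimal k-means over 2-D flow feature vectors. Deterministic
    initialization spreads the starting centers across the sorted data."""
    pts = sorted(points)
    centers = [pts[(len(pts) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two synthetic flow populations: bulk transfers vs. short interactive sessions.
rng = random.Random(1)
bulk = [(rng.gauss(1400, 50), rng.gauss(30, 5)) for _ in range(100)]
interactive = [(rng.gauss(100, 20), rng.gauss(300, 40)) for _ in range(100)]
centers, clusters = kmeans(bulk + interactive)
print(sorted(len(c) for c in clusters))
```

With well-separated feature distributions the two populations fall cleanly into separate clusters; the research challenge is that interactive backdoor traffic is engineered to look like the benign cluster.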
Most of the subsystems on spacecraft function via some degree of software or computer control, and thus are susceptible to cyberattack. In an effort to understand and combat such threats, researchers at The Aerospace Corporation and the Air Force Space and Missile Systems Center have been developing a spaceflight processing testbed. The goal is to investigate cyber threats to national security space systems, identify vulnerabilities, and develop defensive techniques.
The research team, led by Todd Kaiser of the Software Systems Analysis Department with coinvestigators Daniel Balderston of Software Systems Analysis and John Nilles of Cyber Engineering, faces a set of unprecedented challenges in developing ways to safeguard spacecraft, and the terrestrial systems they rely upon, from cyber threats. “The number of potential susceptibilities is exceptionally large, almost without limit,” Balderston said.
For example, national security spacecraft are often built with common bus designs or architectures. Because these common designs are used on multiple missions, the discovery and exploitation of a vulnerability could jeopardize an entire spacecraft fleet. “We are exploring inherent vulnerabilities in these common elements, such as LEON and RAD-750 processors, operating systems, MIL-STD-1553 buses, and SpaceWire data links,” Balderston said.
The trustworthiness of commercial components is another issue. “As the number of commercial-off-the-shelf flight components and vendor suppliers—particularly lower-tier foreign suppliers—increases, there are concerns of malicious code being embedded in processing boards, software, or digital electronic components, which could become the source of a cyberattack,” Balderston said. “The research team is assessing common spacecraft and payload components to understand the risks and countermeasures or other mitigations to address these risks. These risks could become widespread, and potentially pose a fleet-wide vulnerability.”
Spacecraft fault management is a significant area of concern. Researchers are exploring the possibility of implementing affordable autonomous response capabilities that would not require major changes to the system architecture. “We are exploring how to react to a space system anomaly. A key challenge is differentiating whether the anomaly is due to a natural fault or a cyber event. This may drive requirements for additional telemetry or enhanced ground procedures to support cyber anomaly resolution,” Balderston said.
The flight cyber testbed consists of two main segments: the test unit and the testbed infrastructure. Tests have so far focused on generic real-time operating systems, processors, and common flight data buses, including a single onboard computer system with a MIL-STD-1553 bus and a generic field-programmable gate array (FPGA). Other items tested include an Air Force Research Laboratory plug-and-play avionics computer, a Boeing Colony II CubeSat, and an Air Force space-vehicle emulator that includes bus-flight software.
The testbed infrastructure offers overall test monitoring and control. Together with the test unit, it functions as a tabletop satellite, with sensors and actuator models, an optional payload model, and a communications subsystem. It provides a common environment for testing and analysis that supports elements of all units being tested, thereby reducing costs when new units are added.
The near-term priority is on developing real-time operating system solutions, fault-management augmentation, flight-software audit and run-time monitoring, data bus mitigations, and onboard data protection (e.g., embedded encryption).
The team is also investigating software tools and techniques for static and dynamic attack detection, remediation, and countermeasures to determine their efficacy at the time of attack as well as during the various phases of system development, operation, and maintenance. A selected set of onboard mitigations and countermeasures is being prototyped to research trade-offs of implementing operational cyber-resilient elements on the spacecraft. “One key premise of the test effort is the assumption that unauthorized access to the spacecraft has occurred,” Balderston said. “We are researching how the attack could have been detected and what affordable countermeasures, mitigations, or responses could be taken to operate through an attack in the future, or how to otherwise maintain mission effectiveness.”
The research also involves prototyping an onboard sensor system for space situational awareness that could be applied to the current fleet of space vehicles. The intent is to provide an initial capability for space systems to sense cyber-related events and either take simple autonomous action or report the events to the ground for further assessment. “We want to explore elements and trade-offs of a more cyber-resilient space vehicle architecture that enhances overall mission success and will continuously adapt to evolving threats,” Balderston said.
This flight-cyber initiative has created an important capability for space segment vulnerability testing as well as a platform to develop affordable mitigations to cyberattacks. It provides empirical vulnerability and countermeasure data to programs and organizations that develop and support national security space systems.
“Although flight cyber defense is still in its infancy, immediate contributions to national security space systems in all phases of acquisition are expected from this research effort, including on spacecraft already launched and those in operations. We hope this effort will stimulate awareness and collaboration among subject matter experts and stakeholders in related technology domains, and ultimately lead to development of affordable cyber resilient flight systems that ensure mission success,” Balderston said.
Cyberspace Command and Control and Battle Management
Fast and reliable situational awareness is vital to support command decisions. This is true in space as well as in the cyber domain. One challenge for cyber operations is to generate timely situational awareness across a geographically distributed enterprise. Considerable research has been done in distributed computing—but applying these advances to cyberspace command and control has proved difficult for national space systems.
Information assurance efforts have typically focused on developing technologies to detect and analyze a cyberattack after it has occurred, with less consideration given to performance in a distributed computing environment. Meanwhile, cyberattacks have exposed the shortcomings of relying exclusively on information assurance—a defense-in-depth strategy—and a single defensive computer network for many space systems. Defense-in-depth has been somewhat effective in securing systems, but has not been adequate in addressing the operational realities of cyberattacks, which must be confronted in real time. The current strategy lacks a battlespace view, which must include the full spectrum of cyber operations.
John Sarkesain, senior engineering specialist in the Cyber Engineering Department, is leading a research project at The Aerospace Corporation exploring cyberspace situational awareness to support cyber command and control and battle management (C2/BM) for space systems. “One critical need in this area is better correlation and distribution of cyber-sensor data in near real-time to generate accurate distributed situational awareness across an enterprise,” he said. Sarkesain is working with team members Jandria Alexander and Meredith Hennan of the Cyber Engineering Department and Donald Lanzinger of Network Systems to develop a testbed that would allow investigation of promising methods and technologies.
Initial research has focused on developing DOD architectural framework operational and system models. The team modeled five geographically distributed domains, each analogous to a distributed area of responsibility or networked computing environment. One domain serves as the resident area of responsibility for the cyber commander, and the other four are peers. They perform similar and related kinetic missions at a tactical level, or they represent different combatant commanders at different global operational levels. The battlespace for cyber operations is almost always globally defined, so near real-time partnering and information sharing is a critical C2/BM requirement.
The framework is modeled after a simple distributed correlation architecture that detects an intrusion and correlates it locally to generate a local situational awareness picture. It shares each local picture with the commander’s domain as it is generated. Key architectural patterns employed include peer processing, distributed data-space sharing, and publish-and-subscribe messaging.
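These architectural patterns can be illustrated with a toy publish-and-subscribe model. All class names, topic strings, and alert messages below are invented for illustration and stand in for the high-performance messaging layer the team is evaluating:

```python
from collections import defaultdict

class Bus:
    """Toy publish-and-subscribe message bus."""
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)
    def publish(self, topic, msg):
        for handler in self.subs[topic]:
            handler(msg)

class Domain:
    """A distributed area of responsibility: correlates its own sensor
    alerts into a local picture and shares it as it is generated."""
    def __init__(self, name, bus):
        self.name, self.bus, self.alerts = name, bus, []
    def sensor_alert(self, alert):
        self.alerts.append(alert)
        # Local correlation (trivially, the alert list), then publish.
        self.bus.publish("local_picture",
                         {"domain": self.name, "alerts": list(self.alerts)})

class Commander:
    """Commander's domain: merges local pictures into a global one."""
    def __init__(self, bus):
        self.global_picture = {}
        bus.subscribe("local_picture", self.on_local_picture)
    def on_local_picture(self, picture):
        self.global_picture[picture["domain"]] = picture["alerts"]

bus = Bus()
commander = Commander(bus)
domains = [Domain(f"AOR-{i}", bus) for i in range(4)]
domains[0].sensor_alert("port scan from 198.51.100.7")
domains[2].sensor_alert("anomalous outbound flow")
print(commander.global_picture)
```

Each peer domain publishes its local picture the moment it changes, and the commander’s subscription keeps the global picture current without any domain polling another, which is the essence of the peer-processing and publish-and-subscribe patterns described above.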
“We anticipate experimenting with DECAF (Distributed Event Correlation and Analysis Framework), a commercial, distributed correlation application with high-performance messaging and virtual machines,” Sarkesain said. The long-term goal is to create a geographically distributed testbed that accurately depicts a cyber battlespace and enables realistic experiments and analysis of system behaviors under cyberattacks. “We envision developing cyber force-on-force test scenarios, cyber C2/BM experimentation, and cyber-kinetic integration experimentation,” Sarkesain said.
To analyze the performance of the distributed and local correlation, the team has developed a metrics model that measures the time it takes to generate distributed near real-time cyberspace situational awareness pictures during an attack. Specifically, it measures the time it takes to generate local situational pictures at local nodes while simultaneously measuring the time it takes to generate a global situational awareness picture from inputs from the local nodes.
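A minimal timing harness of this kind might look as follows. This is a sketch only, assuming wall-clock timing of stand-in correlation functions; the team’s actual metrics model is not described in the article:

```python
import time

def measure_picture_latency(local_generators, merge):
    """Time local situational-picture generation at each node, then the
    global merge that produces the commander's picture."""
    local_times, pictures = [], []
    for generate in local_generators:
        t0 = time.perf_counter()
        pictures.append(generate())
        local_times.append(time.perf_counter() - t0)
    t0 = time.perf_counter()
    global_picture = merge(pictures)
    global_time = time.perf_counter() - t0
    return local_times, global_time, global_picture

# Stand-in local correlators for four peer domains, and a trivial merge.
generators = [lambda i=i: {"domain": i, "alerts": ["probe"]} for i in range(4)]
local_times, global_time, picture = measure_picture_latency(
    generators, lambda pics: {p["domain"]: p["alerts"] for p in pics})
print(len(local_times), len(picture))
```

In a real experiment the generators would be the domains’ correlation engines and the measurements would be taken while attack traffic is replayed, so that local and global picture latencies can be compared under load.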
Various attack sequences and scenarios can be applied to assess system response to different types of intrusions. These might include a series of attacks at a particular domain or a simultaneous attack at several domains. Standard open-source traffic sensors and packet sniffers are used to identify attacks and attack sequences.
Other project work includes specifying critical properties of the testbed using Temporal Logic of Behaviors (TLB), a construct of mathematical logic that can be used to formally specify, reason about, and prove properties of distributed systems, including their paradoxical behaviors.
“We believe the paradoxical and nondeterministic behaviors of distributed systems have significant implications for distributed cyber C2/BM, information assurance, and distributed cyber situational awareness. We plan to use TLB to specify critical properties for distributed correlation, for which we have defined informal specifications using the DOD architectural framework. We are trying to better understand distributed behaviors and correctness,” Sarkesain said. For example, TLB can be used to explain the paradoxical behaviors of distributed systems caused by the uncertainty of global states. “If we can better understand these distributed system paradoxes as they apply to global states, we may be able to generate more timely and reliable cyber situational awareness and improved cyber C2/BM,” Sarkesain said.
“We are at a disadvantage in trying to design and implement cyber solutions in a distributed environment where we may not fully understand its behaviors, so we are trying to learn how C2/BM applications behave on the battlefield. We also believe information assurance, which, ideally, should be operationally managed through the cyber C2/BM processes, must be designed and implemented to support distributed real-time cyber operations. Current information assurance implementations appear to fall short of this requirement,” Sarkesain said.
Eventually, a cyberspace C2/BM system will be required to manage and conduct full-spectrum operational testing. Distributed real-time operational and systems architectures that have complementary relationships may offer the best path forward.
“We have observed that a globally distributed C2/BM system architecture that must meet high-performance requirements may be best complemented by, or made congruent with, specific distributed operational architectures. We are exploring cyber C2/BM operational-system architecture combinations to better understand their relationships, so as to develop better C2/BM solutions for cyber operations,” Sarkesain said.