Cyber Challenges in National Security Space
Tactics, techniques, and procedures used by the authors of the Stuxnet computer worm that attacked Iran’s nuclear program can be used by advanced persistent threat actors in attacking other targets, including national security space systems. Architects must consider these capabilities as they design and implement modern space systems.
Space-system architectures today incorporate information technology based on commodity hardware and software as well as architectures that mirror traditional information technology network behavior, mechanisms, and protocols. This evolution in space systems has created vulnerabilities that can be exploited by adversaries to compromise national space system information or to control or damage the systems themselves. The increased complexity of information technology used to build the next generation of space systems and the use of hardware and software readily available in nonmilitary settings means that threat actors can research vulnerabilities, develop and test means to exploit those vulnerabilities, and develop complex exploitation scenarios and campaigns to compromise national security space targets.
This threat represents a challenge to national security space architectures in the context of techniques exhibited by the Stuxnet exploitation, a successful “next generation” cyberattack on Iranian nuclear production facilities. The Stuxnet cyberattack used multidisciplinary exploitation techniques to achieve a specific result on a particular target. The engineering and forethought that went into the production of this cyberattack were unprecedented and demonstrate the capability that can be achieved by an advanced persistent threat actor with extensive resources against a specific target, perhaps a national security space target. In fact, other attacks (such as the “DuQu” worm) have since been uncovered that use the same methods (and possibly the same code) as Stuxnet to attack a completely different kind of target, highlighting the threat posed by this new breed of malicious software, also known as malware.
The Stuxnet computer worm used vulnerabilities found in several commercial products, as well as techniques found in rootkits (computer code, usually undetectable by users and administrators, that enables continual privileged access to a system). The Stuxnet worm targeted the supervisory control and data acquisition (SCADA) system used to control centrifuges in the Iranian facility, with the objective of rendering the centrifuges inoperable.
The Stuxnet exploitation is called a “worm” because it contained code to propagate itself, which was necessary because the attackers did not have direct access to the systems they wanted to exploit. In addition to propagating over the network by exploiting vulnerabilities in remote systems, Stuxnet could also propagate through removable media. This is a critical capability in cases where the targeted systems may be separated by an “air gap” (meaning there is no network connectivity between the two systems or between the networks to which the systems are connected) or where network protection measures (e.g., firewalls) block general remote network access. To propagate and elevate its privileges on the host system, Stuxnet used several so-called zero-day vulnerabilities, meaning vulnerabilities that had not been reported and were therefore unpatched, or not fixed. Once it gained a foothold on a system, the worm used privilege escalation exploits to run at an operating system privilege level that would allow it to execute any command on the system, letting it write to system files and persist across reboots.
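The removable-media propagation path described above can be pictured with a small toy model. This is purely an illustrative sketch (all class and host names are hypothetical, and no real exploitation mechanics are modeled): an infected, network-connected machine writes a payload to a removable drive, and the payload crosses the air gap when the drive is later mounted on an isolated machine.

```python
# Toy simulation (hypothetical, illustrative only): how removable media
# can carry a payload across an "air gap" between two systems that share
# no network connectivity.
class Drive:
    def __init__(self):
        self.payload = False  # no malicious payload initially

class Host:
    def __init__(self, name, infected=False):
        self.name = name
        self.infected = infected

    def mount(self, drive):
        # An infected host writes the payload to the drive; a clean host
        # that runs the drive's contents on mount becomes infected.
        if self.infected:
            drive.payload = True
        elif drive.payload:
            self.infected = True

internet_pc = Host("contractor-laptop", infected=True)  # seeded via network exploit
airgapped_pc = Host("plant-workstation")                # no network connectivity

usb = Drive()
internet_pc.mount(usb)   # payload copied to removable media
airgapped_pc.mount(usb)  # payload crosses the air gap
print(airgapped_pc.infected)  # True
```

The point of the sketch is that the air gap only isolates network paths; any shared physical medium re-creates a channel between the enclaves.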
Modern operating systems contain some countermeasures for exploits running at elevated privilege levels, including requiring digital signatures on software that must be checked before that software is loaded for execution. The Stuxnet authors resolved this issue by obtaining (most likely through physical exploitation) digital certificates that contained valid credentials and were used to sign drivers that were part of the worm that could then be loaded onto the operating system. This allowed Stuxnet to in effect hide on infected systems as well as infect other systems through removable media drivers. The worm could also communicate remotely with command-and-control servers so that it could update itself.
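The driver-signing countermeasure and its defeat can be summarized in a short sketch. This is a deliberately simplified model of the trust relationship, not how Windows actually verifies drivers: a real operating system checks asymmetric signatures chained to X.509 certificates, whereas an HMAC with a shared key stands in here purely to illustrate why stolen signing credentials defeat the check (all names and keys are hypothetical).

```python
import hashlib
import hmac

# Minimal sketch of a signed-code loading policy. The HMAC key plays the
# role of a vendor's signing credential; in reality this would be a
# certificate-backed private key.
VENDOR_KEY = b"legitimate-vendor-secret"

def sign(driver: bytes, key: bytes) -> bytes:
    """Produce a signature over the driver image with the given key."""
    return hmac.new(key, driver, hashlib.sha256).digest()

def os_load_driver(driver: bytes, signature: bytes) -> bool:
    """The OS loads any driver whose signature verifies against the trusted key."""
    return hmac.compare_digest(sign(driver, VENDOR_KEY), signature)

good_driver = b"benign driver code"
assert os_load_driver(good_driver, sign(good_driver, VENDOR_KEY))

# An unsigned (or attacker-signed) malicious driver is rejected...
malware = b"rootkit driver"
assert not os_load_driver(malware, sign(malware, b"attacker-key"))

# ...but if the attacker obtains the vendor's signing credentials, the
# same check passes and the rootkit loads as trusted code.
assert os_load_driver(malware, sign(malware, VENDOR_KEY))
```

The last assertion is the Stuxnet lesson in miniature: the check validates the key, not the intent of the code, so the protection is only as strong as the custody of the signing credentials.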
The Stuxnet worm targeted systems running a particular SCADA software package called “STEP 7,” which interfaces with the programmable logic controllers that, in turn, control the centrifuges. Once the worm reached a target system, it rewrote part of the STEP 7 application so that it effectively controlled the instructions the software passed down to the controllers. This involved not only programming the destructive behavior of the controller equipment, but also feeding false status back to the monitoring process, so that the worm achieved its desired outcome while the destructive behavior of the equipment remained undetected.
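The interposition-and-replay pattern described above can be sketched as a toy model. This is an illustration of the general technique, not of Stuxnet's actual code; the class names and speed values are hypothetical stand-ins.

```python
# Toy model (illustrative only) of malicious code sitting between the
# monitoring software and the controller: it issues destructive commands
# downward while replaying recorded "normal" readings upward, so the
# operator display shows nothing wrong.
NORMAL_RPM = 1064        # hypothetical nominal centrifuge speed
DESTRUCTIVE_RPM = 1410   # hypothetical damaging overspeed

class Controller:
    """Stands in for a programmable logic controller driving the equipment."""
    def __init__(self):
        self.rpm = NORMAL_RPM

    def command(self, rpm):
        self.rpm = rpm

    def read_status(self):
        return self.rpm

class CompromisedInterface:
    """Wraps the controller, as the rewritten STEP 7 layer wrapped the PLC."""
    def __init__(self, ctrl):
        self.ctrl = ctrl
        self.recorded = ctrl.read_status()      # capture normal telemetry first

    def command(self, rpm):
        self.ctrl.command(DESTRUCTIVE_RPM)      # ignore the operator's setpoint

    def read_status(self):
        return self.recorded                    # replay benign values upward

plc = Controller()
hmi = CompromisedInterface(plc)
hmi.command(NORMAL_RPM)        # operator commands normal speed
print(plc.read_status())       # 1410 -- equipment actually overspeeding
print(hmi.read_status())       # 1064 -- operator sees the nominal value
```

The design point is that once the software layer between operator and equipment is compromised, both the command path and the feedback path are controlled by the attacker, so the monitoring process cannot be trusted to reveal the attack.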
The Stuxnet worm was able to achieve its presumed objectives because of the threat actors’ ability to effectively exploit its target. Once the target was identified, characteristics of the target were researched. These included the exact hardware and software used by the target, the architecture of the target system and the systems it depended on, and the operational characteristics of the system, including the connectivity and identity of communication links and trust dependencies. It is likely that the high-level objective to disrupt the nuclear program was translated into a number of potential actions with corresponding targets, and that the target attacked by the Stuxnet worm was perhaps the one most readily exploitable.
Threat actors would most likely take a corresponding course of action against national security space systems. A scenario might go like this: The attacker defines mission objectives (disrupt communications, obtain targeted data from an asset, or develop a capability to prevent an identified system from performing its mission at a particular time) and then identifies assets used to accomplish the mission. In this phase, the threat actor gathers information about each system in the targeted architecture, which includes identifying particular hardware and software, communication patterns and protocols, trust relationships between equipment and enclaves, and points of ingress and egress. For the types of advanced persistent threat actors that are a danger to national security space systems, it does not matter whether there is an established vulnerability in any of the systems; if one does not exist, one can be developed.
Once a system is targeted, a means of exploitation must be developed. Typically this is done by looking at the hardware, software, protocols, and procedures that are used by the target in performing its function, and identifying (or developing) vulnerabilities that can be used in an exploitation scenario. In the Stuxnet example, the end target was compromised because the STEP 7 software on the host was compromised; no native security controls in the programmable logic controller hardware needed to be overcome. The attackers determined they needed to compromise the host systems for the STEP 7 software, and also hide the worm so that it would be present long enough to infect the control software. This involved identifying the host systems as Windows systems and then writing rootkit-like drivers specifically for those systems, which needed to be signed. The attackers obtained digital certificates that could be used to do this.
[Figure: Information assurance/cybersecurity results are necessary for cyber operations.]
To deliver the worm to the targeted systems, the attackers used several vectors: remote exploits for connected systems, and removable-media techniques that could “jump” the air gaps the attackers either presumed existed or had learned of through intelligence. Some of the attacks used by the worm were publicly known, while others, namely the zero-day exploits, were presumably developed by the threat actors.
In relation to national security space systems, increasingly all segments of the architecture—ground, space, and user equipment—are using commodity hardware, software, and protocols to accomplish their missions. While this increases reusability and reduces the cost associated with custom builds, it also enables an attacker to obtain the exact equipment used and perform extensive testing and research to identify vulnerabilities that may be used in an exploitation scenario. In the Stuxnet example, it is presumed that the attackers purchased the STEP 7 hardware and software to develop the exact attack sequence needed to produce the desired effect.
Attackers will use multiple methods in the same exploitation scenario to accomplish their objectives; addressing just one avenue of attack is generally not sufficient to thwart an advanced persistent threat actor. The threat is characterized by existing vulnerabilities as well as those that may be developed in the future. The working assumption is that the cyber environment has vulnerabilities; the objective today is to develop methods to detect and react (including by dynamic system adaptation) to the exploitation of those vulnerabilities.
The Stuxnet worm also shows that trust relationships (arrangements in which users or systems in one domain can access network resources located in another domain based solely on the identity of the system requesting access) need to have dynamic components. In particular, while Windows driver signing undoubtedly stops a large number of attacks from being initiated, the use of stolen digital certificates circumvents this protection measure. Because signed drivers are presumed safe, this aspect of the Stuxnet attack went undetected.
Advanced persistent threat actors are able to implement exploits that may be beyond the capabilities of a “script kiddie” or unsophisticated hacker, and need to be taken into account when performing cybersecurity engineering exercises. Exploits can be introduced in any part of the development lifecycle, including maintenance. It may be the case that certain hardware and software is well protected during development, but updates may be less rigorously tested and controlled, thus providing a threat actor with the means to compromise a critical portion of the system. Any point in the supply chain can provide a vector for such a threat, so all points should be considered when assessing the risk and the security countermeasures that need to be implemented.
[Figure: The engineering and operations lifecycles converge to develop cyber “operational art,” and the ability to fight through an attack. Cyber threats and vulnerabilities are evolving at netspeed; the coupling of these lifecycles must become even tighter.]
The iterative process of exploitation development can be involved, and depends on the countermeasures implemented, identified target, and desired outcomes. In developing a scenario to accomplish the exploitation objectives, the vulnerabilities discovered or developed are linked so that the objective is realized. Often during this process alternative paths or additional vulnerabilities need to be identified or developed to ensure that there is sufficient robustness in the attack to circumvent the countermeasures that have been put in place in the target system. It may also be the case that additional targets need to be identified or are discovered during the course of the exploitation development process that will provide a more efficient, less detectable, or higher-yield exploitation. Advanced persistent threat actors possess the resources—both cyber and physical—to obtain the architectural and design information needed to support this process. This implies that system architects will need to develop a solid security design to mitigate these threats, rather than assume that “security by obscurity” will protect their systems.
Once the scenario is developed, it needs only to be implemented. This is not necessarily an instantaneous process; in the Stuxnet example, the worm was seeded in systems close (in network terms) to the target, but the actual infection presumably took place at a later time when the worm had a chance to spread through the systems. The capability of the worm to communicate with command-and-control systems, in addition to the capability to update itself, indicates that the attackers planned for a longer-term exploit if it was needed.
From a national security space systems perspective, exploitation may be done in stages. An initial stage might plant malicious software (malware) and hardware into systems to create a botnet (a collection of infected computers that can be remotely commanded) for use in a mission; at a later time, the exploitation might be completed through additional commands and procedures. In the Stuxnet case, as with many current botnets, the malware was capable of hiding its presence on the system and updating itself through communications with command-and-control servers, thus including a capability to adapt the exploitation to changing attacker or target priorities.
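The staged pattern described above, dormancy, remote update, then execution on command, can be sketched as a minimal state machine. This is a hypothetical illustration of the control flow only; the message strings stand in for responses an implant might fetch from a command-and-control server, and no real communication is modeled.

```python
# Toy sketch (hypothetical) of staged exploitation: an implant lies
# dormant on an infected host, periodically checks a command-and-control
# source, and acts only when the final-stage order arrives.
class Implant:
    def __init__(self):
        self.stage = "dormant"

    def poll(self, c2_message):
        # c2_message stands in for a response fetched from a C2 server.
        if c2_message == "update":
            self.stage = "updated"      # swap in new payload or targeting
        elif c2_message == "execute":
            self.stage = "executing"    # complete the exploitation
        return self.stage

bot = Implant()
print(bot.poll(None))        # dormant -- hides and waits
print(bot.poll("update"))    # updated -- adapts to new priorities
print(bot.poll("execute"))   # executing -- final stage triggered on command
```

The separation of stages is what makes such campaigns hard to counter: at the moment the implant is planted, the final destructive behavior may not yet exist anywhere on the target system.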
As evidenced by the Stuxnet worm and other similar malware, advanced persistent threat actors can develop sophisticated and effective attacks that are based on their ability to obtain information about the target system, use and develop vulnerabilities in the system to craft an exploitation scenario, and then launch an attack so that their objective can be achieved. A space system has many similarities to the SCADA system that was the ultimate target of the Stuxnet attack, and the threat vectors that were developed and exploited in the Stuxnet case are also applicable to space systems. Recognition of the capabilities of advanced persistent threat actors must guide space system architects as they design and implement current and future space systems.
Back to the Spring 2012 Table of Contents