While much attention is paid to detecting and remedying flaws or vulnerabilities in software, the way a system is designed can also create large opportunities for attackers. System designers focus primarily on making a program adept at its intended task: how the design can best support the intended features and behaviors, and how those features will be implemented within the design.
The unexpected, or emergent, behaviors these features can exhibit are rarely considered at design time. Attackers, however, have discovered that these design structures and implementation behaviors can be repurposed for their own malicious ends.
As a result, attackers often find they can induce emergent behaviors using what’s already built into a system, giving them a way to exploit flaws buried several layers down. In other words, systems are unknowingly being designed in ways that support adversarial programmability: combinations of features and unprotected abstractions amount to embedded exploit execution engines, colloquially known as “weird machines.”
“When it comes to exploits, the common thinking is that there is a flaw in the program and then there is a crafted input that can trigger the flaw, resulting in the program doing something it shouldn’t, like crashing or granting privileges to an attacker,” said Sergey Bratus, a program manager in DARPA’s Information Innovation Office (I2O).
“Today, the reality is somewhat different as those existing flaws aren’t immediately exposed, so an attacker needs help getting to them. This help is unwittingly provided by the system’s own features and design. Attackers are able to make use of these features and force them to operate in ways they were never intended to.”
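The dynamic Bratus describes can be sketched with a toy example (our illustration, not drawn from the program materials). Here a benign templating feature, Python’s `str.format`, is intended only to display a user’s name; but because format strings can traverse object attributes, attacker-chosen input repurposes the same feature to walk the object graph and leak data the designer never meant to expose. The `Config`, `User`, and `render_greeting` names are hypothetical:

```python
# Toy illustration of a "weird machine" ingredient: a designed-in feature
# (attribute access inside format strings) repurposed by crafted input.

SECRET_KEY = "hunter2"  # hypothetical sensitive value

class Config:
    def __init__(self):
        self.secret = SECRET_KEY

class User:
    def __init__(self, name, config):
        self.name = name
        self.config = config

def render_greeting(template: str, user) -> str:
    # Intended use: templates like "Hello, {0.name}!"
    # The designer sees only the greeting feature, not what else
    # format-string attribute traversal makes reachable.
    return template.format(user)

cfg = Config()
alice = User("alice", cfg)

# Intended behavior:
print(render_greeting("Hello, {0.name}!", alice))   # Hello, alice!

# Emergent behavior: the same feature, driven by attacker-supplied
# input, reaches data the design never meant to expose.
leaked = render_greeting("{0.config.secret}", alice)
print(leaked)                                       # hunter2
```

No memory-corruption flaw is triggered here; the “exploit” is built entirely from behavior the system was designed to have, which is exactly why patching a single input pattern would not remove the underlying capability.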
This challenge becomes increasingly problematic across classes of systems that rely on similar features. An exploit discovered on one system can hint strongly at how to find similar exploits on other systems that were developed independently by different vendors but make use of similar mechanisms. The result is persistent exploitable patterns that cut across a whole host of programs.
The Hardening Development Toolchains Against Emergent Execution Engines (HARDEN) program seeks to give developers a way to understand emergent behaviors and, in turn, the opportunity to choose abstractions and implementations that limit an attacker’s ability to reuse them for malicious purposes, thereby stopping the unintentional creation of weird machines.
HARDEN will explore novel theories and approaches and develop practical tools to anticipate, isolate, and mitigate emergent behaviors in computing systems throughout the entire software development lifecycle (SDLC).
Notably, the program aims to create mitigation approaches that go well beyond patching. At present, patches tend to address only a particular exploit and do not disrupt the underlying exploit execution engine residing at the design level.
HARDEN will also focus on validating the generated approaches by applying broad theories and generic tools to concrete technological use cases of general-purpose integrated software systems. Potential evaluation systems include the Unified Extensible Firmware Interface (UEFI) architecture and boot-time chain of trust, as well as integrated software systems from the Air Force and Navy domains, such as pilots’ tablets.
“There are many ways to theorize about addressing these challenges, but the test of the theory is how it will apply to an actual integrated system that we base trust on, or want to base trust on. We want to ensure we’re creating models that will be of actual use to critical defense systems,” noted Bratus.