This blog post samples a growing body of research which leverages formal methods techniques to solve computer architecture challenges. While certainly not exhaustive, it is meant to serve as a starting point for further reading and brainstorming.
Constructing Formal Models
Leveraging formal methods to solve computer architecture problems has two key prerequisites:
- System formalization: To evaluate a particular hardware architecture or microarchitecture, one must encode it in some format that a computer can understand. Such a formalization might be derived from a hardware design itself, or it may be more abstract, omitting many low-level implementation details.
- Property formalization: One must also formalize a property to check on the design.
This blog post focuses on a couple of popular property classes in modern hardware verification and validation research, which aim to address the following challenges:
- Concurrency properties: The parallel and heterogeneous nature of modern hardware poses challenges to ensuring correctness for concurrent programs [23, 24, 25, 26, 27], in which multiple threads concurrently execute while modifying a shared memory.
- Security properties: A slew of recent microarchitectural attacks—many of which exploit hardware speculation [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 16, 17, 18, 19, 20, 21], though others (just as severe) do not—demonstrate that hardware vulnerabilities capable of completely compromising confidentiality can go undetected for years.
Given the above challenges, researchers have produced both hardware system and property formalizations to support reasoning about the implications of microarchitecture on program correctness and security. This section summarizes a handful of such formalization efforts, which underpin many of the formal verification approaches discussed later in this blog post.
Formal Models for Hardware Concurrency
To facilitate formal reasoning about concurrent program correctness, research over the past ~15 years has produced formally specified memory consistency models (or MCMs). Simply put, MCMs specify the values that can legally be returned by shared memory loads in a concurrent program via constraints on the legal ordering and visibility of shared memory accesses. Notably, a sound high-level programming language MCM is not sufficient to ensure correct execution of a parallel program. In particular, a program is only guaranteed to run correctly if a compiler correctly translates language-level MCM primitives to assembly instructions, and if the target microarchitecture indeed implements the MCM specified by its ISA. Thus, MCM formalization efforts span the hardware-software stack. At the ISA level, the following MCMs have all been formalized: x86-TSO, IBM Power, NVIDIA PTX, RISC-V RVWMO, RISC-V RVTSO, ARMv7, and ARMv8 [30, 31, 32, 33, 28, 29, 34]. At the microarchitecture level, the Check tools [51, 52, 53, 54, 55, 57] provide a first-order logic DSL for specifying hardware designs, including their MCM implementations, axiomatically.
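To make the idea of an MCM concrete, the following self-contained Python sketch enumerates the outcomes of the classic store-buffering (SB) litmus test: under sequential consistency (SC), every interleaving preserves each thread's program order, while a TSO-style store buffer additionally permits store-to-load reordering within a thread. The encoding and helper names here are invented for illustration, not drawn from any of the cited tools.

```python
# Store-buffering (SB) litmus test:
#   T0: x = 1; r1 = y        T1: y = 1; r2 = x
# Under sequential consistency (SC) the outcome r1 == r2 == 0 is forbidden;
# under TSO, store->load reordering (via store buffers) makes it observable.

T0 = [("st", "x", 1), ("ld", "y", "r1")]
T1 = [("st", "y", 1), ("ld", "x", "r2")]

def interleavings(a, b):
    """All merges of sequences a and b preserving each one's internal order."""
    if not a: yield list(b); return
    if not b: yield list(a); return
    for rest in interleavings(a[1:], b): yield [a[0]] + rest
    for rest in interleavings(a, b[1:]): yield [b[0]] + rest

def run(schedule):
    """Execute one global schedule of events over a shared memory."""
    mem, regs = {"x": 0, "y": 0}, {}
    for op, loc, arg in schedule:
        if op == "st": mem[loc] = arg
        else: regs[arg] = mem[loc]
    return (regs["r1"], regs["r2"])

def outcomes(thread_orders0, thread_orders1):
    """All (r1, r2) results over the permitted per-thread event orders."""
    out = set()
    for t0 in thread_orders0:
        for t1 in thread_orders1:
            for s in interleavings(t0, t1):
                out.add(run(s))
    return out

sc_outcomes  = outcomes([T0], [T1])                      # program order only
tso_outcomes = outcomes([T0, T0[::-1]], [T1, T1[::-1]])  # + st->ld reordering

print((0, 0) in sc_outcomes)   # False: forbidden under SC
print((0, 0) in tso_outcomes)  # True: observable under TSO
```

The Check tools and axiomatic ISA-level models express these same ordering constraints declaratively rather than by enumeration, which scales to far richer microarchitectural event graphs.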
Formal Models for Hardware Security
Due to the increasing severity of microarchitectural attacks, researchers have put forth proposals for formally articulating the implications of microarchitecture on software security [38, 39, 40, 41, 42, 43, 44, 46]. These proposals can be coarsely partitioned along the following dimensions:
- Microarchitectural scope: Some proposals require hardware enhancements to explicitly track/enforce contract-level security primitives [35, 36], like ARM’s Data Independent Timing (DIT) extensions, or restrict their scope with respect to modern hardware features like out-of-order [38, 39, 43] and multi-core execution [38, 39, 40, 41, 42, 43, 44, 46].
- Leakage scope: Others focus on formalizing forms of transient leakage [38, 39, 41, 42, 43, 44, 46] or restrict the non-transient leakage they can capture.
- Contract fidelity: Existing contract proposals do not provide any mechanism by which a microarchitecture, written in RTL (such as Verilog or SystemVerilog), can be validated against a given contract.
Analyzing Formal Models
Hardware system and property formalizations can be designed to support a variety of formal analyses, such as SAT/SMT solving, model checking, symbolic execution, program synthesis, and interactive theorem proving, each of which is discussed in this blog post.
SAT/SMT Solvers
Given a Boolean logic formula, a SAT solver decides whether the formula is satisfiable, reporting a satisfying assignment if one exists. An SMT solver generalizes this idea to decidable first-order theories, such as equality, linear arithmetic, and bit vectors. Notable applications of SAT/SMT solving in computer architecture include:
- Litmus test synthesis—synthesizing complete suites of litmus test programs (up to a bound on program size) that encode all unique ordering behaviors imposed by a user-supplied formal MCM specification [48, 50]. The resulting programs improve the coverage of litmus test-based hardware MCM validation approaches. Beyond MCMs, SAT/SMT-based synthesis of test program suites from formal models has been leveraged to validate hardware virtual memory implementations and crash consistency models. There is reason to believe such an approach can be applied in the hardware security context as well.
- Hardware MCM verification (w.r.t. formal hardware models)—proving that a microarchitecture’s MCM implementation—provided as a formal axiomatic model—is correct with respect to a suite of litmus test programs [51, 52, 53, 54, 55], or even over the space of all programs.
- Hardware security verification—evaluating a microarchitecture’s susceptibility to formally specified classes of security exploits and synthesizing proof-of-concept exploit programs when applicable, again with the help of formal axiomatic models of hardware.
- Constant-time verification of hardware—verifying whether or not a particular hardware circuit described in Verilog is capable of executing in constant time with respect to a specification of design sources/sinks and a set of usage assumptions.
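For intuition on what a SAT solver does under the hood, here is a minimal DPLL-style solver in Python (simplification, unit propagation, and case splitting), operating on CNF formulas in DIMACS-style integer notation. This is a pedagogical sketch; production solvers add clause learning, watched literals, and branching heuristics.

```python
def dpll(clauses, assignment=None):
    """Decide satisfiability of a CNF formula (a list of clauses; each clause
    is a list of nonzero ints, where a negative int is a negated variable).
    Returns a satisfying assignment {var: bool} or None if unsatisfiable."""
    if assignment is None: assignment = {}
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        remaining = [l for l in clause if abs(l) not in assignment]
        if not remaining:
            return None  # clause falsified: conflict
        simplified.append(remaining)
    if not simplified:
        return dict(assignment)  # every clause satisfied
    # Unit propagation: a one-literal clause forces its variable's value.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(simplified, {**assignment, abs(l): l > 0})
    # Case split on the first unassigned variable.
    v = abs(simplified[0][0])
    return (dpll(simplified, {**assignment, v: True})
            or dpll(simplified, {**assignment, v: False}))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # a satisfying assignment
print(dpll([[1], [-1]]))                  # None: unsatisfiable
```

SMT solvers layer theory reasoning (e.g., bit-vector arithmetic) on top of exactly this kind of Boolean search, which is what makes the litmus test and exploit synthesis applications above tractable.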
Model Checking
Model checking refers to the problem of determining whether some state machine M, such as a microprocessor, satisfies a property P. State-of-the-art model checking techniques are SAT- and SMT-based, leveraging repeated calls to satisfiability solvers to manipulate sets of states represented as formulas. Model checking has a rich history in the literature. Some recent noteworthy applications of model checking in computer architecture include:
- Hardware MCM verification (w.r.t. RTL)—motivated by limitations of prior work that requires axiomatic models of hardware to conduct MCM verification, researchers have pursued both top-down and bottom-up approaches to link said models with RTL. The latter approach will appear at MICRO 2021.
- Single-action correctness—ISA-Formal checks RTL correctness by leveraging a model checker to compare a processor’s architectural state immediately before and after the execution of an instruction against the machine-readable definition of the Arm® Architecture.
- Functional consistency checking—hard-to-detect bugs may require particular sequences of instructions to trigger them. Thus, recent work [66, 67] leverages bounded model checking (BMC) to check the functional consistency of an input microarchitecture—the property that the same sequence of instructions, given the same input, produces the same output. A similar approach has been leveraged to conduct hardware security verification.
- Transient execution vulnerability detection—recent work leverages a speculative operational semantics and adversary model to translate an input program into a transition system on which a secure speculation property can be checked, specifically using an SMT-based model checker called UCLID5.
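To illustrate the bounded flavor of these checks, the sketch below unrolls a toy transition system up to a bound k and searches for an input trace that violates a property. Real BMC engines encode the unrolled transition relation as a single SAT/SMT formula rather than enumerating input sequences; the state machine and property here are invented for illustration.

```python
from itertools import product

def step(state, inp):
    """Toy transition function: a mod-8 counter that increments when
    inp is 1 and resets to 0 otherwise."""
    return (state + 1) % 8 if inp else 0

def bmc(init, step_fn, bad, k):
    """Bounded reachability: return an input trace that drives the system
    from init into a bad state within k steps, or None if no such trace
    exists up to the bound. (Explicit enumeration stands in for the
    SAT/SMT encoding a real bounded model checker would use.)"""
    for n in range(1, k + 1):
        for inputs in product([0, 1], repeat=n):
            state = init
            for inp in inputs:
                state = step_fn(state, inp)
                if bad(state):
                    return inputs  # counterexample trace
    return None

print(bmc(0, step, lambda s: s == 5, k=4))  # None: state 5 needs 5 steps
print(bmc(0, step, lambda s: s == 5, k=6))  # (1, 1, 1, 1, 1): reached
```

As in the functional consistency work above, a returned counterexample trace is directly actionable: it is a concrete input sequence a designer can replay against the RTL.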
Symbolic Execution 
A symbolic execution engine runs a program on symbolic input values. For each explored control flow path, it maintains a path condition—a quantifier-free formula over the symbolic inputs that encodes all branch decisions taken so far. Path conditions can be checked by an SMT solver to determine path feasibility. Some recent computer architecture work that leverages symbolic execution includes:
- Identifying exploitable memory accesses—CacheD leverages symbolic execution in combination with taint analysis and SMT-assisted constraint solving to identify program memory accesses that are susceptible to leaking functions of program secrets through cache timing side-channels.
- Exploit program synthesis—Coppelia conducts “backwards” symbolic execution (based on KLEE) of C++ representations of RTL (generated with Verilator) to synthesize programs capable of violating security-critical invariants.
- Detecting Spectre attacks—Spectector and Pitchfork effectively simulate speculative execution in software in order to detect Spectre v1, v1.1, and v4 vulnerabilities with symbolic execution engines.
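The core mechanics (forking on branches and accumulating path conditions) can be sketched in a few lines of Python. Here, brute-force search over a small input range stands in for the SMT solver, and the branch predicates form an invented toy program.

```python
from itertools import product

def feasible(conds, names, lo=-8, hi=8):
    """Is there an assignment to the symbolic inputs satisfying all conds?
    (Brute force over a small range stands in for an SMT query.)"""
    for vals in product(range(lo, hi + 1), repeat=len(names)):
        env = dict(zip(names, vals))
        if all(c(env) for c in conds):
            return env
    return None

def explore(branches, names, conds=(), taken=()):
    """Depth-first exploration of all 2^n control-flow paths, keeping only
    those whose path condition is satisfiable."""
    if len(taken) == len(branches):
        model = feasible(list(conds), names)
        return [(taken, model)] if model is not None else []
    pred = branches[len(taken)]
    neg = lambda env, p=pred: not p(env)
    return (explore(branches, names, conds + (pred,), taken + (True,)) +
            explore(branches, names, conds + (neg,),  taken + (False,)))

# Toy straight-line program over symbolic input x with two branches:
#   if x > 3: ...
#   if x < 2: ...   (taking both true-branches is infeasible)
branches = [lambda env: env["x"] > 3, lambda env: env["x"] < 2]
paths = explore(branches, ["x"])
for taken, model in paths:
    print(taken, model)  # 3 feasible paths, each with a witness input
```

Engines like KLEE apply the same discipline to real binaries, which is why pruning infeasible paths early (as the path condition check does here) is central to their scalability.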
Program Synthesis
Program synthesis is the task of automatically finding programs in an underlying programming language that satisfy user intent, expressed in some form of constraints. Recent computer architecture work leverages program synthesis-style techniques as follows:
- Synthesizing axiomatic MCM specifications—MemSynth synthesizes axiomatic MCM specifications, given a suite of litmus test programs and a framework sketch that defines a class of MCMs.
- Secure hardware synthesis—VeriSketch leverages program sketching/synthesis in combination with timing-sensitive hardware information flow analysis to produce secure-by-construction hardware designs.
- Synthesizing cache coherence protocols—ProtoGen automatically generates concurrent multicore coherence protocols from atomic protocol specifications.
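The enumerative core of such tools can be sketched in miniature: search a tiny expression grammar for a term consistent with input/output examples of the user's intent. The grammar and search budget below are invented for illustration; real systems like MemSynth use solver-aided search over much richer specification spaces.

```python
from itertools import product

def grow(exprs):
    """One round of bottom-up enumeration: combine existing (source, fn)
    pairs with the grammar's binary operators +, *, and -."""
    new = []
    for (s1, f1), (s2, f2) in product(exprs, repeat=2):
        new.append((f"({s1} + {s2})", lambda x, f1=f1, f2=f2: f1(x) + f2(x)))
        new.append((f"({s1} * {s2})", lambda x, f1=f1, f2=f2: f1(x) * f2(x)))
        new.append((f"({s1} - {s2})", lambda x, f1=f1, f2=f2: f1(x) - f2(x)))
    return exprs + new

def synthesize(examples, max_rounds=3):
    """Return the source text of the first enumerated expression consistent
    with all input/output examples, or None within the search budget."""
    exprs = [("x", lambda x: x), ("1", lambda x: 1), ("2", lambda x: 2)]
    for _ in range(max_rounds):
        for src, fn in exprs:
            if all(fn(i) == o for i, o in examples):
                return src
        exprs = grow(exprs)
    return None

# Synthesize an expression behaving like 2*x + 1 from three examples.
print(synthesize([(0, 1), (1, 3), (2, 5)]))
```

Enumeration like this explodes combinatorially, which is precisely why sketching (constraining the space with a partial program, as in MemSynth and VeriSketch) and solver-backed pruning are so effective.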
Interactive Theorem Provers
Since many interesting proofs cannot be fully automated, interactive theorem provers provide a means for a human user and a computer to collaborate on the development of formal proofs. A notable example of interactive theorem proving for computer architecture is:
- Hardware verification—Kami is a Coq library that supports implementing, specifying, verifying, and compiling hardware designs written in Bluespec. The resulting proven-correct Bluespec can be used to generate proven-correct hardware, subject to the correctness of the Bluespec compiler.
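For a taste of the interactive style, here is a small machine-checked induction proof in Lean 4. (Kami itself is built on Coq; this Lean snippet is an illustrative analogue, not drawn from the Kami library.)

```lean
-- Prove 0 + n = n by induction on n. The base case holds definitionally
-- (`rfl`), and the inductive step rewrites with the hypothesis `ih`.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Proofs about hardware follow the same pattern at much larger scale: the human chooses the induction (e.g., over cycles or pipeline stages) and the prover mechanically checks every step.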
About the Author: Caroline Trippel is an Assistant Professor of Computer Science and Electrical Engineering at Stanford University. Her research on architectural concurrency and security verification has been recognized with IEEE Top Picks distinctions, the 2020 ACM SIGARCH/IEEE CS TCCA Outstanding Dissertation Award, and the 2020 CGS/ProQuest® Distinguished Dissertation Award in Mathematics, Physical Sciences, & Engineering.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.