Scaling symbolic evaluation for automated verification of systems code with Serval

  • Luke Nelson
  • James Bornholt
  • Ronghui Gu
  • Emina Torlak
  • Xi Wang

27th ACM Symposium on Operating Systems Principles (SOSP)

Published by ACM

Best Paper Award, Distinguished Artifact Award


This paper presents Serval, a framework for developing automated verifiers for systems software. Serval provides an extensible infrastructure for creating verifiers by lifting interpreters under symbolic evaluation, and a systematic approach to identifying and repairing verification performance bottlenecks using symbolic profiling and optimizations.
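To give a flavor of what "lifting an interpreter under symbolic evaluation" means, here is a minimal Python sketch (not Serval's actual code, which is built in Rosette; the toy ISA, instruction names, and `Sym` class are all hypothetical). The key idea is that a single interpreter, written over abstract values, runs both concretely on integers and symbolically on expression trees, turning the interpreter into a verifier input for free.

```python
class Sym:
    """A tiny symbolic value: builds an expression tree instead of computing.

    Hypothetical illustration only; a real framework would emit SMT terms.
    """
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __add__(self, other):
        return Sym('+', self, other)

    def __radd__(self, other):
        return Sym('+', other, self)

    def __repr__(self):
        if self.op == 'var':
            return self.args[0]
        return f"({self.args[0]!r} {self.op} {self.args[1]!r})"


def interpret(program, regs):
    """Interpreter for a toy two-instruction ISA (hypothetical)."""
    for insn, dst, src in program:
        if insn == 'add':      # dst := dst + src (register)
            regs[dst] = regs[dst] + regs[src]
        elif insn == 'addi':   # dst := dst + immediate
            regs[dst] = regs[dst] + src
    return regs


prog = [('addi', 'r0', 1), ('add', 'r0', 'r1')]

# Concrete execution: ordinary interpretation over integers.
concrete = interpret(prog, {'r0': 4, 'r1': 5})
print(concrete['r0'])          # 10

# Symbolic execution: the *unmodified* interpreter now yields a formula
# describing the final state for all inputs, which a solver could check.
symbolic = interpret(prog, {'r0': Sym('var', 'x'), 'r1': Sym('var', 'y')})
print(symbolic['r0'])          # ((x + 1) + y)
```

In Serval, the same lifting is applied to full RISC-V, x86-32, LLVM, and BPF interpreters, and the resulting formulas are discharged by an SMT solver rather than printed.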

Using Serval, we build automated verifiers for the RISC-V, x86-32, LLVM, and BPF instruction sets. We report on our experience retrofitting CertiKOS and Komodo, two systems previously verified using Coq and Dafny, respectively, for automated verification using Serval, and discuss the trade-offs of the different verification methodologies. In addition, we apply Serval to the Keystone security monitor and the BPF compilers in the Linux kernel, and uncover 18 new bugs through verification, all confirmed and fixed by developers.