Deriving specialized program analyses for certifying component-client conformance

  • G. Ramalingam,
  • Alex Warshavsky,
  • John Field,
  • Deepak Goyal,
  • Mooly Sagiv

PLDI '02: Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation

Published by ACM


We are concerned with the problem of statically certifying (verifying) whether the client of a software component conforms to the component’s constraints for correct usage. We show how conformance certification can be efficiently carried out in a staged fashion for certain classes of first-order safety (FOS) specifications, which can express relationship requirements among potentially unbounded collections of runtime objects. In the first stage of the certification process, we systematically derive an abstraction that is used to model the component state during analysis of arbitrary clients. In general, the derived abstraction will utilize first-order predicates, rather than the propositions often used by model checkers. In the second stage, the generated abstraction is incorporated into a static analysis engine to produce a certifier. In the final stage, the resulting certifier is applied to a client to conservatively determine whether the client violates the component’s constraints. Unlike verification approaches that analyze a specification and client code together, our technique can take advantage of computationally-intensive symbolic techniques during the abstraction generation phase, without affecting the performance of client analysis. Using as a running example the Concurrent Modification Problem (CMP), which arises when certain classes defined by the Java Collections Framework are misused, we describe several different classes of certifiers with varying time/space/precision tradeoffs. Of particular note are precise, polynomial-time, flow- and context-sensitive certifiers for certain classes of FOS specifications and client programs. Finally, we evaluate a prototype implementation of a certifier for CMP on a variety of test programs. The results of the evaluation show that our approach, though conservative, yields very few “false alarms,” with acceptable performance.
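As a concrete illustration of the running example (not taken from the paper itself), the minimal Java sketch below shows the kind of client behavior that a CMP specification rules out: the client structurally modifies an `ArrayList` while one of its iterators is in use, violating the Collections Framework's usage constraint and triggering a `ConcurrentModificationException` at run time. The class and variable names are hypothetical.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical client that violates the Collections Framework constraint:
// a collection must not be structurally modified while an iterator over it
// is still in use (the Concurrent Modification Problem, CMP).
public class CmpExample {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("alice");
        names.add("bob");
        names.add("carol");

        Iterator<String> it = names.iterator();
        while (it.hasNext()) {
            // The call to it.next() that follows the add below throws
            // ConcurrentModificationException, because the add invalidates 'it'.
            String name = it.next();
            if (name.equals("alice")) {
                names.add("dave");   // structural modification during iteration
            }
        }

        // A conforming client would instead call it.remove(), or record the
        // additions and apply them after the iteration completes.
    }
}
```

A certifier for the CMP specification, as described in the abstract, would conservatively flag such a client as potentially violating the component's constraints, while accepting clients that only modify the collection through the iterator or after iteration ends.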