Dynamic software analyses are powerful mechanisms for finding software errors. Unfortunately, their high performance overheads stymie their adoption. This talk discusses techniques for accelerating such tools in an effort to make them available to beta testers and end users.
One method of reducing these slowdowns, “on-demand analysis,” uses simple hardware features to notify an analysis tool when an interesting event has occurred. By disabling the tool during uninteresting periods, it is possible to significantly reduce the tool’s overall slowdown.
Another method is to sample the analyses, meaning each user tests a small portion of the program on each execution. While an individual run may miss errors, a large population collectively observes more of the program’s state space than any single tester ever would, and these users can report the potential software errors they find to developers.