Comparison requires valid measurement: Rethinking attack success rate comparisons in AI red teaming
- Alex Chouldechova,
- A. Feder Cooper,
- Solon Barocas,
- Abhinav Palia,
- Dan Vann,
- Hanna Wallach
In this position paper, we argue that conclusions drawn about relative system safety or attack method efficacy via AI red teaming are often not supported by the evidence that attack success rate (ASR) comparisons provide. We show, through conceptual, theoretical, and empirical contributions, that many such conclusions rest on apples-to-oranges comparisons or low-validity measurements. Our arguments are grounded in a simple question: (When) can attack success rates be meaningfully compared? To answer this question, we draw on ideas from social science measurement theory and inferential statistics, which, taken together, provide a conceptual grounding for understanding when numerical values obtained by quantifying system attributes can be meaningfully compared. Through this lens, we articulate sufficient conditions under which ASRs can be meaningfully compared. Using jailbreaking as a running example, we provide examples and extensive discussion of apples-to-oranges ASR comparisons and measurement validity challenges.
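To illustrate the inferential-statistics angle, here is a minimal, hypothetical sketch (not from the paper) of why raw ASR point estimates alone can mislead: an ASR is a binomial proportion, and two estimated ASRs whose confidence intervals overlap offer only weak evidence that one attack truly outperforms the other. The sample sizes and success counts below are invented for illustration.

```python
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion (e.g., an ASR)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (center - half, center + half)

# Hypothetical red-teaming results: attack A vs. attack B against the same system.
asr_a = wilson_interval(18, 100)   # attack A: 18 of 100 attempts succeeded
asr_b = wilson_interval(25, 100)   # attack B: 25 of 100 attempts succeeded

# The point estimates differ (0.18 vs. 0.25), but the intervals overlap,
# so the data alone do not support claiming B is the stronger attack.
intervals_overlap = asr_a[1] >= asr_b[0]
```

This sketch addresses only sampling uncertainty; the paper's broader point is that even statistically significant ASR differences can be meaningless if the underlying measurements lack validity or the comparison is apples-to-oranges.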