Learning to Play Stackelberg Security Games

  • Avrim Blum,
  • Nika Haghtalab,
  • Ariel D. Procaccia

Chapter 25 in Improving Homeland Security Decisions

2017


As discussed in previous chapters, algorithmic research on Stackelberg Security Games has had a striking real-world impact. But an algorithm that computes an optimal strategy for the defender can only be as good as the game it receives as input, and if that game is an inaccurate model of reality then the output of the algorithm will likewise be flawed. Consequently, researchers have introduced Bayesian frameworks that capture uncertainty using a probability distribution over possible games. Others have assumed that the unknown parameters of the game lie within known intervals. These approaches are discussed in Chapter 17 of this book [17].

In this chapter, we present an alternative, learning-theoretic approach for dealing with uncertainty in Stackelberg security games. In order to paint a cohesive picture, we focus on one type of uncertainty: unknown attacker utilities. Learning will take place in a repeated Stackelberg security game, where the defender gathers information about the attacker purely by observing the attacker’s responses to mixed strategies played by the defender.
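To make the setting concrete, the loop below is a minimal sketch of such a repeated game, not the chapter's algorithm: the number of targets, the linear utility form `(1 - c[i]) * reward[i] - c[i] * penalty[i]`, and the random choice of mixed strategies are all illustrative assumptions. It shows the one channel of information the defender has: each observed best response to a coverage vector is a constraint that any hypothesis about the attacker's utilities must satisfy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: n targets. Under coverage vector c, the attacker's
# expected utility for attacking target i is assumed to be
#   (1 - c[i]) * reward[i] - c[i] * penalty[i].
# The reward/penalty vectors are hidden from the defender.
n = 4
reward = rng.uniform(1, 5, n)    # unknown attacker rewards
penalty = rng.uniform(1, 5, n)   # unknown attacker penalties

def best_response(c):
    """Attacker attacks the target maximizing expected utility under c."""
    return int(np.argmax((1 - c) * reward - c * penalty))

# Repeated game: the defender observes only the attacker's best responses
# to the mixed strategies it plays. Each observation "target j was attacked
# under coverage c" constrains the unknown (reward, penalty) parameters.
observations = []
for _ in range(50):
    c = rng.dirichlet(np.ones(n))   # an illustrative random mixed strategy
    observations.append((c, best_response(c)))

def consistent(r_hat, p_hat):
    """Check whether hypothesized utilities reproduce every observed response."""
    for c, j in observations:
        if np.argmax((1 - c) * r_hat - c * p_hat) != j:
            return False
    return True

# The true parameters are, by construction, consistent with all observations;
# a learner's job is to narrow the consistent set by choosing queries well.
assert consistent(reward, penalty)
```

In this sketch the defender's queries are random; the chapter's point is that strategies can be chosen deliberately so that few observations suffice to pin down an (approximately) optimal strategy against the unknown attacker.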