(Locally) Differentially Private Combinatorial Semi-Bandits

  • Xiaoyu Chen ,
  • Kai Zheng ,
  • Zixin Zhou ,
  • Yunchang Yang ,
  • Liwei Wang

Proceedings of the 37th International Conference on Machine Learning (ICML)


In this paper, we study Combinatorial Semi-Bandits (CSB), an extension of the classic Multi-Armed Bandits (MAB) problem, under Differential Privacy (DP) and the stronger Local Differential Privacy (LDP) setting. Since the server receives more information from users in CSB than in MAB, privacy preservation usually incurs an additional dependence on the dimension of the data, a notorious side-effect of private learning. However, for CSB under two common smoothness assumptions (Kveton et al., 2015; Chen et al., 2016), we show that this side-effect can be removed. In detail, for $B_\infty$-bounded smooth CSB under either $\epsilon$-LDP or $\epsilon$-DP, we prove the optimal regret bound is $\Theta(m B_\infty^2 \ln T / (\Delta \epsilon^2))$ or $\tilde{\Theta}(m B_\infty^2 \ln T / (\Delta \epsilon))$ respectively, where $T$ is the time horizon, $\Delta$ is the reward gap, and $m$ is the number of base arms, by proposing novel algorithms and matching lower bounds. For $B_1$-bounded smooth CSB under $\epsilon$-DP, we also prove the optimal regret bound is $\tilde{\Theta}(m K B_1^2 \ln T / (\Delta \epsilon))$, with both an upper bound and a lower bound, where $K$ is the maximum number of feedbacks received in each round. All of the above results nearly match the corresponding non-private optimal rates, implying that there is no additional price for (locally) differentially private CSB in these common settings.
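To make the LDP setting concrete, here is a minimal sketch (not the paper's algorithm) of one standard local randomizer for semi-bandit feedback: each user adds Laplace noise calibrated to the $L_1$ sensitivity of their length-$K$ reward vector before sending it, which yields $\epsilon$-LDP, and the server can still form unbiased mean estimates from the noisy reports. The function and parameter names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_feedback(rewards, eps, k_max):
    """One plausible eps-LDP local randomizer (illustrative, not the
    paper's mechanism): add i.i.d. Laplace noise to each observed
    base-arm reward. With each reward in [0, 1] and at most k_max
    observed arms, the L1 sensitivity of the feedback vector is k_max,
    so Laplace noise of scale k_max / eps suffices for eps-LDP."""
    rewards = np.asarray(rewards, dtype=float)
    scale = k_max / eps
    return rewards + rng.laplace(0.0, scale, size=rewards.shape)

# Server side: averaging noisy reports recovers the true base-arm means,
# since the added Laplace noise is zero-mean (variance grows as 1/eps^2,
# matching the 1/eps^2 blow-up in the LDP regret bound).
eps, k_max, n_users = 1.0, 3, 20000
true_means = np.array([0.2, 0.5, 0.8])
noisy = np.stack([
    privatize_feedback(rng.binomial(1, true_means), eps, k_max)
    for _ in range(n_users)
])
est = noisy.mean(axis=0)  # close to true_means for large n_users
```

Note the design trade-off this sketch exposes: the noise scale grows with $K$, which is why naive privatization pays a dimension-dependent price; the paper's point is that, under the smoothness assumptions above, this extra dependence can be avoided in the regret.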