Probabilistic inference is the task of deducing the probability of various outcomes given a set of observations. It is a fundamental task in both statistics and machine learning.
Robust probabilistic inference extends probabilistic inference to settings where some of the observations are adversarially corrupted. Such a model is relevant, for example, to spam detection, where spammers adversarially try to fool spam detectors, or to failure detection and correction, where failures are modeled as worst-case. The framework can also be used to model selection among several alternative models that may have generated the data.
Technically, we model robust probabilistic inference as a zero-sum game between an adversary, who selects a modification rule, and a predictor, who wants to accurately predict the state of nature. Our main result is an efficient, near-optimal algorithm for the robust probabilistic inference problem. More specifically, given black-box access to Bayesian inference in the classic (adversary-free) setting, our near-optimal policy runs in time polynomial in the number of observations and the number of possible modification rules.
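As a toy illustration of the zero-sum-game view (a sketch, not the paper's algorithm), the predictor's minimax-optimal mixed strategy in a small finite game can be computed by linear programming. The payoff matrix below is hypothetical: entry (i, j) is the predictor's accuracy when it plays strategy i and the adversary applies modification rule j.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoff matrix: rows = predictor strategies,
# columns = adversary modification rules; entries = prediction accuracy.
payoff = np.array([
    [0.9, 0.2],
    [0.3, 0.8],
])
n_rows, n_cols = payoff.shape

# LP for the row player's maximin strategy:
#   maximize v  subject to  p^T payoff[:, j] >= v for every column j,
#   sum_i p_i = 1, p_i >= 0.
# linprog minimizes, so variables are (p_1..p_n, v) and we minimize -v.
c = np.zeros(n_rows + 1)
c[-1] = -1.0
A_ub = np.hstack([-payoff.T, np.ones((n_cols, 1))])  # v - p^T A[:, j] <= 0
b_ub = np.zeros(n_cols)
A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n_rows + [(None, None)]  # v is unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:n_rows], res.x[-1]
print("predictor mixed strategy:", p)
print("game value (guaranteed accuracy):", v)
```

For this 2x2 example the optimal strategy equalizes the two columns, giving p = (5/12, 7/12) and a guaranteed accuracy of 0.55 whichever modification rule the adversary picks.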
This is joint work with Aviad Rubinstein and Moshe Tennenholtz.