A Reinforcement Learning Framework for Explainable Recommendation
- Xiting Wang,
- Yiru Chen,
- Jie Yang,
- Le Wu,
- Zhengtao Wu,
- Xing Xie
IEEE International Conference on Data Mining (ICDM) | Published by IEEE
Explainable recommendation, which provides explanations about why an item is recommended, has attracted increasing attention due to its ability to help users make better decisions and increase their trust in the system. Existing explainable recommendation methods either ignore the working mechanism of the recommendation model or are designed for a specific recommendation model. Moreover, it is difficult for existing methods to ensure the presentation quality of the explanations (e.g., consistency).
To solve these problems, we design a reinforcement learning framework for explainable recommendation. Our framework can explain any recommendation model (model-agnostic) and can flexibly control explanation quality based on the application scenario. To demonstrate the effectiveness of our framework, we show how it can be used to generate sentence-level explanations. Specifically, we instantiate the explanation generator in the framework with a personalized-attention-based neural network. The key idea of the neural network is to specify a distribution over all possible explanations, modeling each explanation's probability based on the user's personalized attention over the sentences. Recurrent units are integrated into the network to model dependent selection of sentences and further improve explanation quality. Offline experiments demonstrate that our method can effectively explain both collaborative filtering methods and deep-learning-based models. Evaluation with human subjects shows that the explanations generated by our method are significantly more useful than those generated by the baselines.
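The abstract describes the core mechanism only at a high level, so the sketch below illustrates one plausible reading of it: a personalized-attention module defines a distribution over candidate sentences, a recurrent unit makes each selection depend on the sentences already chosen, and a REINFORCE-style update lets the reward come from any black-box recommender (keeping the generator model-agnostic). This is not the authors' implementation; all class names, dimensions, and the `reward_fn` interface are illustrative assumptions.

```python
# Minimal sketch of a personalized-attention explanation generator trained with
# policy gradients. NOT the paper's code; names and dimensions are hypothetical.
import torch
import torch.nn as nn


class ExplanationGenerator(nn.Module):
    def __init__(self, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.query = nn.Linear(2 * emb_dim, hidden_dim)  # user+item context -> attention query
        self.key = nn.Linear(emb_dim, hidden_dim)        # candidate sentence -> attention key
        self.gru = nn.GRUCell(emb_dim, 2 * emb_dim)      # models dependent sentence selection

    def forward(self, user_emb, item_emb, sent_embs, n_select=3):
        """Sample n_select sentences; return their indices and log-probabilities."""
        state = torch.cat([user_emb, item_emb], dim=-1)  # personalized context
        chosen, log_probs = [], []
        mask = torch.zeros(sent_embs.size(0))
        for _ in range(n_select):
            q = self.query(state)                        # (hidden_dim,)
            k = self.key(sent_embs)                      # (num_sents, hidden_dim)
            scores = (k @ q).masked_fill(mask.bool(), float("-inf"))
            dist = torch.distributions.Categorical(logits=scores)
            idx = dist.sample()                          # stochastic sentence choice
            chosen.append(idx.item())
            log_probs.append(dist.log_prob(idx))
            mask[idx] = 1                                # forbid repeated sentences
            state = self.gru(sent_embs[idx].unsqueeze(0), state.unsqueeze(0)).squeeze(0)
        return chosen, torch.stack(log_probs)


def reinforce_step(generator, optimizer, user_emb, item_emb, sent_embs, reward_fn):
    """One policy-gradient update; reward_fn queries the black-box recommender."""
    chosen, log_probs = generator(user_emb, item_emb, sent_embs)
    reward = reward_fn(chosen)                           # e.g. faithfulness + presentation quality
    loss = -(reward * log_probs.sum())                   # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return chosen, reward
```

Because the reward is computed from the recommender's outputs rather than its gradients, the same training loop can wrap a collaborative filtering model or a deep model without modification, which is the sense in which the framework is model-agnostic.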