Explaining Decisions from Vision Models and Correcting them via Human Feedback

Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models deliver superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today’s intelligent systems fail, they fail spectacularly and disgracefully, giving no warning or explanation.

Towards the goal of making deep networks interpretable, trustworthy, and unbiased, in this talk I will present my work on building algorithms that explain decisions made by deep networks, in order to:

  • understand/interpret why the model did what it did,
  • correct unwanted biases learned by AI models, and
  • encourage human-like reasoning in AI.

[Slides]

Date:
Speakers:
Ece Kamar, Ramprasaath Selvaraju
Affiliation:
Microsoft Research, Georgia Institute of Technology