Deep Neural Networks (DNNs) have achieved near-human, and in some cases super-human, accuracy on tasks such as machine translation, image classification, and speech processing. Despite their enormous success, however, these models are often used as black boxes, with very little visibility into their inner workings. This opacity hinders the adoption of these models in mission-critical settings and human-machine hybrid networks.
In this paper, we will explore the role of influence functions in opening up these black-box models and providing interpretability of their output. Influence functions characterize the impact of individual training data points on the model parameters. We will use these functions to analytically understand how the parameters are adjusted during the model training phase to embed the information contained in the training dataset. In other words, influence functions allow us to capture the change in the model parameters attributable to each training point. We will then use this characterization to interpret the model's output on test data points.
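To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the standard influence-function computation for a ridge-regularized linear regression, where the Hessian of the empirical risk is available in closed form. The score for training point i, computed as the negative inner product of the test-point gradient with the Hessian-inverse-times-training-gradient, approximates how the test loss changes when that point is up-weighted; all variable names and the toy data are illustrative assumptions.

```python
import numpy as np

# Toy setup: ridge-regularized linear regression (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, d, lam = 50, 3, 1e-2

X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=n)

# Hessian of the regularized empirical risk and the exact minimizer.
H = X.T @ X / n + lam * np.eye(d)
theta_hat = np.linalg.solve(H, X.T @ y / n)

x_test = rng.normal(size=d)
y_test = x_test @ theta_true

# Per-example gradients of the squared loss at theta_hat.
residuals = X @ theta_hat - y            # shape (n,)
grad_train = X * residuals[:, None]      # row i = x_i * (x_i^T theta_hat - y_i)
grad_test = x_test * (x_test @ theta_hat - y_test)

# Influence of up-weighting training point i on the test loss:
#   I(z_i, z_test) = -grad_test^T H^{-1} grad_train_i
influence = -grad_train @ np.linalg.solve(H, grad_test)
print(influence.shape)  # one score per training point
```

For deep networks the Hessian cannot be inverted explicitly, so the product H⁻¹·grad_test is typically approximated with conjugate-gradient or stochastic estimation; the linear case above just makes the algebra inspectable.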
Supriyo Chakraborty, Jorge Ortiz, and Simon Julier, "Role of influence functions in model interpretability (Conference Presentation)," Proc. SPIE 10635, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX, 1063505 (Presented at SPIE Defense + Security: April 16, 2018; Published: 14 May 2018); https://doi.org/10.1117/12.2306009.