Understanding DNN predictions with LIME

Daniel R. Ripoll, Sidhartha Chaudhury, Anders Wallqvist

Local Interpretable Model-agnostic Explanations (LIME) is a technique that attempts to explain the predictions of any classifier or regressor by fitting an interpretable model that is locally faithful around a given prediction [69]. In image-classification tasks such as the ones presented in this work, a classifier represents an image as a tensor with three color channels per pixel. LIME analyzes the output of such a classifier to produce an interpretable representation as a binary vector that encodes the presence or absence of contiguous patches of pixels (superpixels) of similar importance.
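As an illustration of this workflow, the sketch below uses the open-source lime Python package (lime_image module) to explain a single prediction of an image classifier. The function classify_fn is a hypothetical placeholder for the trained DNN, and the parameter values (top_labels, hide_color, num_samples, num_features) are illustrative defaults rather than the settings used in this work.

```python
# Sketch: explaining an image classifier's prediction with the "lime" package.
# classify_fn is a hypothetical stand-in for the trained DNN; it must accept a
# batch of RGB images with shape (N, H, W, 3) and return class probabilities (N, C).
import numpy as np
from lime import lime_image

def classify_fn(images: np.ndarray) -> np.ndarray:
    # Placeholder: replace with a call to the trained model's prediction method.
    raise NotImplementedError

explainer = lime_image.LimeImageExplainer()

def explain(image: np.ndarray):
    """Explain the top predicted class for a single (H, W, 3) RGB image."""
    explanation = explainer.explain_instance(
        image,
        classify_fn,        # f: maps perturbed images to class probabilities
        top_labels=1,       # explain only the top predicted class
        hide_color=0,       # value used to "turn off" superpixels in perturbations
        num_samples=1000,   # number of perturbed samples drawn around the instance
    )
    label = explanation.top_labels[0]
    # Recover the superpixels that contribute most to the predicted class.
    img, mask = explanation.get_image_and_mask(
        label, positive_only=True, num_features=5, hide_rest=False
    )
    return img, mask
```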

Let us consider the application of a classification model to a given instance x, and let f(x) be the probability, produced by that model, that x belongs to a particular class. LIME generates an explanation of this assignment as a new model g belonging to a class G of potentially interpretable models. Formally, LIME's explanation is given by the following equation:

\[ \xi(x) = \operatorname*{arg\,min}_{g \in G} \; L(f, g, \pi_x) + \Omega(g) \tag{9} \]

where the loss function L(f, g, πx) measures how unfaithful g is as an approximation of f; πx is a proximity measure between an instance z and x that defines the locality around x; and Ω(g) is a measure of the complexity of the explanation g.

Expression (9) indicates that the explanation obtained by LIME is the model g that minimizes the loss L while keeping the complexity Ω(g) low enough for the explanation to remain interpretable by humans.
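To make the roles of L, πx, and Ω(g) concrete, the following minimal sketch fits a locally weighted linear surrogate around a single instance, in the spirit of Expression (9). The sampling scheme, the exponential kernel width sigma, and the use of a Ridge penalty as a stand-in for Ω(g) are simplifying assumptions for illustration; they are not the exact choices made in this work or in the original LIME implementation.

```python
# Minimal sketch of the optimization in Expression (9): fit a simple, locally
# weighted linear surrogate g to the black-box prediction f around an instance x.
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(f, x, n_samples=1000, sigma=0.75, seed=None):
    """f: callable mapping a binary presence vector z to a class probability f(z).
    x: length-d binary interpretable representation of the instance (all ones)."""
    rng = np.random.default_rng(seed)
    d = x.size
    # 1. Sample perturbations z by randomly switching off interpretable components.
    Z = rng.integers(0, 2, size=(n_samples, d))
    Z[0] = x                                   # keep the original instance
    # 2. Query the black-box model on the perturbed instances.
    y = np.array([f(z) for z in Z])
    # 3. Proximity kernel pi_x(z): exponential kernel on normalized distance to x.
    dist = np.linalg.norm(Z - x, axis=1) / np.sqrt(d)
    pi = np.exp(-(dist ** 2) / sigma ** 2)
    # 4. Minimize the weighted loss L(f, g, pi_x); the Ridge penalty plays the
    #    role of Omega(g) by keeping the surrogate model simple.
    g = Ridge(alpha=1.0)
    g.fit(Z, y, sample_weight=pi)
    return g.coef_                             # per-component importance weights
```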

For additional details of the methodology, the reader is referred to the original publication describing LIME [69].
