Need for LIME explainer

Is it possible to train a LIME explainer for a binary classifier on a dataset without labels?

I need to understand the value of storing a LIME explainer object that was trained on the same data used to train the model.

In general, does it make sense to keep a trained LIME explainer around to generate explanations in production, or is it better to train the LIME explainer on production data whenever it is needed?

Another question: if I train a LIME explainer on training data and use it with test data, does the LIME explainer suffer from data shift?



Answering the first question: Is it possible to train a LIME explainer for a binary classifier on a dataset without labels?

From https://christophm.github.io/interpretable-ml-book/lime.html: "Local surrogate models are interpretable models that are used to explain individual predictions of black box machine learning models." So, no, you cannot use LIME on a dataset without labels, because you cannot train a classification model in the first place.

So, you need labels. One approach would be to cluster your data and, if structure is found, assign the cluster ids as labels, train a model on them, and then use LIME to interpret the results, as sketched below.
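As an illustration, here is a minimal sketch of that workflow using scikit-learn and the lime package. The synthetic data, the choice of KMeans with two clusters, and the random forest model are all assumptions for the example, not a prescription:

```python
# Sketch: derive pseudo-labels by clustering, train a classifier, then explain with LIME.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X = np.random.rand(500, 4)                     # unlabeled tabular data (placeholder)
feature_names = ["f1", "f2", "f3", "f4"]

# 1. Find structure: cluster into two groups and treat the cluster ids as labels.
pseudo_labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)

# 2. Train a binary classifier on the pseudo-labels.
model = RandomForestClassifier(random_state=0).fit(X, pseudo_labels)

# 3. Fit a LIME explainer on the same training data and interpret one prediction.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["cluster_0", "cluster_1"], mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```

Keep in mind that the explanations then describe why the model predicts a given cluster, so they are only as meaningful as the clustering itself.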

Second part: In general, does it make sense to keep a trained LIME explainer around to generate explanations in production, or is it better to train the LIME explainer on production data whenever it is needed?

As far as I know, there is no clear evidence on whether you should fit LIME on the training data and then use it on the validation/test dataset, or fit LIME on the rows to be explained themselves. I would fit the LIME explainer on the training dataset alongside the corresponding model, and then reuse that same explainer in production to explain individual predictions.
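For concreteness, here is a minimal sketch of that pattern, assuming the lime package and scikit-learn. Persisting the explainer with dill between training and serving is an assumption about the setup, and all data and names are illustrative:

```python
# Sketch: fit the LIME explainer once on the training data and reuse that same
# object to explain new (production) rows later.
import numpy as np
import dill
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # toy binary labels
model = LogisticRegression().fit(X_train, y_train)

# At training time: fit the explainer on the training data, alongside the model.
explainer = LimeTabularExplainer(
    X_train, feature_names=["f1", "f2", "f3"],
    class_names=["neg", "pos"], mode="classification",
)
with open("lime_explainer.dill", "wb") as f:
    dill.dump(explainer, f)

# At serving time: load the stored explainer and explain an incoming row.
with open("lime_explainer.dill", "rb") as f:
    explainer = dill.load(f)
production_row = rng.normal(size=3)
explanation = explainer.explain_instance(production_row, model.predict_proba, num_features=3)
print(explanation.as_list())
```

The explainer only stores statistics of the training data used to perturb instances, so refitting it on every production batch is usually unnecessary unless the data distribution has shifted substantially.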

Generally, this book covers all (or most) topics regarding interpretable ML. (I'm pretty sure the book has a paragraph discussing the use of a trained explainer on the test/train dataset, but I don't remember exactly where.)

https://christophm.github.io/interpretable-ml-book/
