Passing reduced/different feature data to LimeTabularExplainer compared to the original model
I am trying to use the LimeTabularExplainer class and its explain_instance function to find explanations for my LightGBM (lgb) model. However, the lgb model uses a complex feature set that is not interpretable.
I want to pass a subset of the original features (which are interpretable) to the LIME explainer, so that the resulting explanations are also interpretable.
The authors discuss this in sections 3.1 and 3.3 of the original paper: https://arxiv.org/abs/1602.04938
import lime.lime_tabular
import sklearn.ensemble

rf = sklearn.ensemble.RandomForestClassifier(n_estimators=500)
rf.fit(train, labels_train)
explainer = lime.lime_tabular.LimeTabularExplainer(train,
                                                   feature_names=feature_names,
                                                   class_names=target_names,
                                                   discretize_continuous=True)
exp = explainer.explain_instance(test[i], rf.predict_proba,
                                 num_features=2, top_labels=1)
Suppose I need to pass only the first two features, test[i][:2], to the surrogate model. Is there a way to do this in LIME, or even in SHAP?
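One workaround I have been considering (not a built-in LIME feature, just a sketch) is to build the explainer on the reduced data and wrap the model's predict_proba so it pads the missing, non-interpretable features with fixed background values before delegating to the full model. Here the full model is a toy stand-in over 5 features, with the last 3 assumed non-interpretable; all names and values are hypothetical:

```python
import numpy as np

def full_predict_proba(X):
    # Toy stand-in for lgb_model.predict_proba over all 5 features:
    # a logistic function of the row sum.
    score = X.sum(axis=1)
    p1 = 1.0 / (1.0 + np.exp(-score))
    return np.column_stack([1.0 - p1, p1])

# Hypothetical fixed background values for features 2..4
# (e.g. training-set means of the non-interpretable features).
background = np.array([0.0, 0.0, 0.0])

def reduced_predict_proba(X_reduced):
    """Wrapper LIME can call: takes only the 2 interpretable
    features, pads the remaining ones with the background values,
    and delegates to the full model."""
    n = X_reduced.shape[0]
    X_full = np.hstack([X_reduced, np.tile(background, (n, 1))])
    return full_predict_proba(X_full)

# The explainer would then be built on the reduced data, e.g.:
# explainer = lime.lime_tabular.LimeTabularExplainer(train[:, :2], ...)
# exp = explainer.explain_instance(test[i][:2], reduced_predict_proba, ...)
```

The caveat is that the explanation then describes the model's behavior with the remaining features held fixed at the background values, not its full behavior, which may or may not be acceptable.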
Topic lime shap explainable-ai python
Category Data Science