Feature Importance interpretation
I want to audit the results of regressions I ran, and hopefully gain more insight about a treatment effect, through scikit-learn's permutation_importance function or eli5's PermutationImportance. I know these are generally used to narrow down the number of predictors in a model in an attempt to increase its accuracy (feature selection). My specific problem is that I do not want to use feature importance for feature selection, but for direct interpretation of the importance of the variables in my regression (similarly to regression coefficients and their p-values, although I know they are different things). However, I have not seen anybody use feature importance in that way, even though, intuitively speaking, I don't see any issue with it.
Scikit-learn's documentation states the following: "Permutation importance does not reflect the intrinsic predictive value of a feature by itself but how important this feature is for a particular model." But this means that the permutation importance values do reflect the intrinsic predictive value of a feature by itself if and only if the model is good, right? From my understanding, even p-values or regression coefficients relate just as much to the predictive power of the model itself as the permutation feature importance would?
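For concreteness, here is a minimal sketch of what I have in mind, using scikit-learn's permutation_importance (synthetic data from make_regression stands in for my actual dataset; the feature count and noise level are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for my real data: 5 predictors, one of which
# would correspond to the treatment variable I care about.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# Shuffle each feature on held-out data; the drop in the model's
# score (R^2 here) is that feature's permutation importance.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)

# Read the importances directly, as an interpretation aid rather
# than as a feature-selection filter.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

My intent would be to read the mean importances (and their spread across repeats) much like I would read coefficients, as a measure of how much each variable matters to the fitted model.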
Topic feature-importances regression scikit-learn classification
Category Data Science