Differences between Feature Importance and SHAP variable importance graph

I have run an XGBClassifier using the following fields:

 - predictive features = ['sbp','tobacco','ldl','adiposity','obesity','alcohol','age']
 - binary target = 'Target'
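
For context, a minimal sketch of the setup described above, assuming the data sits in a pandas DataFrame (the file name and the default hyperparameters are assumptions, not taken from the original run):

```python
import pandas as pd
from xgboost import XGBClassifier

features = ['sbp', 'tobacco', 'ldl', 'adiposity', 'obesity', 'alcohol', 'age']

df = pd.read_csv('heart.csv')          # hypothetical file name
X, y = df[features], df['Target']      # binary target from the post

model = XGBClassifier()                # default hyperparameters assumed
model.fit(X, y)
```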

I have produced the following Feature Importance plot:
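
It was presumably generated with something like xgboost's built-in `plot_importance` (the exact call is an assumption), which by default ranks features by 'weight', i.e. the number of splits each feature appears in:

```python
from xgboost import plot_importance
import matplotlib.pyplot as plt

# Default importance_type is 'weight': how many times a feature is used
# to split the data across all trees in the model.
plot_importance(model)
plt.show()
```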

I understand that, generally speaking, importance provides a score indicating how useful or valuable each feature was in the construction of the boosted decision trees within the model: the more often an attribute is used to make key decisions within the trees, the higher its relative importance. Of the 7 predictive features listed above, only four (age, ldl, tobacco and sbp) appear in the Feature Importance plot. Question: does this mean that the other 3 features (obesity, alcohol and adiposity) were not used at all when building the trees?
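
One way to check this directly (a sketch, reusing the `model` fitted above) is to look at the raw scores the plot is built from; `get_score()` only lists features that appear in at least one split:

```python
# Features missing from this dict were never used in any split.
booster = model.get_booster()
print(booster.get_score(importance_type='weight'))
```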

I have then produced the following SHAP feature importance plot:
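
Presumably something along these lines was used to produce it (the exact calls are an assumption on my part):

```python
import shap

# TreeExplainer works directly on the fitted XGBoost model
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Bar chart of mean(|SHAP value|) per feature
shap.summary_plot(shap_values, X, plot_type='bar')
```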

In this graph, all 7 features appear, but alcohol, obesity and adiposity seem to have little or no importance (consistent with what was observed in the Feature Importance graph). Question: why do those 3 features (obesity, alcohol and adiposity) appear in the SHAP feature importance graph but not in the Feature Importance graph?
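
For reference, the quantity the SHAP bar plot ranks by (mean absolute SHAP value per feature) can be printed and compared with the split counts from `get_score()` above; this is just a readability sketch reusing `shap_values` from the previous snippet:

```python
import numpy as np

# Mean |SHAP value| per feature, sorted from largest to smallest.
mean_abs_shap = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(features, mean_abs_shap), key=lambda t: -t[1]):
    print(f'{name}: {val:.4f}')
```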

Topic feature-importances shap xgboost

Category Data Science
