Using ML Interpretability Techniques for Data Analysis Instead of Strictly Model Analysis

Hope you lot are doing alright.

I have been looking into Explainable AI and model interpretability lately, and I had an idea, but I'm wondering whether it constitutes a valid use case.

There is a data analysis project happening at work where we're trying to analyze data we have on hand to determine the factors that affect our KPIs and possibly derive actionable insights. Instead of moving forward with manually evaluating correlations and doing EDA that way, I was thinking of using variable importance measures, subpopulation analyses, and partial dependence profiles to gain insights, or at the very least narrow down the exploratory scope.

I have been scouring the internet for examples of someone using white-box models to expedite analysis tasks, but I haven't found anything that remotely fits the bill, except perhaps this article: https://medium.com/swlh/how-i-used-a-machine-learning-model-to-generate-actionable-insights-3aa1dfe2ddfd

It'd be great if you could let me know what you think.

Cheers,

Topic: interpretation, explainable-ai, data-analysis, machine-learning

Category: Data Science
