Shapash – Interpretability in Python Programming

Explainable AI (XAI) is a set of tools and techniques that help make machine learning models more interpretable, supporting decision-making around model outputs. Popular XAI packages include SHAP, LIME, and InterpretML. Shapash is a newer library for interpreting data science models; it also provides a Web App that enhances the visual interpretation of models. With its easy-to-read visualizations and the Web App for global and local explainability, end users can readily deploy local explainability in production.

How does Shapash work?

Shapash plugs into the different steps of a model workflow to make the results interpretable. It compiles the elements produced at each step; from these inputs it prepares a summary and builds the charts and Web App used to share and discuss the analysis.

The backend of Shapash is mainly powered by SHAP (Shapley values) and LIME. The library works on regression, binary classification, and multi-class problems, and it is compatible with CatBoost, XGBoost, LightGBM, scikit-learn ensembles, linear models, and SVMs, so we can use any of these models. The crux of Shapash lies in two objects: SmartExplainer and SmartPredictor.

Now let us look at the steps to use Shapash. First, build the model; for simplicity, the default parameters are fine. Second, create the SmartExplainer object and compile it. After successful compilation, launch the app. Once the Web App opens in a new browser tab, we can view the model explanation. There are two types of explanations: global and local. Feature importance and feature contributions describe the model as a whole, whereas local explanations interpret individual predictions. Once the model review is complete, we can stop the service.

Benefits of Shapash

Shapash has numerous benefits. Some of them are:
A. Shapash works on both classification and regression models.
B. It is compatible with many ML libraries, models, and feature-encoding schemes.
C. It supports several parameters. To keep the results precise, we can stick with the defaults when generating a report, and we can also control the number of features included.
D. Easy installation (using ‘pip’) and deployment.
E. The final visualizations are understandable by all.
F. Explanations can be saved as pickle files and exported into tables.


Model interpretability has become a major focus for any tool we use. From data mining to deployment, Shapash therefore has many applications: it helps with prediction and with summarized explanations on new datasets, and the explainability summary can be tailored to operational needs. Being open-source, Shapash will surely gain new features that make it even more applicable and further enhance its model interpretability.

