- Machine learning (ML) models are often black boxes
- To use an ML model in an industrial setting, it should be fair and reliable
- Metrics such as the accuracy score or the R² score measure reliability, but they say nothing about the fairness of the model
- ML engineers should be able to explain their models and understand the value and accuracy of their findings
- In this project, we interpret a trained ML model using the CXPlain and SHAP libraries
- This project was developed for a #hackingforfuture hackathon organized by the International Center for Networked, Adaptive Production (ICNAP) in cooperation with the Fraunhofer Project Center at the University of Twente
- An ML model is trained on a tabular dataset for a binary classification task
- Using this trained model, feature importances for the input features are calculated with the CXPlain and SHAP model-interpretation libraries (see the sketches after this list)
- The two sets of feature importances are compared quantitatively (a comparison sketch also follows below)
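
The list above describes the full pipeline. Below is a minimal sketch of the training and SHAP steps, assuming a pandas DataFrame `df` with a binary `target` column (both hypothetical; the actual dataset and column names come from the hackathon task). Training uses pycaret, consistent with the dependency list; the random-forest choice is illustrative, not necessarily the model used in the project.

```python
import numpy as np
import shap
from pycaret.classification import setup, create_model, get_config

# Assumed: df is a pandas DataFrame with a binary "target" column.
clf_session = setup(data=df, target="target", session_id=42, silent=True)
model = create_model("rf")  # illustrative choice of classifier

# pycaret keeps the preprocessed train/test splits in its internal config.
X_train = get_config("X_train")
X_test = get_config("X_test")

# SHAP feature attributions for the trained tree model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # for binary RF: one array per class
shap_importance = np.abs(shap_values[1]).mean(axis=0)  # mean |SHAP| for the positive class
```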
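
CXPlain trains a separate explanation model to estimate how much the prediction loss increases when each input feature is masked. The sketch below follows the constructor and `fit`/`explain` calls shown in the CXPlain README and reuses the model and splits from the previous block; one-hot-encoding the labels so that `categorical_crossentropy` matches the `predict_proba` output shape is our assumption.

```python
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.utils import to_categorical
from cxplain import CXPlain, MLPModelBuilder, ZeroMasking

y_train = get_config("y_train")

# Explanation model: a small MLP that predicts per-feature loss increases.
model_builder = MLPModelBuilder(num_layers=2, num_units=32,
                                batch_size=32, learning_rate=0.001)
masking = ZeroMasking()  # features are masked by zeroing them out

cx_explainer = CXPlain(model, model_builder, masking, categorical_crossentropy)
cx_explainer.fit(X_train.values, to_categorical(y_train))  # one-hot labels: our assumption
cx_attributions = cx_explainer.explain(X_test.values)
cxplain_importance = np.abs(cx_attributions).mean(axis=0)
```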
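
For the quantitative comparison, one simple option (an illustrative choice, not necessarily the metric used in this project) is the Spearman rank correlation between the two per-feature importance vectors; `scipy` is available as a dependency of `shap`.

```python
from scipy.stats import spearmanr

# Rank agreement between the SHAP and CXPlain feature-importance vectors.
rho, p_value = spearmanr(shap_importance, cxplain_importance)
print(f"Spearman correlation between importance rankings: {rho:.3f} (p={p_value:.3g})")
```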
- cxplain 1.0.3
- shap 0.37.0
- pycaret 2.3.0
- tensorflow 2.4.1
- plotly 4.14.3
```bash
pip install cxplain==1.0.3
pip install shap==0.37.0
pip install pycaret==2.3.0
pip install tensorflow==2.4.1
pip install plotly==4.14.3
```