SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations).

SHAP can be installed from either PyPI (`pip install shap`) or conda-forge (`conda install -c conda-forge shap`).

Tree ensemble example (XGBoost/LightGBM/CatBoost/scikit-learn/pyspark models)

While SHAP can explain the output of any machine learning model, we have developed a high-speed exact algorithm for tree ensemble methods (see our Nature MI paper). Fast C++ implementations are supported for XGBoost, LightGBM, CatBoost, scikit-learn, and pyspark tree models:

```python
import xgboost
import shap

# train an XGBoost model
X, y = shap.datasets.california()
model = xgboost.XGBRegressor().fit(X, y)

# explain the model's predictions using SHAP
# (same syntax works for LightGBM, CatBoost, scikit-learn, transformers, Spark, etc.)
explainer = shap.Explainer(model)
shap_values = explainer(X)

# visualize the first prediction's explanation
shap.plots.waterfall(shap_values[0])
```

The above explanation shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red; those pushing it lower are in blue.
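The same explanation can also be rendered as a force plot. A minimal sketch, continuing in the same session as the example above (so `shap_values` is already defined) and assuming a Jupyter notebook where JavaScript output is available:

```python
# load the JS visualization library (needed for interactive force plots in notebooks)
shap.initjs()

# render the first prediction's explanation as a force plot;
# shap.plots.force accepts a single row of the Explanation object
shap.plots.force(shap_values[0])
```

Outside a notebook, passing `matplotlib=True` to `shap.plots.force` renders a static version of the plot instead.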
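For background, the classic Shapley value referenced above assigns each feature its average marginal contribution over all subsets of the other features. One standard statement of the definition (in game-theoretic notation, not this library's API): for a player set N and value function v, where in the SHAP setting v(S) is the expected model output when only the features in S are known,

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}
    \left[ v(S \cup \{i\}) - v(S) \right]
```

The fast tree algorithm mentioned above computes these values exactly for tree ensembles rather than estimating them by sampling subsets.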