What are SHAP Values?
SHAP, short for "SHapley Additive exPlanations," is an explainability method with roots in game theory. For each feature of a machine-learning model, a Shapley value can be computed that quantifies how much that feature contributed to the difference between the model's prediction for a given example and the "average" (expected) model prediction.
The SHAP values of all input features always sum to the difference between the observed model output for the example and the baseline (expected) output, hence the "Additive" in the name. SHAP can also provide global explanations by aggregating the Shapley values computed for each data point, and it even lets you condition on a particular data slice to see how feature contributions differ between slices. In the model-agnostic setting, SHAP is straightforward to use for local explainability, but global and cohort computations can be costly without additional assumptions about your model.
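To make the additivity property concrete, here is a minimal brute-force sketch: exact Shapley values for a tiny hand-written model, computed by averaging each feature's marginal contribution over all feature orderings. The toy model, the background point, and the mean-imputation convention for "absent" features are assumptions of this illustration, not part of any particular library (real SHAP implementations approximate this computation far more efficiently).

```python
from itertools import permutations

def model(x):
    # Toy model with an interaction term between x[0] and x[1] (assumed for illustration).
    return 2.0 * x[0] + x[0] * x[1] + 0.5 * x[2]

background = [0.0, 0.0, 0.0]   # stands in for the "average" input
x = [1.0, 2.0, 3.0]            # the example we want to explain

def value(present):
    # Model output when only features in `present` take their real values;
    # the rest fall back to the background (an imputation assumption).
    z = [x[i] if i in present else background[i] for i in range(len(x))]
    return model(z)

n = len(x)
perms = list(permutations(range(n)))
shap_values = [0.0] * n
for order in perms:
    present = set()
    for i in order:
        before = value(present)
        present.add(i)
        # Marginal contribution of feature i given the features added so far.
        shap_values[i] += (value(present) - before) / len(perms)

baseline = model(background)
print(shap_values)
# Additivity: the attributions sum to prediction minus baseline.
print(sum(shap_values), model(x) - baseline)
```

Running this, the per-feature values sum exactly to `model(x) - baseline`, which is the additive guarantee described above; note that the interaction term's credit is split between features 0 and 1.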