
Primary Approaches to Local Interpretation

source: https://bradleyboehmke.github.io/HOML/iml.html

Local Interpretable Model-Agnostic Explanations (LIME)

Assumptions:

  • Every complex model is approximately linear on a local scale;
  • It is possible to fit a simple surrogate model around a single observation that mimics how the global model behaves in that locality;

Algorithm:

  1. Permute training data to create replicated feature data with slight value modifications;

  2. Compute proximity measure (1 - distance) between observation of interest and each of the permuted observations;

  3. Apply the selected machine learning model to predict outcomes of the permuted data;

  4. Select m features that best describe the predicted outcomes; (forward selection, ridge or lasso regression, decision tree)

  5. Fit a simple model to the permuted data, explaining the complex model outcome with the m features from the permuted data, weighted by each permuted observation's similarity to the original observation;

  6. Use the resulting feature weights to explain local behaviour; (see the sketch below)
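
Below is a minimal Python sketch of the steps above, assuming a fitted black-box `model` with a `.predict()` method, a training matrix `X_train`, and an observation of interest `x_obs` (all hypothetical names). It illustrates the idea only and is not the reference LIME implementation; in particular, a simple correlation filter stands in for the feature-selection step.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(model, X_train, x_obs, n_samples=5000,
                    m_features=4, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)

    # 1. Permute: replicate the observation with slight value modifications.
    sigma = X_train.std(axis=0) + 1e-12   # avoid zero-variance columns
    X_perm = x_obs + rng.normal(0.0, sigma, size=(n_samples, x_obs.size))

    # 2. Proximity: similarity decays with distance from x_obs.
    dist = np.linalg.norm((X_perm - x_obs) / sigma, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)

    # 3. Predict outcomes of the permuted data with the complex model.
    y_perm = model.predict(X_perm)

    # 4. Select the m features most associated with the predicted outcomes.
    corr = np.abs([np.corrcoef(X_perm[:, j], y_perm)[0, 1]
                   for j in range(X_perm.shape[1])])
    keep = np.argsort(corr)[-m_features:]

    # 5. Fit a simple surrogate model, weighted by similarity to x_obs.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_perm[:, keep], y_perm, sample_weight=weights)

    # 6. The surrogate's coefficients are the local explanation.
    return dict(zip(keep.tolist(), surrogate.coef_))
```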

Shapley Values

Explains individual predictions by borrowing ideas from coalitional game theory: the prediction is treated as the payout of a game, and each feature's Shapley value is its average marginal contribution across all possible feature coalitions.
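
As a brute-force illustration, the sketch below computes exact Shapley values by enumerating every coalition, which is only feasible for a handful of features; practical tools approximate this with sampling or model-specific shortcuts. `model`, `x_obs`, and `x_baseline` (e.g. the column means of the training data) are hypothetical names.

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(model, x_obs, x_baseline):
    p = x_obs.size
    phi = np.zeros(p)

    def value(coalition):
        # Features in the coalition take the observed values;
        # the rest take the baseline values.
        x = x_baseline.copy()
        x[list(coalition)] = x_obs[list(coalition)]
        return model.predict(x.reshape(1, -1))[0]

    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            for S in combinations(others, size):
                # Shapley weight |S|! (p - |S| - 1)! / p!
                w = factorial(size) * factorial(p - size - 1) / factorial(p)
                # Marginal contribution of feature j to coalition S.
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi
```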

Localized Step-Wise Procedure

The partial dependence algorithm with an added step-wise procedure: features are fixed at their observed values one at a time, and the resulting shift in the average prediction at each step is attributed to the feature just fixed.
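
A minimal sketch of one such step-wise attribution, assuming a fitted `model`, a background matrix `X_train`, and an observation `x_obs` (hypothetical names); the greedy ordering used here is one reasonable choice, not the only one.

```python
import numpy as np

def stepwise_breakdown(model, X_train, x_obs):
    p = x_obs.size
    X = X_train.copy()
    baseline = model.predict(X).mean()        # average model response
    contributions, remaining, prev = {}, set(range(p)), baseline

    while remaining:
        # Try fixing each remaining feature at its observed value and
        # measure the shift in the mean prediction (partial-dependence style).
        shifts = {}
        for j in remaining:
            X_try = X.copy()
            X_try[:, j] = x_obs[j]
            shifts[j] = model.predict(X_try).mean() - prev
        # Greedy step: commit the feature with the largest absolute shift.
        j_best = max(shifts, key=lambda j: abs(shifts[j]))
        X[:, j_best] = x_obs[j_best]
        prev += shifts[j_best]
        contributions[j_best] = shifts[j_best]
        remaining.remove(j_best)

    # Contributions sum to model.predict(x_obs) minus the baseline.
    return baseline, contributions
```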
