{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Explaining Autopilot Models\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n",
"\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Kernel `Python 3 (Data Science)` works well with this notebook.\n",
"\n",
"_This notebook was created and tested on an ml.m5.xlarge notebook instance._\n",
"\n",
"## Table of Contents\n",
"\n",
"1. [Introduction](#introduction)\n",
"2. [Setup](#setup)\n",
"3. [Local explanation with KernelExplainer](#Local)\n",
"4. [KernelExplainer computation cost](#cost)\n",
"5. [Global explanation with KernelExplainer](#global)\n",
"6. [Conclusion](#conclusion)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Introduction\n",
"Machine learning (ML) models have long been considered black boxes, since their predictions are hard to interpret. While simple models such as decision trees can be interpreted by inspecting the parameters they learn, for most models it is difficult to get a clear picture of how a prediction is made.\n",
"\n",
"Model interpretation can be divided into local and global explanations. A local explanation considers a single sample and answers questions like: \"Why does the model predict that customer A will stop using the product?\" or \"Why did the ML system refuse John Doe a loan?\". Another interesting question is \"What should John Doe change in order to get the loan approved?\". In contrast, global explanations aim to explain the model itself and answer questions like \"Which features are important for prediction?\". It is important to note that local explanations can be used to derive global explanations by averaging over many samples. For further reading on interpretable ML, see the excellent book by [Christoph Molnar](https://christophm.github.io/interpretable-ml-book).\n",
"\n",
"In this notebook, we demonstrate the use of the popular model interpretation framework [SHAP](https://github.com/slundberg/shap) for both local and global interpretation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SHAP"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[SHAP](https://github.com/slundberg/shap) is a game-theoretic framework inspired by [Shapley Values](https://www.rand.org/pubs/papers/P0295.html) that provides local explanations for any model. SHAP has gained popularity in recent years, probably due to its strong theoretical basis. The SHAP package contains several algorithms that, given a sample and a model, derive the SHAP value for each of the model's input features. The SHAP value of a feature represents the feature's contribution to the model's prediction.\n",
"\n",
"To explain models built by [Amazon SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot/) we use SHAP's `KernelExplainer`, which is a black-box explainer. `KernelExplainer` is robust and can explain any model, so it can handle Autopilot's complex feature processing. `KernelExplainer` only requires that the model support an inference function which, given a sample, returns the model's prediction for that sample: the predicted value for regression, or the class probability for classification.\n",
"\n",
"It is worth noting that SHAP includes several other explainers, such as `TreeExplainer` and `DeepExplainer`, that are specific to decision forests and neural networks, respectively. These are not black-box explainers and require knowledge of the model structure and trained parameters. `TreeExplainer` and `DeepExplainer` are limited and currently cannot support any feature processing."
]
},
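{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal illustration of this interface (a sketch using a hypothetical scikit-learn model and toy data, not the Autopilot endpoint used later in this notebook), any callable that maps a batch of samples to predictions can be handed to `KernelExplainer`:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import shap\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"# Toy data and model -- stand-ins for any black-box predictor.\n",
"X = np.random.RandomState(0).normal(size=(200, 4))\n",
"y = (X[:, 0] + X[:, 1] > 0).astype(int)\n",
"model = LogisticRegression().fit(X, y)\n",
"\n",
"# KernelExplainer only needs a function: samples in, predictions out.\n",
"def predict_positive_proba(samples):\n",
"    return model.predict_proba(samples)[:, 1]\n",
"\n",
"explainer = shap.KernelExplainer(predict_positive_proba, shap.sample(X, 50))\n",
"shap_values = explainer.shap_values(X[:1], nsamples='auto')\n",
"```"
]
},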
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Setup\n",
"In this notebook we start with a model that SageMaker Autopilot has already trained on a binary classification task. Please refer to this [notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/autopilot/autopilot_customer_churn.ipynb) to see how to create and train an Autopilot model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import boto3\n",
"import pandas as pd\n",
"import sagemaker\n",
"from sagemaker import AutoML\n",
"from datetime import datetime\n",
"import numpy as np\n",
"\n",
"region = boto3.Session().region_name\n",
"session = sagemaker.Session()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install SHAP"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%conda install -c conda-forge shap"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shap\n",
"\n",
"from shap import KernelExplainer\n",
"from shap import sample\n",
"from scipy.special import expit\n",
"\n",
"# Initialize plugin to make plots interactive.\n",
"shap.initjs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an inference endpoint\n",
"Create an inference endpoint for the trained Autopilot model. Skip this step if an endpoint with the argument `inference_response_keys` set to\n",
"`['predicted_label', 'probability']` has already been created."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_job_name = \"your-autopilot-job-that-exists\"\n",
"automl_job = AutoML.attach(automl_job_name, sagemaker_session=session)\n",
"\n",
"# Endpoint name\n",
"ep_name = \"sagemaker-automl-\" + datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# For the classification response to work with SHAP we need the probability scores. This is achieved by providing a list of keys for the\n",
"# response content. The order of the keys dictates the content order in the response. This parameter is not needed for regression.\n",
"inference_response_keys = [\"predicted_label\", \"probability\"]\n",
"\n",
"# Create the inference endpoint\n",
"automl_job.deploy(\n",
" initial_instance_count=1,\n",
" instance_type=\"ml.m5.2xlarge\",\n",
" inference_response_keys=inference_response_keys,\n",
" endpoint_name=ep_name,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Wrap Autopilot's endpoint with an estimator class\n",
"For ease of use, we wrap the inference endpoint with a custom estimator class. It provides two inference functions: `predict`, which\n",
"returns the numeric prediction value (used for regression), and `predict_proba`, which returns the class probability (used for\n",
"classification)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sagemaker.predictor import Predictor\n",
"\n",
"\n",
"class AutomlEstimator:\n",
" def __init__(self, endpoint_name, sagemaker_session):\n",
" self.predictor = Predictor(\n",
" endpoint_name=endpoint_name,\n",
" sagemaker_session=sagemaker_session,\n",
" serializer=sagemaker.serializers.CSVSerializer(),\n",
" content_type=\"text/csv\",\n",
" accept=\"text/csv\",\n",
" )\n",
"\n",
" def get_automl_response(self, x):\n",
" if x.__class__.__name__ == \"ndarray\":\n",
" payload = \"\"\n",
" for row in x:\n",
" payload = payload + \",\".join(map(str, row)) + \"\\n\"\n",
" else:\n",
" payload = x.to_csv(sep=\",\", header=False, index=False)\n",
" return self.predictor.predict(payload).decode(\"utf-8\")\n",
"\n",
" # Prediction function for regression\n",
" def predict(self, x):\n",
" response = self.get_automl_response(x)\n",
" # we get the first column from the response array containing the numeric prediction value (or label in case of classification)\n",
" response = np.array([x.split(\",\")[0] for x in response.split(\"\\n\")[:-1]])\n",
" return response\n",
"\n",
" # Prediction function for classification\n",
" def predict_proba(self, x):\n",
" \"\"\"Extract and return the probability score from the AutoPilot endpoint response.\"\"\"\n",
" response = self.get_automl_response(x)\n",
" # we get the second column from the response array containing the class probability\n",
" response = np.array([x.split(\",\")[1] for x in response.split(\"\\n\")[:-1]])\n",
" return response.astype(float)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create an instance of `AutomlEstimator`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_estimator = AutomlEstimator(endpoint_name=ep_name, sagemaker_session=session)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data\n",
"In this notebook we use the same dataset as the [Customer Churn notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/autopilot/autopilot_customer_churn.ipynb).\n",
"Please follow the \"Customer Churn\" notebook to download the dataset if it has not been downloaded already.\n",
"\n",
"### Background data\n",
"`KernelExplainer` requires a sample of the data to be used as background data. `KernelExplainer` uses this data to simulate a feature being missing by replacing the feature's value with a random value drawn from the background. We use `shap.sample` to sample 50 rows from the dataset to serve as background data. Using more background samples produces more accurate results, but runtime increases accordingly. Choosing background data is challenging; for further discussion see the [AI Explainability Whitepaper](https://storage.googleapis.com/cloud-ai-whitepapers/AI%20Explainability%20Whitepaper.pdf) and [Alibi's runtime considerations for KernelSHAP](https://docs.seldon.io/projects/alibi/en/latest/methods/KernelSHAP.html#Runtime-considerations). Note that the clustering algorithms provided in SHAP only support numeric data. According to SHAP's documentation, a vector of zeros could be used as background data to produce reasonable results; a brief sketch of these alternatives follows the next cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"churn_data = pd.read_csv(\"../churn.txt\")\n",
"data_without_target = churn_data.drop(columns=[\"Churn?\"])\n",
"\n",
"background_data = sample(data_without_target, 50)"
]
},
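{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, other background choices are possible. The sketch below illustrates two of them; both apply only to all-numeric feature matrices, so for the churn dataset (which also contains categorical columns) we keep the 50-row random sample for the rest of this notebook.\n",
"\n",
"```python\n",
"# Restrict to numeric columns: SHAP's clustering utility and a zero baseline only make sense for numeric data.\n",
"numeric_features = data_without_target.select_dtypes(include='number')\n",
"\n",
"# Option 1: a single all-zero background row, as suggested in SHAP's documentation.\n",
"zero_background = np.zeros((1, numeric_features.shape[1]))\n",
"\n",
"# Option 2: summarize the data into 10 weighted centroids with shap.kmeans.\n",
"kmeans_background = shap.kmeans(numeric_features, 10)\n",
"```"
]
},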
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup KernelExplainer"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we create the `KernelExplainer`. Since it is a black-box explainer, `KernelExplainer` only requires a handle to the\n",
"`predict` (or `predict_proba`) function and does not require any other information about the model. For classification it is recommended to\n",
"derive feature importance scores in the log-odds space, since additivity is a more natural assumption there; we therefore use the `logit` link. For\n",
"regression, `identity` should be used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Derive link function\n",
"problem_type = automl_job.describe_auto_ml_job(job_name=automl_job_name)[\"ResolvedAttributes\"][\n",
" \"ProblemType\"\n",
"]\n",
"link = \"identity\" if problem_type == \"Regression\" else \"logit\"\n",
"\n",
"# the handle to predict_proba is passed to KernelExplainer since KernelSHAP requires the class probability\n",
"explainer = KernelExplainer(automl_estimator.predict_proba, background_data, link=link)"
]
},
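{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick aside on the `logit` link chosen above (a sketch with toy numbers, independent of the endpoint): `logit` maps a probability to log-odds and `expit` maps log-odds back to a probability, and with `link='logit'` the SHAP values and `expected_value` add up (approximately) in the log-odds space.\n",
"\n",
"```python\n",
"import numpy as np\n",
"from scipy.special import logit, expit\n",
"\n",
"p = 0.9                                # an example class probability\n",
"log_odds = logit(p)                    # log(p / (1 - p)), roughly 2.197\n",
"assert np.isclose(expit(log_odds), p)  # expit is the inverse of logit\n",
"\n",
"# With link='logit', SHAP's additivity holds in log-odds space:\n",
"#   expected_value + shap_values.sum() ~= logit(predicted probability)\n",
"# so expit of that sum recovers the predicted probability.\n",
"```"
]
},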
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"By analyzing the background data, `KernelExplainer` provides `explainer.expected_value`, which is the model's prediction when all features are missing. In theory, this is the prediction the model would return for a customer about whom we have no data at all (i.e., all features are missing)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Since expected_value is given in the log-odds space we convert it back to probability using expit which is the inverse function to logit\n",
"print(\"expected value =\", expit(explainer.expected_value))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Local explanation with KernelExplainer\n",
"We will use `KernelExplainer` to explain the prediction of a single sample, the first sample in the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the first sample\n",
"x = data_without_target.iloc[0:1]\n",
"\n",
"# ManagedEndpoint can optionally auto delete the endpoint after calculating the SHAP values. To enable auto delete, use ManagedEndpoint(ep_name, auto_delete=True)\n",
"from managed_endpoint import ManagedEndpoint\n",
"\n",
"with ManagedEndpoint(ep_name) as mep:\n",
" shap_values = explainer.shap_values(x, nsamples=\"auto\", l1_reg=\"aic\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The SHAP package includes many visualization tools. The `force_plot` below provides a good visualization of the SHAP values for a single sample."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Since the SHAP values are in the log-odds space, we pass the logit link so that force_plot displays them in the probability space\n",
"shap.force_plot(explainer.expected_value, shap_values, x, link=link)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the plot above we learn that the most influential feature is `VMail Message`, which pushes the predicted probability down by about 7%. It is\n",
"important to note that `VMail Message = 25` lowers the probability by about 7% relative to the baseline in which that feature is missing.\n",
"SHAP values do not tell us how increasing or decreasing `VMail Message` would affect the prediction.\n",
"\n",
"In many cases we are interested only in the most influential features. By setting `l1_reg='num_features(5)'`, SHAP provides non-zero scores for only the 5 most influential features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with ManagedEndpoint(ep_name) as mep:\n",
" shap_values = explainer.shap_values(x, nsamples=\"auto\", l1_reg=\"num_features(5)\")\n",
"shap.force_plot(explainer.expected_value, shap_values, x, link=link)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## KernelExplainer computation cost\n",
"The computation cost of KernelExplainer is dominated by inference calls. To estimate SHAP values for a single sample, KernelExplainer calls the inference function twice: first with the sample itself, unaugmented, and then with many randomly augmented instances of the sample. In our case the number of augmented instances is 50 (samples in the background data) * 2088 (`nsamples='auto'`) = 104,400, so the cost of running KernelExplainer for a single sample is roughly the cost of 104,400 inference calls."
]
},
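{
"cell_type": "markdown",
"metadata": {},
"source": [
"The short sketch below reproduces this count. It assumes the churn dataset's 20 input features and that `nsamples='auto'` resolves to `2 * n_features + 2**11`, an implementation detail of the current SHAP version that may change.\n",
"\n",
"```python\n",
"n_features = data_without_target.shape[1]      # 20 features in the churn dataset\n",
"n_background = background_data.shape[0]        # 50 background rows\n",
"\n",
"nsamples_auto = 2 * n_features + 2**11         # 2 * 20 + 2048 = 2088 coalition samples\n",
"augmented_rows = n_background * nsamples_auto  # 50 * 2088 = 104,400 rows sent for inference\n",
"print(augmented_rows)\n",
"```"
]
},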
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Global explanation with KernelExplainer\n",
"Next, we use KernelExplainer to provide insight into the model as a whole. We do this by running KernelExplainer locally on 50 samples and aggregating the results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sample 50 random samples\n",
"X = sample(data_without_target, 50)\n",
"\n",
"# Calculate SHAP values for these samples, and delete the endpoint\n",
"with ManagedEndpoint(ep_name, auto_delete=True) as mep:\n",
" shap_values = explainer.shap_values(X, nsamples=\"auto\", l1_reg=\"aic\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`force_plot` can be used to visualize SHAP values for many samples simultaneously by rotating the plot of each sample by 90 degrees and stacking the plots horizontally. The resulting plot is interactive and can be manually analyzed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"shap.force_plot(explainer.expected_value, shap_values, X, link=link)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`summary_plot` is another visualization tool, displaying the mean absolute SHAP value of each feature as a bar plot. Currently, `summary_plot` does not support link functions, so the SHAP values are presented in the log-odds space (not the probability space)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"shap.summary_plot(shap_values, X, plot_type=\"bar\")"
]
},
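{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the ranking behind the bar plot can also be computed directly (a sketch; like the plot, the values are in the log-odds space):\n",
"\n",
"```python\n",
"# Mean absolute SHAP value per feature, in log-odds space.\n",
"mean_abs_shap = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)\n",
"print(mean_abs_shap.sort_values(ascending=False).head(10))\n",
"```"
]
},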
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Conclusion\n",
"\n",
"In this notebook, we demonstrated how to use KernelSHAP to explain models created by Amazon SageMaker Autopilot, both locally and globally. KernelExplainer is a robust black-box explainer that requires only that the model support an inference function which, given a sample, returns the model's prediction for that sample. We provided this inference function by wrapping Autopilot's inference endpoint with a custom estimator class.\n",
"\n",
"For more about Amazon SageMaker Autopilot, please see [Amazon SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot/)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notebook CI Test Results\n",
"\n",
"This notebook was tested in multiple regions. The test results are as follows, except for us-west-2, which is shown at the top of the notebook.\n"
]
}
],
"metadata": {
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
},
"pycharm": {
"stem_cell": {
"cell_type": "raw",
"metadata": {
"collapsed": false
},
"source": []
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}