{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Using SageMaker Clarify and A2I to create transparent and reliable ML solutions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. [Overview](#Overview)\n", "2. [Prerequisites and Data](#Prerequisites-and-Data)\n", " 1. [Initialize SageMaker](#Initialize-SageMaker)\n", " 1. [Download data](#Download-data)\n", " 1. [Loading the data: Adult Dataset](#Loading-the-data:-Adult-Dataset) \n", " 1. [Data inspection](#Data-inspection) \n", " 1. [Data encoding and upload to S3](#Encode-and-Upload-Training-Data) \n", "3. [Train and Deploy XGBoost Model](#Train-XGBoost-Model)\n", " 1. [Train Model](#Train-Model)\n", " 1. [Deploy Model to Endpoint](#Deploy-Model)\n", "4. [Amazon SageMaker Clarify](#Amazon-SageMaker-Clarify)\n", " 1. [Explaining Predictions](#Explaining-Predictions)\n", " 1. [Viewing the Explainability Report](#Viewing-the-Explainability-Report)\n", "5. [Create Control Plane Resources for A2I](#Create-Control-Plane-Resources)\n", " 1. [Create Human Task UI](#Create-Human-Task-UI)\n", " 2. [Create Flow Definition](#Create-Flow-Definition)\n", "6. [Starting Human Loops](#Scenario-1-:-When-Activation-Conditions-are-met-,-and-HumanLoop-is-created)\n", " 1. [Wait For Workers to Complete Task](#Wait-For-Workers-to-Complete-Task)\n", " 2. [Check Status of Human Loop](#Check-Status-of-Human-Loop)\n", " 3. [View Task Results](#View-Task-Results)\n", "7. [Preparing new groundtruth data based on the reviewed results](#Merge-the-A2I-prediction-results-with-the-test-data-to-generate-GroundTruth)\n", "8. [Clean Up](#Clean-Up)\n", "\n", "\n", "## Overview\n", "\n", "There are two major challenges being faced by customers looking to implement machine learning solutions in their line of business. \n", "1. Machine learning models are getting more and more complex and opaque, which makes it harder to explain the predictions of such models. \n", "2. Machine learning decisions lack the human understanding and collaboration.\n", " \n", "These challenges prevent lot of customers from financial and healthcare industries to implement machine learning solutions in their business critical functions. Amazon Sagemaker clarify and Amazon Augmented AI(A2I) try to solve both of these challenges from different perspectives.\n", "\n", "Amazon SageMaker Clarify helps improve your machine learning models by detecting potential bias and helping explain how these models make predictions. The fairness and explainability functionality provided by SageMaker Clarify takes a step towards enabling AWS customers to build trustworthy and understandable machine learning models.\n", "\n", "At the same time, Amazon A2I provides a way to introduce human review loop step in the machine learning inference pipeline. This greatly improves the trust and reliability in the machine learning process.\n", "\n", "Based on this understanding, in this notebook, we will look at an example of how we can use both SageMaker Clariy and Amazon A2I at the same time in a single machine learning pipeline to improve transparency and introduce reliability in the inference workflows.\n", "\n", "We will use the adult population dataset located at: https://archive.ics.uci.edu/ml/machine-learning-databases/adult/ to determine if a person's salary is greater than $50,000 or less than $50,000.\n", "\n", "\n", "Below are the steps we will perform as part of this notebook: \n", "1. 
Train and deploy an XGBoost model on the Adult population dataset to predict whether a person's salary is greater than $50,000.\n",
"\n",
"1. Run batch inference on the model endpoint and run explainability analysis on the same batch of records.\n",
"\n",
"1. Filter out the negative predictions, as we are interested in knowing why the model predicted a person's salary to be less than $50,000 and which features had the most impact on that prediction.\n",
"\n",
"1. Plot the SHAP values computed by SageMaker Clarify for those negative outcomes, to see which feature(s) contributed the most to the negative prediction.\n",
"\n",
"1. Use an A2I human review workflow that provides the prediction score and the SHAP plot, so a human reviewer can analyze the outcome and verify the feature attributions of the model.\n",
"\n",
"1. Use the reviewed data as ground truth for re-training purposes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites and Data\n",
"\n",
"\n",
"\n",
"### Setup Amazon SageMaker Studio Notebook\n",
"\n",
"1. Onboard to Amazon SageMaker Studio using the quick start (https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html). Please attach the [AmazonAugmentedAIFullAccess](https://console.aws.amazon.com/iam/home#/policies/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAmazonAugmentedAIFullAccess) permissions policy to the IAM role you create during Studio onboarding to run this notebook.\n",
"1. When the user is created and active, click Open Studio.\n",
"1. In the Studio landing page, choose File --> New --> Terminal.\n",
"1. In the terminal, enter the following code:\n",
"    * git clone https://github.com/aws-samples/amazon-sagemaker-clarify-a2i-demo\n",
"1. Open the notebook by choosing “sagemaker-clarify-a2i.ipynb” in the amazon-sagemaker-clarify-a2i-demo folder in the left pane of the Studio landing page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Install open source SHAP library\n",
"\n",
"First, we need to install the [open source SHAP library](https://shap.readthedocs.io/en/latest/index.html), as we will use it later in this notebook to plot the SHAP values computed by SageMaker Clarify. \n",
"\n",
"There are two ways of installing the SHAP library:\n",
"1. If you are using a SageMaker Notebook instance, run `pip install shap` \n",
"2. If you are using a SageMaker Studio notebook, run `conda install -c conda-forge shap`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### If using a SageMaker Studio notebook, execute the cell below; otherwise skip to the next cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "conda install -c conda-forge shap" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### If using a SageMaker Notebook instance, execute the cell below."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pip install shap" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### NOTE\n",
"__You need to restart the kernel after installing the library for the changes to take effect.__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Initialize SageMaker" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import Session\n",
"from sagemaker import get_execution_role\n",
"import pandas as pd\n",
"import numpy as np\n",
"import urllib\n",
"import os\n",
"\n",
"# Define IAM role\n",
"role = get_execution_role()\n",
"\n",
"session = Session()\n",
"bucket = session.default_bucket()\n",
"prefix = 'sagemaker/clarify-a2i-demo'\n",
"region = session.boto_region_name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download data\n",
"Data Source: [https://archive.ics.uci.edu/ml/machine-learning-databases/adult/](https://archive.ics.uci.edu/ml/machine-learning-databases/adult/)\n",
"\n",
"Let's __download__ the data from the UCI repository$^{[2]}$ and save it locally as adult.data and adult.test.\n",
"\n",
"$^{[2]}$Dua Dheeru, and Efi Karra Taniskidou. \"[UCI Machine Learning Repository](http://archive.ics.uci.edu/ml)\". Irvine, CA: University of California, School of Information and Computer Science (2017)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "adult_columns = [\"Age\", \"Workclass\", \"fnlwgt\", \"Education\", \"Education-Num\", \"Marital Status\",\n",
"                 \"Occupation\", \"Relationship\", \"Ethnic group\", \"Sex\", \"Capital Gain\", \"Capital Loss\",\n",
"                 \"Hours per week\", \"Country\", \"Target\"]\n",
"if not os.path.isfile('adult.data'):\n",
"    urllib.request.urlretrieve('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data',\n",
"                               'adult.data')\n",
"    print('adult.data saved!')\n",
"else:\n",
"    print('adult.data already on disk.')\n",
"\n",
"if not os.path.isfile('adult.test'):\n",
"    urllib.request.urlretrieve('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test',\n",
"                               'adult.test')\n",
"    print('adult.test saved!')\n",
"else:\n",
"    print('adult.test already on disk.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Loading the data: Adult Dataset\n",
"From the UCI repository of machine learning datasets, this database contains 14 features concerning demographic characteristics of 48,842 individuals (32,561 for training and 16,281 for testing); 45,222 rows remain once records with missing values are dropped. The task is to predict whether a person has a yearly income that is more or less than $50,000.\n",
"\n",
"Here are the features and their possible values:\n",
"1. **Age**: continuous.\n",
"1. **Workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.\n",
"1. **Fnlwgt**: continuous (the number of people the census takers believe that observation represents).\n",
"1. **Education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.\n",
"1. **Education-num**: continuous.\n",
"1. **Marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.\n",
"1. 
**Occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.\n",
"1. **Relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.\n",
"1. **Ethnic group**: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.\n",
"1. **Sex**: Female, Male.\n",
"    * **Note**: this data is extracted from the 1994 Census and enforces a binary option on Sex\n",
"1. **Capital-gain**: continuous.\n",
"1. **Capital-loss**: continuous.\n",
"1. **Hours-per-week**: continuous.\n",
"1. **Native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.\n",
"\n",
"Next we specify our binary prediction task: \n",
"15. **Target**: <=$50,000, >$50,000." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "training_data = pd.read_csv(\"adult.data\",\n",
"                            names=adult_columns,\n",
"                            sep=r'\\s*,\\s*',\n",
"                            engine='python',\n",
"                            na_values=\"?\").dropna()\n",
"\n",
"testing_data = pd.read_csv(\"adult.test\",\n",
"                           names=adult_columns,\n",
"                           sep=r'\\s*,\\s*',\n",
"                           engine='python',\n",
"                           na_values=\"?\",\n",
"                           skiprows=1).dropna()\n",
"\n",
"training_data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data inspection\n",
"Plotting histograms for the distribution of the different features is a good way to visualize the data. Let's plot a few of the features that can be considered _sensitive_. \n",
"Let's look specifically at the Sex feature of a census respondent. In the first plot we see that there are fewer Female respondents overall, and especially among the positive outcomes, where they form only ~$\\\frac{1}{7}$th of respondents." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "training_data['Sex'].value_counts().sort_values().plot(kind='bar', title='Counts of Sex', rot=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_data['Sex'].where(training_data['Target']=='>50K').value_counts().sort_values().plot(kind='bar', title='Counts of Sex earning >$50K', rot=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Encode and Upload Training Data\n",
"Here we encode the training and test data. Encoding input data is not necessary for SageMaker Clarify, but is necessary for XGBoost models.\n",
"The cell below does the following:\n",
"- Prepare the training data for SageMaker training\n",
"- Prepare the test data\n",
"- Define the batch size, which we will use to create batch predictions\n",
"- Prepare the explainability config data to be used for running the explainability analysis using SageMaker Clarify\n",
"- Perform label encoding\n",
"\n",
"To make this notebook run faster, we will send a batch of 100 records from the test dataset for prediction and use the same batch for generating explanations powered by SageMaker Clarify. \n",
"\n",
"Based on your use-case, you may increase the batch size or send the whole CSV to the endpoint. 
Generally, for a production-grade setup you will not need to create batches yourself, as batch transform can split a large CSV into multiple smaller ones. We are using a small batch of records here purely for demonstration purposes and quick execution.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import preprocessing\n",
"def number_encode_features(df):\n",
"    result = df.copy()\n",
"    encoders = {}\n",
"    for column in result.columns:\n",
"        if result.dtypes[column] == object:\n",
"            encoders[column] = preprocessing.LabelEncoder()\n",
"            result[column] = encoders[column].fit_transform(result[column].fillna('None'))\n",
"    return result, encoders\n",
"\n",
"# preparing the training data with no headers and the target column first\n",
"training_data = pd.concat([training_data['Target'], training_data.drop(['Target'], axis=1)], axis=1)\n",
"training_data, _ = number_encode_features(training_data)\n",
"training_data.to_csv('train_data.csv', index=False, header=False)\n",
"\n",
"# preparing the baseline dataset to be used by SageMaker Clarify for explainability analysis\n",
"baseline_data = training_data.drop(['Target'], axis=1)\n",
"baseline_data.to_csv('baseline_data.csv', index=False, header=False)\n",
"\n",
"\n",
"# now preparing the testing data\n",
"testing_data, _ = number_encode_features(testing_data)\n",
"\n",
"# defining the batch of records to be used for doing batch predictions and calculating SHAP values.\n",
"# You can change this number based on your use-case\n",
"batch_size=100\n",
"\n",
"# preparing the explainability data config csv with the batch of records from testing_data, target column first\n",
"explainability_data = pd.concat([testing_data['Target'], testing_data.drop(['Target'], axis=1)], axis=1)\n",
"explainability_data = explainability_data[:batch_size]\n",
"explainability_data.to_csv('explainability_data_config.csv', index=False, header=False)\n",
"\n",
"\n",
"# writing the entire test dataset to csv\n",
"test_features = testing_data.drop(['Target'], axis=1)\n",
"test_features.to_csv('test_features.csv', index=False, header=False)\n",
"\n",
"\n",
"# prepare the batch of records for performing inference\n",
"test_features_mini_batch = test_features[:batch_size]\n",
"test_features_mini_batch.to_csv('test_features_mini_batch.csv', index=False, header=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick note about our encoding: the \"Female\" Sex value has been encoded as 0 and \"Male\" as 1."
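, "\n",
"\n",
"As an optional sanity check of that mapping, you can re-run the encoder on the raw training file and inspect the fitted classes. This is a minimal sketch that reuses the `number_encode_features` helper defined above:\n",
"\n",
"```python\n",
"# Optional sanity check: re-encode the raw training file and inspect the fitted mapping.\n",
"raw_df = pd.read_csv(\"adult.data\", names=adult_columns, sep=r'\\s*,\\s*',\n",
"                     engine='python', na_values=\"?\").dropna()\n",
"_, encoders = number_encode_features(raw_df)\n",
"print(list(encoders['Sex'].classes_))  # the class at position 0 is encoded as 0, position 1 as 1\n",
"```"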
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lastly, let's upload the train, test and explainability config data to S3" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.s3 import S3Uploader\n",
"from sagemaker.inputs import TrainingInput\n",
"\n",
"\n",
"train_uri = S3Uploader.upload('train_data.csv', 's3://{}/{}'.format(bucket, prefix))\n",
"train_input = TrainingInput(train_uri, content_type='csv')\n",
"\n",
"test_mini_batch_uri = S3Uploader.upload('test_features_mini_batch.csv', 's3://{}/{}'.format(bucket, prefix))\n",
"\n",
"explainability_data_config_uri = S3Uploader.upload('explainability_data_config.csv', 's3://{}/{}'.format(bucket, prefix))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Train XGBoost Model\n",
"#### Train Model\n",
"Since our focus is on understanding how to use SageMaker Clarify, we keep it simple by using a standard XGBoost model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.image_uris import retrieve\n",
"from sagemaker.estimator import Estimator\n",
"\n",
"container = retrieve('xgboost', region, version='1.2-1')\n",
"xgb = Estimator(container,\n",
"                role,\n",
"                instance_count=1,\n",
"                instance_type='ml.m4.xlarge',\n",
"                disable_profiler=True,\n",
"                sagemaker_session=session)\n",
"\n",
"xgb.set_hyperparameters(max_depth=5,\n",
"                        eta=0.2,\n",
"                        gamma=4,\n",
"                        min_child_weight=6,\n",
"                        subsample=0.8,\n",
"                        objective='binary:logistic',\n",
"                        num_round=800)\n",
"\n",
"xgb.fit({'train': train_input}, logs='None', wait=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Deploy Model\n",
"\n",
"Now, let us deploy the model. In this use case and others where model explainability is required, it is typically a backend team running a nightly job that gets the predictions and their explanations and sends them to a workforce for review. For such cases, a SageMaker Batch Transform job is more practical than a real-time endpoint, so we will set up a Batch Transform job for a small set of records from the test dataset to replicate this scenario." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To set up the batch transform, we need to specify the following:\n",
"\n",
"- instance_count: Number of EC2 instances to use.\n",
"- instance_type: Type of EC2 instance to use, for example, 'ml.c5.xlarge'. \n",
"- strategy: The strategy used to decide how to batch records in a single request (default: None). Valid values: 'MultiRecord' and 'SingleRecord'.\n",
"- assemble_with: How the output is assembled (default: None). Valid values: 'Line' or 'None'.\n",
"- output_path: S3 location for saving the transform result. If not specified, results are stored to a default bucket. Note that the output file(s) will be named with '.out' suffixed to the input file(s) names. 
Note that in this case, running the batch transform again will overwrite the existing output unless you provide a different output path each time.\n",
"\n",
"You can also set up an Amazon EventBridge (CloudWatch Events) rule to trigger a batch prediction at a particular time of the day, week or month."
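, "\n",
"\n",
"For example, a minimal scheduling sketch might look like the following. This is an assumption-laden illustration: `lambda_arn` is a placeholder for a Lambda function (not defined in this notebook) that would call the SageMaker `CreateTransformJob` API with your job settings.\n",
"\n",
"```python\n",
"# Hypothetical sketch: fire a rule at 02:00 UTC every day that invokes a Lambda,\n",
"# which in turn starts the batch transform job.\n",
"import boto3\n",
"\n",
"lambda_arn = '<ARN of your Lambda function>'  # placeholder, not created in this notebook\n",
"\n",
"events = boto3.client('events')\n",
"events.put_rule(Name='nightly-batch-predictions', ScheduleExpression='cron(0 2 * * ? *)')\n",
"events.put_targets(Rule='nightly-batch-predictions',\n",
"                   Targets=[{'Id': 'batch-transform-lambda', 'Arn': lambda_arn}])\n",
"```"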
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "transformer_s3_output_path = 's3://{}/{}/predictions'.format(bucket, prefix)\n",
"\n",
"xgb_transformer = xgb.transformer(instance_count=1,\n",
"                                  instance_type='ml.c5.xlarge',\n",
"                                  strategy='MultiRecord',\n",
"                                  assemble_with='Line',\n",
"                                  output_path=transformer_s3_output_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run the Batch Predictions\n",
"\n",
"Now it's time to run the batch predictions. Since the `Transformer` does not provide a non-blocking API to check whether the batch transform job has completed, one of the following options can be chosen:\n",
"- Set up a CloudWatch event that sends an SNS notification when the job completes (recommended for any customer-facing project in production).\n",
"- Call the `wait()` method on the transformer, so that the notebook execution waits for the transform job to complete.\n",
"\n",
"For demonstration purposes, we are using the second option." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xgb_transformer.transform(test_mini_batch_uri, content_type='text/csv', split_type='Line')\n",
"xgb_transformer.wait()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **_NOTE_**\n",
"\n",
"**The output of the model is a prediction score between 0 and 1, which denotes the probability of the person's salary being greater than $50,000.**\n",
"\n",
"**For example:** if the model gives a prediction score of 0.3, it sees a 30% probability that the salary of the person is greater than \\\\$50,000, which is quite a low probability. Similarly, if the prediction score is 0.9, the model finds a 90% probability that the person's salary is greater than $50,000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Amazon SageMaker Clarify\n",
"Now that the predictions have been made, let's set up a processor definition for SageMaker Clarify. To run the explainability analysis on the model, SageMaker Clarify uses SageMaker Processing jobs under the hood.\n",
"\n",
"The first step is to set up a `SageMakerClarifyProcessor`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import clarify\n",
"clarify_processor = clarify.SageMakerClarifyProcessor(role=role,\n",
"                                                      instance_count=1,\n",
"                                                      instance_type='ml.c5.xlarge',\n",
"                                                      sagemaker_session=session)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Writing ModelConfig\n",
"\n",
"Now, you set up the `ModelConfig` object. This object communicates information about your trained model.\n",
"\n",
"**Note**: To avoid additional traffic to your production models, SageMaker Clarify sets up and tears down a temporary endpoint when processing. `ModelConfig` specifies your preferred instance type and instance count used to run your model during Clarify's processing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import clarify\n",
"\n",
"\n",
"model_config = clarify.ModelConfig(model_name=xgb_transformer.model_name,\n",
"                                   instance_type='ml.c5.xlarge',\n",
"                                   instance_count=1,\n",
"                                   accept_type='text/csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Explaining Predictions\n",
"There are expanding business needs and legislative regulations that require explanations of _why_ a model made the decision it did. SageMaker Clarify uses the [KernelSHAP](https://arxiv.org/abs/1705.07874) algorithm to explain the contribution that each input feature makes to the final decision.\n",
"\n",
"To do this, you need to provide the SHAP-related configuration, an S3 output path where the explainability results will be stored, and a data configuration for running the explainability analysis. Note that `explainability_data_config_uri` points to the same batch of records we used for the batch predictions, plus the `Target` column. The cell below does the following:\n",
"- Computes the SHAP baseline to be used in `shap_config`. Here the per-feature mean of the training dataset is used as the baseline record; `baseline_data.csv` is the training dataset without the target column.\n",
"- Sets up `DataConfig`, providing details on where the input data is located, where to store the results, and more.\n",
"\n",
"__NOTE__: The value for `num_samples` is given for demonstration purposes only. To increase the fidelity of the SHAP values, use a larger value for `num_samples`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# use the per-feature mean of the training dataset as the SHAP baseline\n",
"shap_baseline_df = pd.read_csv(\"baseline_data.csv\", header=None)\n",
"shap_baseline = [list(shap_baseline_df.mean())]\n",
"\n",
"# create the SHAPConfig\n",
"shap_config = clarify.SHAPConfig(baseline=shap_baseline,\n",
"                                 num_samples=15,\n",
"                                 agg_method='mean_abs',\n",
"                                 use_logit=True)\n",
"\n",
"explainability_output_path = 's3://{}/{}/explainability'.format(bucket, prefix)\n",
"\n",
"# create the DataConfig\n",
"explainability_data_config = clarify.DataConfig(s3_data_input_path=explainability_data_config_uri,\n",
"                                                s3_output_path=explainability_output_path,\n",
"                                                label='Target',\n",
"                                                headers=training_data.columns.to_list(),\n",
"                                                dataset_type='text/csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run the explainability analysis\n",
"\n",
"Now we are all set. Let us trigger the explainability analysis job. Once the job is finished, the result will be uploaded to the S3 output path set in the previous cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clarify_processor.run_explainability(data_config=explainability_data_config,\n",
"                                     model_config=model_config,\n",
"                                     explainability_config=shap_config)" ] }
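, { "cell_type": "markdown", "metadata": {}, "source": [ "Once the job completes, you can optionally list the artifacts Clarify wrote to the output location. This is a quick, optional check; the exact set of files may vary by SageMaker Clarify version:\n",
"\n",
"```python\n",
"# Optional: list the analysis artifacts (e.g. analysis.json, explanations_shap/out.csv, report.html)\n",
"from sagemaker.s3 import S3Downloader\n",
"print('\\n'.join(S3Downloader.list(explainability_output_path)))\n",
"```" ] }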
, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download the explainability results and batch predictions\n",
"\n",
"Now, download the explainability result data and the batch prediction data to start preparing them for A2I. The cell below does the following:\n",
"- Download the CSV containing the SHAP values for the individual rows passed via `data_config` to the `run_explainability` method\n",
"- Download `analysis.json` from the explainability results, which contains the global SHAP values and the expected base value\n",
"- Download the batch transform prediction results\n",
"- Create a single pandas dataframe containing the predictions and the SHAP values corresponding to them\n",
"- Create a new column named `Prediction` in the same dataframe, with value `1` for prediction scores greater than `0.5` and `0` otherwise, where `0` denotes a salary less than $50,000 and `1` denotes a salary greater than $50,000\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.s3 import S3Downloader\n",
"import json\n",
"\n",
"# read the shap values\n",
"S3Downloader.download(s3_uri=explainability_output_path+\"/explanations_shap\", local_path=\"output\")\n",
"shap_values_df = pd.read_csv(\"output/out.csv\")\n",
"\n",
"# read the inference results\n",
"S3Downloader.download(s3_uri=transformer_s3_output_path, local_path=\"output\")\n",
"predictions_df = pd.read_csv(\"output/test_features_mini_batch.csv.out\", header=None)\n",
"predictions_df = predictions_df.round(5)\n",
"\n",
"# get the base expected value, used later to plot SHAP values\n",
"S3Downloader.download(s3_uri=explainability_output_path+\"/analysis.json\", local_path=\"output\")\n",
"\n",
"with open('output/analysis.json') as json_file:\n",
"    data = json.load(json_file)\n",
"    base_value = data['explanations']['kernel_shap']['label0']['expected_value']\n",
"\n",
"print(\"base value: \", base_value)\n",
"\n",
"predictions_df.columns = ['Probability_Score']\n",
"\n",
"# join the probability score and shap values together in a single data frame\n",
"prediction_shap_df = pd.concat([predictions_df, shap_values_df], axis=1)\n",
"\n",
"# create a new column 'Prediction', converting the probability score to either 1 or 0\n",
"prediction_shap_df.insert(0, 'Prediction', (prediction_shap_df['Probability_Score'] > 0.5).astype(int))\n",
"\n",
"# add a row-number column (the original test-set index), used later for merging the A2I results into the ground truth\n",
"prediction_shap_df['row_num'] = test_features_mini_batch.index" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5 - Set up a human review loop using Amazon A2I" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Amazon Augmented AI (Amazon A2I) makes it easy to build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers.\n",
"\n",
"To incorporate Amazon A2I into your human review workflows you need:\n",
"\n",
"A worker task template to create a worker UI. The worker UI displays your input data, such as documents or images, and instructions to workers. It also provides interactive tools that the worker uses to complete your tasks. For more information, see [A2I instructions overview](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-worker-template-console.html)\n",
"\n",
"A human review workflow, also referred to as a flow definition. 
You use the flow definition to configure your human workforce and provide information about how to accomplish the human review task. To learn more, see [create flow definition](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html)\n",
"\n",
"When using a custom task type, as in this notebook, you start a human loop using the Amazon Augmented AI Runtime API. When you call StartHumanLoop in your custom application, a task is sent to human reviewers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### In this section, you set up a human review loop for the model's negative predictions in Amazon A2I. It includes the following steps:\n",
"\n",
"* Create or choose your workforce\n",
"* Create a human task UI\n",
"* Create the flow definition\n",
"* Trigger conditions for human loop activation\n",
"* Check the human loop status and wait for reviewers to complete the task" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's now initialize some variables that we need in the subsequent steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import io\n",
"import uuid\n",
"import time\n",
"import boto3\n",
"\n",
"timestamp = time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n",
"# Amazon SageMaker client\n",
"sagemaker_client = boto3.client('sagemaker')\n",
"\n",
"# Amazon Augmented AI (A2I) client\n",
"a2i = boto3.client('sagemaker-a2i-runtime')\n",
"\n",
"# Amazon S3 client \n",
"s3 = boto3.client('s3')\n",
"\n",
"# Flow definition name - this value is unique per account and region. You can also provide your own value here.\n",
"flow_definition_name = 'flow-def-clarify-a2i-' + timestamp\n",
"\n",
"# Task UI name - this value is unique per account and region. You can also provide your own value here.\n",
"task_UI_name = 'task-ui-clarify-a2i-' + timestamp\n",
"\n",
"# Flow definition outputs\n",
"flow_definition_output_path = f's3://{bucket}/{prefix}/clarify-a2i-results'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create your workforce\n",
"\n",
"This step requires you to use the AWS Console. You will create a private workteam and add only one user (you) to it. To create a private team:\n",
"\n",
"1. Go to AWS Console > Amazon SageMaker > Labeling workforces\n",
"1. Click \"Private\" and then \"Create private team\".\n",
"1. Enter the desired name for your private workteam.\n",
"1. Enter your own email address in the \"Email addresses\" section.\n",
"1. Enter the name of your organization and a contact email to administer the private workteam.\n",
"1. Click \"Create Private Team\".\n",
"1. The AWS Console should now return to AWS Console > Amazon SageMaker > Labeling workforces. Your newly created team should be visible under \"Private teams\". Next to it you will see an ARN, a long string that looks like arn:aws:sagemaker:region-name-123456:workteam/private-crowd/team-name. **Please enter this ARN in the cell below**\n",
"1. You should get an email from no-reply@verificationemail.com that contains your workforce username and password.\n",
"1. In AWS Console > Amazon SageMaker > Labeling workforces, click on the URL in Labeling portal sign-in URL. Use the email/password combination from Step 8 to log in (you will be asked to create a new, non-default password).\n",
"1. This is your private workers' interface. When you create an A2I task below, the task should appear in this window. 
You can invite your colleagues to participate in the labeling job by clicking the \"Invite new workers\" button." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "workteam_arn = \"\"" ] }
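, { "cell_type": "markdown", "metadata": {}, "source": [ "If you prefer not to copy the ARN from the console, you can also look it up programmatically. This is an optional sketch using the `ListWorkteams` API; pick the team you just created from the printed list:\n",
"\n",
"```python\n",
"# Optional: print the name and ARN of every private work team in this account/region.\n",
"for team in sagemaker_client.list_workteams()['Workteams']:\n",
"    print(team['WorkteamName'], '->', team['WorkteamArn'])\n",
"```" ] }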
\n", "

Instructions

\n", "

Please review the predictions in the Predictions table based on the input data table below, and make corrections where appropriate.

\n", "

Here are the labels:

\n", "

0: Salary is less than $50K

\n", "

1: Salary is greater than $50K

\n", "

NOTE: There is also a column showing the probability score, \n", " which tells you how confident the model is that the person's salary would be greater than $50,000. \n", " Currently every row with probability score greater than 0.5 shows the prediction as 1 \n", " and for rows with probability less than 0.5, the prediction is marked as 0

\n", "

Your task is to look at the prediction, probability score and the SHAP plot to understand which features contributed most to the model's prediction \n", " and the probability of the model suggesting a positive outcome

\n", "
\n", "
\n", "

Adult Population dataset

\n", " \n", "
\n", "
\n", "

Predictions Table

\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", " {% for pair in task.input.Pairs %}\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", " {% endfor %}\n", "\n", "
ROW NUMBERMODEL PREDICTIONPROBABILITY SCORESHAP VALUESAGREE/DISAGREE WITH ML RATING?YOUR PREDICTIONCHANGE REASON
{{ pair.row }}\"shap\n", "

\n", " \n", " \n", "

\n", "

\n", " \n", " \n", "

\n", "
\n", "

\n", " \n", "

\n", "
\n", "

\n", " \n", "

\n", "
\n", "
\n", "\"\"\"\n", "\n", "\n", "\n", "def create_task_ui():\n", " '''\n", " Creates a Human Task UI resource.\n", "\n", " Returns:\n", " struct: HumanTaskUiArn\n", " '''\n", " response = sagemaker_client.create_human_task_ui(\n", " HumanTaskUiName=task_UI_name,\n", " UiTemplate={'Content': template})\n", " return response" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create task UI\n", "human_task_UI_response = create_task_ui()\n", "\n", "human_task_Ui_arn = human_task_UI_response['HumanTaskUiArn']\n", "\n", "print(human_task_Ui_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create the Flow Definition\n", "In this section, we're going to create a flow definition definition. Flow Definitions allow us to specify:\n", "- The workforce that your tasks will be sent to. \n", "- The instructions that your workforce will receive. This is called a worker task template. \n", "- Where your output data will be stored.\n", "\n", "This demo is going to use the API, but you can optionally create this workflow definition in the console as well. For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_workflow_definition_response = sagemaker_client.create_flow_definition(\n", " FlowDefinitionName= flow_definition_name,\n", " RoleArn= role,\n", " HumanLoopConfig= {\n", " \"WorkteamArn\": workteam_arn,\n", " \"HumanTaskUiArn\": human_task_Ui_arn,\n", " \"TaskCount\": 1,\n", " \"TaskDescription\": \"Review the model predictions and SHAP values and determine if you agree or disagree. Assign a label of 1 to indicate positive result or 0 to indicate a negative result based on your review of the prediction, probability and SHAP values\",\n", " \"TaskTitle\": \"Using Clarify and A2I\"\n", " },\n", " OutputConfig={\n", " \"S3OutputPath\" : flow_definition_output_path\n", " }\n", " )\n", "\n", "flow_definition_arn = create_workflow_definition_response['FlowDefinitionArn']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Describe flow definition - status should be active\n", "for x in range(60):\n", " describe_flow_definition_response = sagemaker_client.describe_flow_definition(FlowDefinitionName=flow_definition_name)\n", " print(describe_flow_definition_response['FlowDefinitionStatus'])\n", " if (describe_flow_definition_response['FlowDefinitionStatus'] == 'Active'):\n", " print(\"Flow Definition is active\")\n", " break\n", " time.sleep(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Trigger human loop for all predictions with a negative outcome\n", "\n", "We would like to send all the predictions with a negative outcome, to an Amazon A2I Human loop. We would like to check which features contributed to the model prediction while predicting a person's salary to be less than \\\\$50,000. This can help identify if the model is only giving negative outcome for people belonging to a certain gender or ethnicity group etc. We will also be showing the probability scores along with the predictions and SHAP plots. This is to give complete visibility to the reviewer about how confident the model was, while making a certain prediction." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "negative_outcomes_df = prediction_shap_df[prediction_shap_df.iloc[:, 0] == 0].copy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot the SHAP values computed by SageMaker Clarify for the negative outcomes\n",
"\n",
"Now, plot the SHAP values for each of the negative outcomes, export the plots as images, and upload them to an S3 location. These images will be rendered in the task review template along with the predictions.\n",
"\n",
"Also, to make it easy to access the S3 path of the image corresponding to each prediction, append the image's S3 URI to the same dataframe that holds the predictions and SHAP values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n",
"import shap\n",
"import matplotlib.pyplot as plt\n",
"\n",
"column_list = list(test_features_mini_batch.columns)\n",
"\n",
"# local folder for the exported SHAP plots\n",
"os.makedirs('shap_images', exist_ok=True)\n",
"\n",
"s3_uris = []\n",
"for i in range(len(negative_outcomes_df)):\n",
"    # positional index of this negative outcome within the mini batch, so that the\n",
"    # feature values shown in the plot match the SHAP values being plotted\n",
"    row_pos = negative_outcomes_df.index[i]\n",
"    explanation_obj = shap.Explanation(values=negative_outcomes_df.iloc[i, 2:-1].to_numpy(),\n",
"                                       base_values=base_value,\n",
"                                       data=test_features_mini_batch.iloc[row_pos].to_numpy(),\n",
"                                       feature_names=column_list)\n",
"    shap.plots.waterfall(shap_values=explanation_obj, max_display=4, show=False)\n",
"    img_name = 'shap-' + str(i) + '.png'\n",
"    plt.savefig('shap_images/' + img_name, bbox_inches='tight')\n",
"    plt.close()\n",
"    s3_uri = S3Uploader.upload('shap_images/' + img_name, 's3://{}/{}/shap_images'.format(bucket, prefix))\n",
"    s3_uris.append(s3_uri)\n",
"\n",
"\n",
"negative_outcomes_df['shap_image_s3_uri'] = s3_uris" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"{len(negative_outcomes_df)} out of {len(predictions_df)} samples or \" +\n",
"      '{:.1%} of the predictions will be sent to review.'.format(len(negative_outcomes_df)/len(predictions_df)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Trigger the Human Review Loop\n",
"\n",
"Now, all is set to trigger the human review loop. 
The cell below will:\n",
"- Pick a set of negative-outcome records (for example, 3 records at a time)\n",
"- Create a human review loop for them, showing all records of the set in a single template\n",
"- Wait until the reviewers have completed their tasks\n",
"- Append the details of each completed human review loop to a list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n",
"import time\n",
"\n",
"# Note: the probability score is the model's probability (0 to 1) that the salary is\n",
"# greater than $50K; all rows selected here carry the discrete label 0 (salary <= $50K)\n",
"\n",
"prediction_list = negative_outcomes_df['Prediction'].tolist()\n",
"\n",
"probability_score_list = negative_outcomes_df['Probability_Score'].tolist()\n",
"\n",
"row_num_list = negative_outcomes_df['row_num'].tolist()\n",
"\n",
"NUM_TO_REVIEW = len(negative_outcomes_df) # You can change this number as desired\n",
"\n",
"completed_human_loops = []\n",
"\n",
"step_size = 3\n",
"\n",
"for i in range(0, NUM_TO_REVIEW, step_size):\n",
"    start_idx = i\n",
"    end_idx = min(i + step_size, NUM_TO_REVIEW)\n",
"\n",
"    item_list = [{'row': \"{}\".format(row_num_list[j]), 'prediction': prediction_list[j], 'probability_score': probability_score_list[j], 'shap_image_s3_uri': s3_uris[j]} for j in range(start_idx, end_idx)]\n",
"\n",
"    ip_content = {'Pairs': item_list}\n",
"\n",
"    humanLoopName = str(uuid.uuid4())\n",
"    start_loop_response = a2i.start_human_loop(\n",
"        HumanLoopName=humanLoopName,\n",
"        FlowDefinitionArn=flow_definition_arn,\n",
"        HumanLoopInput={\n",
"            \"InputContent\": json.dumps(ip_content)\n",
"        }\n",
"    )\n",
"\n",
"    print(\"Task - \" + str(i) + \" submitted. Now, navigate to the private worker portal and perform the tasks. Make sure you've invited yourself to your workteam!\")\n",
"\n",
"    response = a2i.describe_human_loop(HumanLoopName=humanLoopName)\n",
"    status = response[\"HumanLoopStatus\"]\n",
"    while status != \"Completed\":\n",
"        print(\"Task still in progress, waiting 10 more seconds for reviewers to complete the task...\")\n",
"        time.sleep(10)\n",
"        response = a2i.describe_human_loop(HumanLoopName=humanLoopName)\n",
"        status = response[\"HumanLoopStatus\"]\n",
"\n",
"    print(\"Human Review Loop for Task - \" + str(i) + \" completed\")\n",
"    completed_human_loops.append(response)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect the results of the human review tasks and start preparing the ground-truth labels."
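, "\n",
"\n",
"For orientation, one A2I output document has roughly the following shape. This is an illustrative sketch based on the fields the parsing code below relies on; the values are made-up examples, and real documents contain additional metadata fields:\n",
"\n",
"```python\n",
"# Illustrative shape of one A2I output document\n",
"example_output = {\n",
"    'inputContent': {'Pairs': [{'row': '7', 'prediction': 0,\n",
"                                'probability_score': 0.21387,\n",
"                                'shap_image_s3_uri': 's3://<bucket>/<prefix>/shap_images/shap-0.png'}]},\n",
"    'humanAnswers': [{'answerContent': {'rating1': {'agree': True, 'disagree': False}}}]\n",
"}\n",
"```"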
We will also start preparing the groundtruth labels" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "import pprint\n", "\n", "pp = pprint.PrettyPrinter(indent=4)\n", "\n", "groundtruth_labels = {}\n", "\n", "for resp in completed_human_loops:\n", " splitted_string = re.split('s3://' + bucket + '/', resp['HumanLoopOutput']['OutputS3Uri'])\n", " output_bucket_key = splitted_string[1]\n", "\n", " response = s3.get_object(Bucket=bucket, Key=output_bucket_key)\n", " content = response[\"Body\"].read()\n", " json_output = json.loads(content)\n", " \n", " j=1\n", " for i in range(0, step_size):\n", " if json_output['humanAnswers'][0]['answerContent']['rating{}'.format(j)]['agree'] == True:\n", " groundtruth_labels[json_output['inputContent']['Pairs'][i]['row']] = 0\n", " else:\n", " groundtruth_labels[json_output['inputContent']['Pairs'][i]['row']] = 1\n", " j = j +1\n", "\n", "json_output" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Merge the A2I prediction results with the test data to generate GroundTruth \n", "\n", "Since the predictions have been reviewed by human reviewers with analysis provided by SageMaker Clarify, we can treat these predictions as groundtruth data for further re-training purposes.\n", "\n", "So, let us merge the A2I predictions with the batch of testdata used earlier.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "new_training_data = testing_data[:batch_size]\n", "\n", "new_training_data['row_num'] = test_features_mini_batch.index\n", "\n", "\n", "for row in groundtruth_labels:\n", " new_training_data.loc[(new_training_data.row_num == int(row)), 'Target'] = groundtruth_labels[row]\n", "\n", "\n", "new_training_data.to_csv('new_training_data.csv', index=False, header=True)\n", "\n", "S3Uploader.upload('new_training_data.csv', 's3://{}/{}'.format(bucket, prefix))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Clean Up\n", "Finally, don't forget to clean up the resources we set up and used for this demo!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session.delete_model(model_name)" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-central-1:936697816551:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }