{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Deriving inference insights for your ML models and sending low confidence predictions to human review workflows using Amazon SageMaker Model Monitor and Amazon A2I\n", "\n", "When a ML model is deployed in production, monitoring the model is important for maintaining the quality of predictions. While the statistical properties of the training data are known in advance, incoming, real-life data can gradually deviate over time and negatively impact predictive power of your model, a phenomenon known as data drift. Detecting these conditions in production can be challenging and time-consuming, requiring a system that captures incoming real-time data, performs statistical analyses, defines rules to detect drift, and sends alerts for rule violations. Furthermore, the process must be repeated for every new iteration of the model.\n", "\n", "Amazon SageMaker Model Monitor enables you to efficiently and continuously monitor machine learning models in production. You can set alerts to detect deviations in the model quality and proactively take corrective actions, such as retraining models, auditing upstream systems, or fixing data quality issues. You can use insights from Amazon Model Monitor to choose ML inferences to send to humans for review using Amazon Augmented AI (Amazon A2I). Amazon A2I makes it easy to integrate a human review into your machine learning workflow. This allows you to automatically have humans step in and review data when a model is unable to make a high confidence prediction or to audit model predictions on an on-going basis.\n", "\n", "In this post we show how to setup a ML workflow on Amazon SageMaker to train a XGBoost algorithm for Breast Cancer prediction. We will then deploy the model with a real-time endpoint, capture a fraction of the data sent to the endpoint, create a baseline from the training dataset, launch a model monitoring schedule, review baseline constraints and statistics, and trigger a human review loop for below threshold predictions. We will then show how the human loop workers review/update the predictions that can be used to update your original training dataset for model re-training.\n", "\n", "\n", "## Contents\n", "\n", "1. [Preprocess your input dataset](#Preprocess_your_input_dataset)\n", "1. [Train and deploy a XGBoost Model](#Step_2_-_Train_and_deploy_a_XGBoost_Model)\n", "1. [Generate baselines and start an Amazon SageMaker Model Monitor](#Step_3_-_Start_the_Amazon_SageMaker_Model_Monitor)\n", "1. [Review the model monitor reports and derive insights](#Step_4_-_Review_the_model_monitor_reports_and_derive_insights)\n", "1. [Setup Human Review loops using Amazon A2I](#Step_5_-_Setup_Human_Review_loops_using_Amazon_A2I)\n", "1. [Cleaning up](#Clean_up)\n", "1. [Conclusion](#Conclusion)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\n", "\n", "### Create your workforce\n", "\n", "This step requires you to use the AWS Console. We will create a private workteam and add only one user (you) to it. To create a private team:\n", "\n", "1. Go to AWS Console > Amazon SageMaker > Labeling workforces\n", "1. Click \"Private\" and then \"Create private team\".\n", "1. Enter the desired name for your private workteam.\n", "1. Enter your own email address in the \"Email addresses\" section.\n", "1. Enter the name of your organization and a contact email to administer the private workteam.\n", "1. Click \"Create Private Team\".\n", "1. 
The AWS Console should now return to AWS Console > Amazon SageMaker > Labeling workforces. Your newly created team should be visible under \"Private teams\". Next to it you will see an ARN, which is a long string that looks like arn:aws:sagemaker:region-name-123456:workteam/private-crowd/team-name. **Please enter this ARN in the cell below**\n", "1. You should get an email from no-reply@verificationemail.com that contains your workforce username and password.\n", "1. In AWS Console > Amazon SageMaker > Labeling workforces, click on the URL in Labeling portal sign-in URL. Use the email/password combination from Step 8 to log in (you will be asked to create a new, non-default password).\n", "1. This is your private worker's interface. When we create a human review task later in this notebook, the task will appear in this portal. You can invite your colleagues to participate in the labeling job by clicking the \"Invite new workers\" button.\n", "\n", "\n", "### Set up the Amazon SageMaker Studio notebook\n", "\n", "1. Onboard to Amazon SageMaker Studio using the quick start (https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html). Please attach the [AmazonAugmentedAIFullAccess](https://console.aws.amazon.com/iam/home#/policies/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAmazonAugmentedAIFullAccess) permissions policy to the IAM role you create during Studio onboarding to run this notebook.\n", "1. When the user is created and active, click Open Studio.\n", "1. In the Studio landing page, choose File --> New --> Terminal.\n", "1. In the terminal, enter the following code:\n", " * git clone https://github.com/aws-samples/amazon-a2i-sample-jupyter-notebooks\n", "1. Open the notebook by choosing \"Amazon-A2I-with-Amazon-SageMaker-Model-Monitor.ipynb\" in the amazon-a2i-sample-jupyter-notebooks folder in the left pane of the Studio landing page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1 - Preprocess your input dataset\n", "\n", "Let's start by specifying the S3 bucket and prefix that you want to use for training and model data. This should be within the same Region as the notebook, training, and hosting."
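 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to confirm the Region first, the standard boto3 session exposes it (a quick sanity check; the bucket you specify below should live in the same Region):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "\n", "# The S3 bucket you configure below should be in the same Region as this session\n", "print(boto3.Session().region_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now provide your bucket, prefix, and the private workteam ARN you created in the prerequisites."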
] }, { "cell_type": "code", "execution_count": null, "metadata": { "isConfigCell": true, "tags": [ "parameters" ] }, "outputs": [], "source": [ "bucket = '<your s3 bucket name>'\n", "prefix = '<enter a prefix for your notebook execution>'\n", "WORKTEAM_ARN= \"<enter the ARN of your private labeling workforce>\"\n", " \n", "# Define IAM role\n", "import boto3\n", "import re\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "\n", "role = get_execution_role()\n", "print(\"RoleArn: {}\".format(role))\n", "sess = sagemaker.Session() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Lets import some data science libraries and the Amazon SageMaker Python SDK" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np # For matrix operations and numerical processing\n", "import pandas as pd # For munging tabular data\n", "import matplotlib.pyplot as plt # For charts and visualizations\n", "from IPython.display import Image # For displaying images in the notebook\n", "from IPython.display import display # For displaying outputs in the notebook\n", "from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.\n", "import sys # For writing outputs to notebook\n", "import math # For ceiling function\n", "import json # For parsing hosting outputs\n", "import os # For manipulating filepath names \n", "from sagemaker.predictor import csv_serializer # Converts strings for HTTP POST requests on inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Lets load our dataset\n", "\n", "Before creating the template, we will load a tabular dataset, split the data into train and test, store the test data in Amazon S3, and train a machine learning model. The dataset we use is on Breast Cancer prediction and can be found [here](http://archive.ics.uci.edu/ml). \n", "\n", "Reference: [1] Dua, D. and Graff, C. (2019). UCI Machine Learning Repository ]. Irvine, CA: University of California, School of Information and Computer Science.\n", "\n", "Based on the input features, we will first train a model to detect a benign or malignant label." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_breast_cancer\n", "from sklearn.model_selection import train_test_split\n", "\n", "def generatedf(split_ratio):\n", " \"\"\"Loads the dataset into a dataframe and generates train/test splits\"\"\"\n", " data = load_breast_cancer()\n", " df = pd.DataFrame(data.data, columns = data.feature_names)\n", " df['label'] = data.target\n", " cols = list(df.columns)\n", " cols = cols[-1:] + cols[:-1]\n", " df = df[cols]\n", " train, test = train_test_split(df, test_size=split_ratio, random_state=42)\n", " return train, test\n", "\n", "train_data, test_data = generatedf(0.2)\n", "train_data.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#create a separate dataset for Model Monitoring schedule\n", "mm_data = test_data.drop(['label'],axis=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#store the datasets locally\n", "train_data.to_csv('train.csv',index = None, header=None)\n", "test_data.to_csv('test.csv', index = None, header=None)\n", "mm_data.to_csv('mm.csv', index = None, header=None)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# load the data into S3\n", "sess.upload_data('train.csv', bucket=bucket, key_prefix=os.path.join(prefix, 'train'))\n", "sess.upload_data('test.csv', bucket=bucket, key_prefix=os.path.join(prefix, 'test'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because we're training with the CSV file format, we'll create s3_inputs that our training function can use as a pointer to the files in S3, which also specifies that the content type is CSV." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#load the train and test data filenames from Amazon S3\n", "s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/train'.format(bucket, prefix), content_type='csv')\n", "s3_input_validation = sagemaker.s3_input(s3_data='s3://{}/{}/test/'.format(bucket, prefix), content_type='csv')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_data.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Amazon SageMaker's XGBoost container expects data in the libSVM or CSV data format. For this example, we'll stick to CSV. Note that the first column must be the target variable and the CSV should not include headers. Also, notice that although repetitive it's easiest to do this after the train|validation|test split rather than before. This avoids any misalignment issues due to random reordering." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2 - Train and deploy a XGBoost model\n", "\n", "The `XGBoost` (eXtreme Gradient Boosting) is a popular and efficient open-source implementation of the gradient boosted trees algorithm. Gradient boosting is a supervised learning algorithm that attempts to accurately predict a target variable by combining an ensemble of estimates from a set of simpler and weaker models. The XGBoost algorithm performs well in machine learning competitions because of its robust handling of a variety of data types, relationships, distributions, and the variety of hyperparameters that you can fine-tune. 
You can use XGBoost for regression, classification (binary and multiclass), and ranking problems.\n", "\n", "You can use the new release of the XGBoost algorithm either as an Amazon SageMaker built-in algorithm or as a framework to run training scripts in your local environments. Using the built-in algorithm version of XGBoost is simpler than using the open source version, because you don’t have to write a training script. If you don’t need the features and flexibility of open source XGBoost, consider using the built-in version. For information about using the Amazon SageMaker XGBoost built-in algorithm, see [XGBoost Algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html) in the Amazon SageMaker Developer Guide.\n", "\n", "First we'll need to specify the ECR container location for Amazon SageMaker's implementation of XGBoost." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.amazon.amazon_estimator import get_image_uri\n", "container = get_image_uri(boto3.Session().region_name, 'xgboost', '1.0-1')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create the XGBoost Estimator\n", "\n", "Now that we have the XGBoost container, we use it to construct an estimator using the SageMaker Estimator API and initiate a training job. This XGBoost built-in algorithm runs directly on the input datasets. Amazon SageMaker XGBoost currently only trains using CPUs. It is a memory-bound (as opposed to compute-bound) algorithm. So, a general-purpose compute instance (for example, M5) is a better choice than a compute-optimized instance (for example, C4). Further, we recommend that you have enough total memory in selected instances to hold the training data. Although it supports the use of disk space to handle data that does not fit into main memory (the out-of-core feature available with the libsvm input mode), writing cache files onto disk slows the algorithm processing time.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sess = sagemaker.Session()\n", "\n", "xgb = sagemaker.estimator.Estimator(container,\n", " role, \n", " train_instance_count=1, \n", " train_instance_type='ml.m5.2xlarge',\n", " output_path='s3://{}/{}/output'.format(bucket, prefix),\n", " sagemaker_session=sess)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Specify Hyperparameters\n", "\n", "Hyperparameters are set by users to facilitate the estimation of model parameters from data. \n", "\n", "- `max_depth` Controls how deep each tree within the algorithm can be built. Deeper trees can lead to better fit, but are more computationally expensive and can lead to overfitting. Typically, you need to explore some trade-offs in model performance between a large number of shallow trees and a smaller number of deeper trees.\n", "- `subsample` Controls sampling of the training data. This hyperparameter can help reduce overfitting, but setting it too low can also starve the model of data.\n", "- `num_round` Controls the number of boosting rounds. Each subsequent round trains a new model on the residuals of the previous rounds. Again, more rounds should produce a better fit on the training data, but can be computationally expensive or lead to overfitting.\n", "- `eta` Controls how aggressive each round of boosting is, by scaling the contribution of each new tree. Smaller values lead to more conservative boosting.\n", "- `gamma` Controls how aggressively trees are grown. 
Larger values lead to more conservative models.\n", "- `min_child_weight` Also controls how aggressively trees are grown. Large values lead to a more conservative model.\n", "\n", "For other hyperparameters and more details, please refer to [XGBoost Parameters](https://xgboost.readthedocs.io/en/release_0.90/parameter.html#parameters-for-tree-booster) " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xgb.set_hyperparameters(max_depth=5,\n", " eta=0.2,\n", " gamma=4,\n", " min_child_weight=6,\n", " subsample=0.8,\n", " silent=0,\n", " objective='binary:logistic',\n", " num_round=100)\n", "\n", "xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Deploy the XGBoost Model\n", "\n", "Now that we've trained the `XGBoost` algorithm on our data, let's deploy a model that's hosted behind a real-time endpoint. As a first step, let's specify the paths to the Amazon S3 locations for storing data, reports, and processing code." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Bucket is: {bucket}\")\n", "print(f\"Prefix is: {prefix}\")\n", "data_capture_prefix = '{}/datacapture'.format(prefix)\n", "s3_capture_upload_path = 's3://{}/{}'.format(bucket, data_capture_prefix)\n", "reports_prefix = '{}/reports'.format(prefix)\n", "s3_report_path = 's3://{}/{}'.format(bucket,reports_prefix)\n", "code_prefix = '{}/code'.format(prefix)\n", "\n", "print(\"Capture path: {}\".format(s3_capture_upload_path))\n", "print(\"Report path: {}\".format(s3_report_path))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now specify the capture option called [DataCaptureConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_DataCaptureConfig.html). You can capture the request payload, the response payload, or both with this configuration. The capture configuration applies to all variants." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.model_monitor import DataCaptureConfig\n", "\n", "endpoint_name = 'xgb-breast-cancer-' + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n", "print(\"EndpointName={}\".format(endpoint_name))\n", "\n", "data_capture_config = DataCaptureConfig(\n", " enable_capture=True,\n", " sampling_percentage=100,\n", " destination_s3_uri=s3_capture_upload_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After you fit an XGBoost Estimator, you can host the newly created model in SageMaker. After you call fit, you can call deploy on an XGBoost estimator to create a SageMaker endpoint. The endpoint runs a SageMaker-provided XGBoost model server that hosts the model produced by the training job you launched when you called fit. \n", "\n", "The deploy function returns a Predictor object, which you can use to do inference on the endpoint hosting your XGBoost model. Each Predictor provides a predict method that can do inference with numpy arrays, Python lists, or strings. The arrays or lists are serialized and sent to the XGBoost model server, and predict returns the result of inference against your model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** We are specifying an ml.m5.2xlarge instance below for our endpoint. This will incur charges for the duration this endpoint is active. 
For more details, please see [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xgb_predictor = xgb.deploy(initial_instance_count=1,\n", " instance_type='ml.m5.2xlarge',\n", " endpoint_name=endpoint_name,\n", " data_capture_config=data_capture_config)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xgb_predictor.content_type = 'text/csv'\n", "xgb_predictor.serializer = csv_serializer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Invoke the deployed model using the endpoint\n", "\n", "You can now send data to this endpoint to get inferences in real time. Because you enabled the data capture in the previous steps, the request and response payload, along with some additional metadata, is saved in the Amazon S3 location that you specified in DataCaptureConfig." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# EXISTING-ENDPOINT\n", "# Use this code to instantiate the predictor object if you've already created an earlier endpoint and it's running. Provide the correct endpoint name below and uncomment both code lines below\n", "# endpoint_name = \"xgb-breast-cancer-2020-07-22-21-33-23\"\n", "# xgb_predictor_2 = sagemaker.predictor.RealTimePredictor(endpoint=endpoint_name, content_type='text/csv')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "def invoke_endpoint(predictor, data_df):\n", " \"\"\"Sends each row of data_df to the endpoint as a CSV payload and returns the predictions\"\"\"\n", " print(\"Sending test traffic to the endpoint {}. \\nPlease wait...\".format(endpoint_name))\n", " \n", " predictions = ''\n", " i = 0\n", " for row in data_df.to_numpy():\n", " payload = \",\".join([str(num) for num in row])\n", " response = predictor.predict(payload)\n", " if i % 10 == 0:\n", " print(response) # print every tenth response as a progress check\n", " predictions = ','.join([predictions, response.decode('utf-8')])\n", " i = i + 1\n", "\n", " return np.fromstring(predictions[1:], sep=',')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# if this is not your first time running this notebook and you already have an endpoint running, \n", "# please execute the cell marked EXISTING-ENDPOINT above following the instructions provided in comments and\n", "# replace the xgb_predictor variable name below accordingly\n", "predictions = invoke_endpoint(xgb_predictor, test_data[list(test_data.columns)[1:]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data capture config we specified earlier for the endpoint should have captured the input sent to the endpoint and the output received from it, along with some metadata. Let's verify that the capture files were delivered to S3.\n", "**Note:** The upload of capture files might take a minute even if the endpoint invocation step above is complete. If you get an error when you execute the cell below, give it a minute and then try again."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3_client = boto3.Session().client('s3')\n", "current_endpoint_capture_prefix = '{}/{}'.format(data_capture_prefix, endpoint_name)\n", "capture_files = sess.list_s3_files(bucket, current_endpoint_capture_prefix)\n", "while True:\n", " if capture_files:\n", " print(\"Found Capture Files:\")\n", " print(\"\\n \".join(capture_files))\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets now review the content of the S3 objects " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "\n", "def get_obj_body(obj_key):\n", " return s3_client.get_object(Bucket=bucket, Key=obj_key).get('Body').read().decode(\"utf-8\")\n", "\n", "capture_file = get_obj_body(capture_files[-1])\n", "\n", "print(json.dumps(json.loads(capture_file.split('\\n')[0]), indent=2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Understanding the prediction result\n", "\n", "The objective we defined in the Hyperparameters for model training was `binary:logistic`. The model will apply logistic regression for binary classification, with output as probability. The probability refers to the log likelihood of the bernoulli distribution. For more details refer to [Bernoulli Distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution). In our example above the value of 0.96 in the endpointOutput indicates a 96% probability for classification into a Label of 1 denoting a malignant result. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3 - Start the Amazon SageMaker Model Monitor\n", "\n", "Amazon SageMaker Model Monitor continuously monitors the quality of Amazon SageMaker machine learning models in production. It enables developers to set alerts for when there are deviations in the model quality, such as data drift. Early and pro-active detection of these deviations enables you to take corrective actions, such as retraining models, auditing upstream systems, or fixing data quality issues without having to monitor models manually or build additional tooling. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a Baseline\n", "\n", "The baseline calculations of statistics and constraints are needed as a standard against which data drift and other data quality issues can be detected. Amazon SageMaker Model Monitor provides a built-in container that provides the ability to suggest the constraints automatically for CSV and flat JSON input.\n", "\n", "The training dataset that you used to train the model is usually a good baseline dataset. The training dataset data schema and the inference dataset schema should exactly match (the number and order of the features). From the training dataset, you can ask Amazon SageMaker to suggest a set of baseline constraints and generate descriptive statistics to explore the data. For this example, upload the training dataset that was used to train the pretrained model included in this example. If you already have it in Amazon S3, you can point to it directly." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# copy over the training dataset to Amazon S3 (if you already have it in Amazon S3, you could reuse it)\n", "baseline_prefix = prefix + '/baselining'\n", "baseline_data_prefix = baseline_prefix + '/data'\n", "baseline_results_prefix = baseline_prefix + '/results'\n", "\n", "baseline_data_uri = 's3://{}/{}'.format(bucket,baseline_data_prefix)\n", "baseline_results_uri = 's3://{}/{}'.format(bucket, baseline_results_prefix)\n", "print('Baseline data uri: {}'.format(baseline_data_uri))\n", "print('Baseline results uri: {}'.format(baseline_results_uri))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Upload the training file for baselining\n", "training_data_file = open(\"train.csv\", 'rb')\n", "s3_key = os.path.join(baseline_prefix, 'data', 'train.csv')\n", "boto3.Session().resource('s3').Bucket(bucket).Object(s3_key).upload_fileobj(training_data_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Start the baseline job\n", "from sagemaker.model_monitor import DefaultModelMonitor\n", "from sagemaker.model_monitor.dataset_format import DatasetFormat\n", "\n", "my_default_monitor = DefaultModelMonitor(\n", " role=role,\n", " instance_count=1,\n", " instance_type='ml.m5.4xlarge',\n", " volume_size_in_gb=100,\n", " max_runtime_in_seconds=3600,\n", ")\n", "\n", "my_default_monitor.suggest_baseline(\n", " baseline_dataset=baseline_data_uri+'/train.csv',\n", " dataset_format=DatasetFormat.csv(header=False), \n", " output_s3_uri=baseline_results_uri,\n", " wait=True,\n", " logs=False\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inspect baseline job results\n", "\n", "Now the baseline job has completed, lets inspect the results. Two files are generated:\n", "- `statistics.json` This file is expected to have columnar statistics for each feature in the dataset that is analyzed. See the schema for this file in the [Schema for Statistics](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-statistics.html).\n", "- `constraints.json` This file is expected to have the constraints on the features observed. See the schema for this file in the [Schema for Constraints](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-constraints.html)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#s3_client = boto3.Session().client('s3')\n", "#result = s3_client.list_objects(Bucket=bucket, Prefix=baseline_results_prefix)\n", "result = sess.list_s3_files(bucket, baseline_results_prefix)\n", "print(\"Found Files:\")\n", "print(\"\\n \".join(result))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets inspect the contents of the statstics.json file for a couple of entries" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# if header was set to False for the baselining creation function, the column names will look like \"_c0,\" \"_c1,\" etc.\n", "# Let's print the statistics for a couple of rows\n", "import pandas as pd\n", "\n", "baseline_job = my_default_monitor.latest_baselining_job\n", "schema_df = pd.json_normalize(baseline_job.baseline_statistics().body_dict[\"features\"])\n", "schema_df.head(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets inspect the contents of the constraints.json file for a couple of entries. 
This should contain the constraints applied for each of the columns. In our case, we see that the non-negative constraint is applied" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "constraints_df = pd.json_normalize(baseline_job.suggested_constraints().body_dict[\"features\"])\n", "constraints_df.head(2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's check the monitoring configuration specified\n", "constraints_mon_df = pd.json_normalize(baseline_job.suggested_constraints().body_dict[\"monitoring_config\"])\n", "constraints_mon_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Monitoring Configuration determines the Monitor's actions. For more details please refer to [Model Monitor documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html)\n", "\n", "- `emit_metrics` Amazon SageMaker emits CloudWatch metrics for each feature/column observed in the dataset in the /aws/sagemaker/Endpoints/data-metric namespace with EndpointName and ScheduleName dimensions\n", "- `datatype_check_threshold` During the baseline step, the generated constraints suggest the inferred data type for each column. The monitoring_config.datatype_check_threshold parameter can be tuned to adjust the threshold on when it is flagged as a violation\n", "- `domain_content_threshold` If there are more unknown values for a String field in the current dataset than in the baseline dataset, this threshold can be used to dictate if it needs to be flagged as a violation\n", "- `distribution_constraints.comparison_threshold` **This value is used to calculate model drift.** If the computed distance exceeds the value set for comparison_threshold, it is treated as a violation in the violation report. In our case, Model Monitor uses the comparison method of \"Robust\" based on the [two-sample K-S test](https://en.m.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) to quantify the distance between the empirical distribution of our test dataset and the cumulative distribution of the baseline dataset.\n", "\n", "For more details on baseline constraints please refer to [Schema for Constraints](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-constraints.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create Monitoring Schedule\n", "\n", "With a Monitoring Schedule, Amazon SageMaker can kick off processing jobs at a specified frequency to analyze the data collected during a given period. Amazon SageMaker provides a pre-built container for performing analysis on tabular datasets. In the processing job, Amazon SageMaker compares the dataset for the current analysis with the baseline statistics and constraints provided, and generates a violations report. In addition, CloudWatch metrics are emitted for each feature under analysis. 
Let's create a monitoring schedule that runs hourly" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.model_monitor import CronExpressionGenerator\n", "from time import gmtime, strftime\n", "\n", "mon_schedule_name = 'xgb-breast-cancer-a2i-blog-monitor-schedule-' + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n", "my_default_monitor.create_monitoring_schedule(\n", " monitor_schedule_name=mon_schedule_name,\n", " endpoint_input=xgb_predictor.endpoint,\n", " output_s3_uri=s3_report_path,\n", " statistics=my_default_monitor.baseline_statistics(),\n", " constraints=my_default_monitor.suggested_constraints(),\n", " schedule_cron_expression=CronExpressionGenerator.hourly(),\n", " enable_cloudwatch_metrics=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** Please ensure that the max runtime (`max_runtime_in_seconds`) for your Model Monitor is smaller than the cron schedule cadence you specify. Otherwise the CreateMonitoringSchedule operation fails with the error: stopping condition should be smaller than schedule cadence" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's invoke the endpoint continuously to generate traffic for the model monitor to pick up" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from threading import Thread\n", "from time import sleep\n", "import time\n", "\n", "endpoint_name=xgb_predictor.endpoint\n", "runtime_client = boto3.client('runtime.sagemaker')\n", "\n", "# Note: this redefines the invoke_endpoint helper used earlier; the traffic thread below uses this version\n", "def invoke_endpoint(ep_name, file_name, runtime_client):\n", " with open(file_name, 'r') as f:\n", " for row in f:\n", " payload = row.rstrip('\\n')\n", " response = runtime_client.invoke_endpoint(EndpointName=ep_name,\n", " ContentType='text/csv', \n", " Body=payload)\n", " response['Body'].read()\n", " time.sleep(1)\n", " \n", "def invoke_endpoint_forever():\n", " while True:\n", " invoke_endpoint(endpoint_name, 'mm.csv', runtime_client)\n", " \n", "thread = Thread(target = invoke_endpoint_forever)\n", "thread.start()\n", "\n", "# Note that you need to stop the kernel to stop the invocations" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check the Monitor Schedule status\n", "desc_schedule_result = my_default_monitor.describe_schedule()\n", "print('Schedule status: {}'.format(desc_schedule_result['MonitoringScheduleStatus']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 4 - Review Model Monitoring Execution Output" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** We set up a monitoring schedule to run hourly, so we need to wait and periodically check the execution output from the Model Monitor. We will check the status in 5-minute intervals. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check execution status every 5 minutes\n", "mon_executions = my_default_monitor.list_executions()\n", "print(\"We created a hourly schedule above and it will kick off executions ON the hour (plus 0 - 20 min buffer.\\nWe will check execution status every 5 minutes...\")\n", "\n", "while len(mon_executions) == 0:\n", " print(\"Waiting for Model Monitor to pick up execution results...\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime()))\n", " time.sleep(300)\n", " mon_executions = my_default_monitor.list_executions() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets look at the latest execution status and print the report name" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "latest_execution = mon_executions[-1] # latest execution's index is -1, second to last is -2 and so on..\n", "latest_execution.wait(logs=False)\n", "\n", "print(\"Latest execution status: {}\".format(latest_execution.describe()['ProcessingJobStatus']))\n", "print(\"Latest execution result: {}\".format(latest_execution.describe()['ExitMessage']))\n", "\n", "latest_job = latest_execution.describe()\n", "if (latest_job['ProcessingJobStatus'] != 'Completed'):\n", " print(\"====STOP==== \\n No completed executions to inspect further. Please wait till an execution completes or investigate previously reported failures.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "report_uri=latest_execution.output.destination\n", "print('Report Uri: {}'.format(report_uri))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Review Violations Report\n", "\n", "The violations file is generated as the output of a MonitoringExecution, which lists the results of evaluating the constraints (specified in the constraints.json file) against the current dataset that was analyzed. The Amazon SageMaker Model Monitor pre-built container provides the following violation checks:\n", "\n", "- `data_type_check` \n", "- `completeness_check`\n", "- `baseline_drift_check`\n", "- `missing_column_check`\n", "- `extra_column_check`\n", "- `categorical_values_check`\n", "\n", "For more details about the violation checks please refer to the Model Monitor documentation [here](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-violations.html)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "violations = my_default_monitor.latest_monitoring_constraint_violations()\n", "constraints_df = pd.json_normalize(violations.body_dict[\"violations\"])\n", "constraints_df.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As can be seen above, the model monitor detected a data_type_check violation in one of the requests sent to the endpoint. Data Drift or Model Drift occurs if a **baseline_drift_check** violation is triggered. So we do not see a model drift with our endpoint. To enable proactive action on these metrics, please check the documentation [here](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-cloudwatch.html) for how to emit these metrics to Amazon Cloudwatch. You can also visualize the results of monitoring in Amazon SageMaker Studio. For information about the onboarding process for using Studio, see [Onboard to Amazon SageMaker Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-studio-onboard.html). 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# View distribution of output probabilities\n", "from matplotlib import pyplot as plt\n", "plt.xlim([-0.1, 1.1])\n", "bin_size=0.05\n", "bins = np.arange(-0.1, 1.1, bin_size) # fixed bin size\n", "\n", "plt.hist(predictions, bins=bins, alpha=0.5)\n", "plt.title('Distribution of probabilities')\n", "plt.xlabel(f'probabilities (bin size = {bin_size})')\n", "plt.ylabel('count')\n", "\n", "plt.show()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Evaluate results\n", "\n", "There are two perspectives to be considered here to determine next steps for our experiment. \n", "\n", "- **`Model Monitor Violations`** We only saw the datatype_check violation from the Model Monitor. We did not see a model drift violation. In our case, Model Monitor uses the comparison method of \"Robust\" based on the [two-sample K-S test](https://en.m.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) to quantify the distance between the empirical distribution of our test dataset and the cumulative distribution of the baseline dataset. This distance did not exceed the value set for the \"comparison_threshold\". The prediction results are aligned with the results in the training dataset. For more details refer to [Model Monitor Interpret Results](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-violations.html)\n", "\n", "- **`Probability Distribution of Prediction results`** We used a test dataset of 114 requests. Out of this, we see that the model predicts 60% of the requests to be benign (> 90% probability output in the prediction results, where a label of 1 denotes benign), 30% malignant (< 10% probability output in the prediction results), and the remaining 10% of the requests are indeterminate, as shown in the chart above.\n", "\n", "As a next step, you need to send the predictions with output probabilities between 10% and 90% (where the model is unable to predict with sufficient confidence) to a domain expert who can look at the model results and identify whether the tumor is benign or malignant. You use Amazon A2I to set up a human review workflow and define conditions for activating the review loop." ] },
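 { "cell_type": "markdown", "metadata": {}, "source": [ "The fractions quoted above can be checked directly from the `predictions` array (a quick sketch using the same 0.1/0.9 cutoffs applied below):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Fractions of confident vs. indeterminate predictions (label 1 = benign in this dataset)\n", "conf_benign = np.mean(predictions > 0.9)\n", "conf_malignant = np.mean(predictions < 0.1)\n", "indeterminate = 1 - conf_benign - conf_malignant\n", "print(f\"benign: {conf_benign:.0%}, malignant: {conf_malignant:.0%}, indeterminate: {indeterminate:.0%}\")" ] },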
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5 - Set up a human review loop for low-confidence detection using Amazon A2I" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Amazon Augmented AI (Amazon A2I) makes it easy to build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers.\n", "\n", "To incorporate Amazon A2I into your human review workflows you need:\n", "\n", "A worker task template to create a worker UI. The worker UI displays your input data, such as documents or images, and instructions to workers. It also provides interactive tools that the worker uses to complete your tasks. For more information, see [A2I instructions overview](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-worker-template-console.html)\n", "\n", "A human review workflow, also referred to as a flow definition. You use the flow definition to configure your human workforce and provide information about how to accomplish the human review task. To learn more see [create flow definition](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html)\n", "\n", "When using a custom task type, you start a human loop using the Amazon Augmented AI Runtime API. When you call StartHumanLoop in your custom application, a task is sent to human reviewers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### In this section, you set up a human review loop for low-confidence detections in Amazon A2I. It includes the following steps:\n", "\n", "* Create or choose your workforce\n", "* Create a human task UI\n", "* Create the flow definition\n", "* Set trigger conditions for human loop activation\n", "* Check the human loop status and wait for reviewers to complete the task" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's now initialize some variables that we need in the subsequent steps" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import io\n", "import uuid\n", "import time\n", "\n", "timestamp = time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "# Amazon SageMaker client\n", "sagemaker_client = boto3.client('sagemaker')\n", "\n", "# Amazon Augmented AI (A2I) client\n", "a2i = boto3.client('sagemaker-a2i-runtime')\n", "\n", "# Amazon S3 client \n", "s3 = boto3.client('s3')\n", "\n", "# Flow definition name - this value is unique per account and region. You can also provide your own value here.\n", "flowDefinitionName = 'fd-xgb-breast-cancer-' + timestamp\n", "\n", "# Task UI name - this value is unique per account and region. You can also provide your own value here.\n", "taskUIName = 'ui-xgb-breast-cancer-' + timestamp\n", "\n", "# Flow definition outputs\n", "OUTPUT_PATH = f's3://{bucket}/{prefix}/a2i-results'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create the human task UI\n", "\n", "Create a human task UI resource, providing a UI template in Liquid HTML. This template will be rendered to the human workers whenever a human loop is required. For over 70 pre-built UIs, check: https://github.com/aws-samples/amazon-a2i-sample-task-uis" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "template = r\"\"\"\n", "<script src=\"https://assets.crowd.aws/crowd-html-elements.js\"></script>\n", "\n", "<style>\n", " table, tr, th, td {\n", " border: 1px solid black;\n", " border-collapse: collapse;\n", " padding: 5px;\n", " }\n", "</style>\n", "\n", "<crowd-form>\n", " <div>\n", " <h1>Instructions</h1>\n", " <p>Please review the predictions in the Predictions table based on the input data table below, and make corrections where appropriate. 
</p>\n", " <p> Here are the labels: </p>\n", " <p> 0: Benign </p>\n", " <p> 1: Malignant </p>\n", " </div>\n", " <div>\n", " <h3> Breast cancer dataset </h3>\n", " <div id=\"my_table\"> {{ task.input.table | skip_autoescape }} </div>\n", " </div>\n", " <br>\n", " <h1> Predictions Table </h1>\n", " <table>\n", " <tr>\n", " <th>ROW NUMBER</th>\n", " <th>MODEL PREDICTION</th>\n", " <th>AGREE/DISAGREE WITH ML RATING?</th>\n", " <th>YOUR PREDICTION</th>\n", " <th>CHANGE REASON </th>\n", " </tr>\n", "\n", " {% for pair in task.input.Pairs %}\n", "\n", " <tr>\n", " <td>{{ pair.row }}</td>\n", " <td><crowd-text-area name=\"predicted{{ forloop.index }}\" value=\"{{ pair.prediction }}\"></crowd-text-area></td>\n", " <td>\n", " <p>\n", " <input type=\"radio\" id=\"agree{{ forloop.index }}\" name=\"rating{{ forloop.index }}\" value=\"agree\" required>\n", " <label for=\"agree{{ forloop.index }}\">Agree</label>\n", " </p>\n", " <p>\n", " <input type=\"radio\" id=\"disagree{{ forloop.index }}\" name=\"rating{{ forloop.index }}\" value=\"disagree\" required>\n", " <label for=\"disagree{{ forloop.index }}\">Disagree</label> \n", " </p> \n", " </td>\n", " <td>\n", " <p>\n", " <input type=\"text\" name=\"True Prediction\" placeholder=\"Enter your Prediction\" />\n", " </p>\n", " </td>\n", " <td>\n", " <p>\n", " <input type=\"text\" name=\"Change Reason\" placeholder=\"Explain why you changed the prediction\" />\n", " </p>\n", " </td>\n", " </tr>\n", "\n", " {% endfor %}\n", "\n", " </table>\n", "</crowd-form>\n", "\"\"\"\n", "\n", "def create_task_ui():\n", " '''\n", " Creates a Human Task UI resource.\n", "\n", " Returns:\n", " struct: HumanTaskUiArn\n", " '''\n", " response = sagemaker_client.create_human_task_ui(\n", " HumanTaskUiName=taskUIName,\n", " UiTemplate={'Content': template})\n", " return response" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create task UI\n", "humanTaskUiResponse = create_task_ui()\n", "humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']\n", "print(humanTaskUiArn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create the Flow Definition\n", "In this section, we're going to create a flow definition definition. Flow Definitions allow us to specify:\n", "- The workforce that your tasks will be sent to. \n", "- The instructions that your workforce will receive. This is called a worker task template. \n", "- Where your output data will be stored.\n", "\n", "This demo is going to use the API, but you can optionally create this workflow definition in the console as well. For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_workflow_definition_response = sagemaker_client.create_flow_definition(\n", " FlowDefinitionName= flowDefinitionName,\n", " RoleArn= role,\n", " HumanLoopConfig= {\n", " \"WorkteamArn\": WORKTEAM_ARN,\n", " \"HumanTaskUiArn\": humanTaskUiArn,\n", " \"TaskCount\": 1,\n", " \"TaskDescription\": \"Review the model predictions and determine if you agree or disagree. 
Assign a label of 1 to indicate a benign result or 0 to indicate a malignant result, based on your review of the inference request\",\n", " \"TaskTitle\": \"Using Model Monitor and A2I Demo\"\n", " },\n", " OutputConfig={\n", " \"S3OutputPath\" : OUTPUT_PATH\n", " }\n", " )\n", "flowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # let's save this ARN for future use" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Describe flow definition - status should be active\n", "for x in range(60):\n", " describeFlowDefinitionResponse = sagemaker_client.describe_flow_definition(FlowDefinitionName=flowDefinitionName)\n", " print(describeFlowDefinitionResponse['FlowDefinitionStatus'])\n", " if (describeFlowDefinitionResponse['FlowDefinitionStatus'] == 'Active'):\n", " print(\"Flow Definition is active\")\n", " break\n", " time.sleep(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Set trigger conditions for human loop activation\n", "\n", "As we discussed before, we see from the probability distribution of prediction results that predictions with output probabilities above 10% and below 90% are made without sufficient confidence and should be reviewed. So we set up the trigger condition for the Amazon A2I human loop to cover this range." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# assign our original test dataset \n", "model_data_categorical = test_data[list(test_data.columns)[1:]] \n", "\n", "LOWER_THRESHOLD = 0.1\n", "UPPER_THRESHOLD = 0.9\n", "# .copy() avoids pandas' SettingWithCopyWarning when we add the prediction column\n", "small_payload_df = model_data_categorical.head(len(predictions)).copy()\n", "small_payload_df['prediction_prob'] = predictions\n", "small_payload_df_res = small_payload_df.loc[\n", " (small_payload_df['prediction_prob'] > LOWER_THRESHOLD) &\n", " (small_payload_df['prediction_prob'] < UPPER_THRESHOLD)\n", "]\n", "print(small_payload_df_res.shape)\n", "small_payload_df_res.head(10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"{len(small_payload_df_res)} out of {len(predictions)} samples or \" +\n", " '{:.1%} of the payload was sent to review.'.format(len(small_payload_df_res)/len(predictions)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Note that the prediction is a probability from 0 to 1 for a discrete label of 1, which indicates a benign result in this dataset\n", "low_conf_predictions = small_payload_df_res['prediction_prob'].to_list()\n", "NUM_TO_REVIEW = len(low_conf_predictions) # You can change this number as desired\n", "item_list = [{'row': \"ROW_{}\".format(x), 'prediction': low_conf_predictions[x]} for x in range(NUM_TO_REVIEW)]\n", "item_list" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ip_content = {\"table\": small_payload_df_res.reset_index().drop(columns = ['index']).head(NUM_TO_REVIEW).to_html(), \n", " 'Pairs': item_list\n", " }" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Activate human loops\n", "import json\n", "humanLoopName = str(uuid.uuid4())\n", "\n", "start_loop_response = a2i.start_human_loop(\n", " HumanLoopName=humanLoopName,\n", " FlowDefinitionArn=flowDefinitionArn,\n", " HumanLoopInput={\n", " \"InputContent\": json.dumps(ip_content)\n", " }\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "### Check status of task completion and human loop\n", "\n", "Let's define a function that allows us to check the status of Human Loop progress" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "completed_human_loops = []\n", "resp = a2i.describe_human_loop(HumanLoopName=humanLoopName)\n", "print(f'HumanLoop Name: {humanLoopName}')\n", "print(f'HumanLoop Status: {resp[\"HumanLoopStatus\"]}')\n", "print(f'HumanLoop Output Destination: {resp[\"HumanLoopOutput\"]}')\n", "print('\\n')\n", " \n", "if resp[\"HumanLoopStatus\"] == \"Completed\":\n", " completed_human_loops.append(resp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wait for workers to complete their tasks" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "workteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]\n", "print(\"Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!\")\n", "print('https://' + sagemaker_client.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check status of human loop again" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "completed_human_loops = []\n", "resp = a2i.describe_human_loop(HumanLoopName=humanLoopName)\n", "print(f'HumanLoop Name: {humanLoopName}')\n", "print(f'HumanLoop Status: {resp[\"HumanLoopStatus\"]}')\n", "print(f'HumanLoop Output Destination: {resp[\"HumanLoopOutput\"]}')\n", "print('\\n')\n", " \n", "if resp[\"HumanLoopStatus\"] == \"Completed\":\n", " completed_human_loops.append(resp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect the results of the human review tasks" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "import pprint\n", "\n", "pp = pprint.PrettyPrinter(indent=4)\n", "\n", "for resp in completed_human_loops:\n", " splitted_string = re.split('s3://' + bucket + '/', resp['HumanLoopOutput']['OutputS3Uri'])\n", " output_bucket_key = splitted_string[1]\n", "\n", " response = s3.get_object(Bucket=bucket, Key=output_bucket_key)\n", " content = response[\"Body\"].read()\n", " json_output = json.loads(content)\n", " pp.pprint(json_output)\n", " print('\\n')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Clean up\n", "\n", "If you are done with this notebook, please run the cells below. This will remove your monitoring schedule, and the hosted endpoint you created. Also please make sure to stop this notebook instance when you are done to avoid incurring charges" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List the monitoring schedule\n", "!aws sagemaker list-monitoring-schedules" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Copy the MonitoringScheduleName from above to provide in the delete command below\n", "!aws sagemaker delete-monitoring-schedule --monitoring-schedule-name 'xgb-breast-cancer-a2i-blog-monitor-schedule-2020-08-19-17-59-05'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Now delete the endpoint. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Clean up\n", "\n", "If you are done with this notebook, please run the cells below. This will remove your monitoring schedule and the hosted endpoint you created. Also, please make sure to stop this notebook instance when you are done to avoid incurring charges" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List the monitoring schedule\n", "!aws sagemaker list-monitoring-schedules" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Copy the MonitoringScheduleName from above to provide in the delete command below\n", "!aws sagemaker delete-monitoring-schedule --monitoring-schedule-name 'xgb-breast-cancer-a2i-blog-monitor-schedule-2020-08-19-17-59-05'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now delete the endpoint. If you get an error, try again in a couple of minutes\n", "sagemaker.Session().delete_endpoint(xgb_predictor.endpoint)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusion\n", "\n", "This notebook demonstrated how you can use Amazon SageMaker Model Monitor and Amazon A2I to set up a monitoring schedule for your Amazon SageMaker model endpoints, specify baselines that include constraint thresholds, observe inference traffic, derive insights such as data drift, completeness, and data type violations, and send the low confidence predictions to a human review workflow where labelers review and update the results. The human-labeled output can be used to augment the training dataset for retraining, keeping the distribution variance within threshold, helping prevent data drift, and improving model accuracy." ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" }, "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." }, "nbformat": 4, "nbformat_minor": 4 }