{ "cells": [ { "cell_type": "markdown", "id": "0551f521-eaea-4e5c-a9a9-60a93aaa018e", "metadata": {}, "source": [ "# Step 6: Add data and model monitoring\n", "After executing five previous notebooks, you have a production-ready solution with automated model building and model deployment CI/CD pipelines.\n", "\n", "In this notebook you are going to use [Amazon SageMaker model monitor](https://aws.amazon.com/sagemaker/model-monitor/) to add continuous and automated [monitoring of the data quality](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-data-quality.html) for the traffic to your real-time SageMaker inference endpoints. You also implement [model monitoring](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality.html) to detect performance drift and model metric anomalies.\n", "\n", "Using Model Monitor integration with [Amazon EventBridge](https://aws.amazon.com/eventbridge/) you can implement automated response and remediation to any detected issues with data and model quality. For example, you can launch an automated model retraining if the model performance falls below a specific threshold.\n", "\n", "Additionally to data and model quality monitoring you can implement [bias drift](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-model-monitor-bias-drift.html) and [feature attribution drift](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-model-monitor-feature-attribution-drift.html) monitoring.\n", "\n", "![](img/six-steps-6.png)" ] }, { "cell_type": "markdown", "id": "5a6f0591-a821-44e7-b6d7-c3d518b55b4c", "metadata": {}, "source": [ "
💡\n", "The minimal prerequisite for this notebook is to complete the setup (00-start-here.ipynb) and step 3 (03-sagemaker-pipeline.ipynb) notebooks.\n", "
\n", "\n", "
💡\n", "This notebook contains two parts:
\n", "- Part 1: Monitor data quality
\n", "- Part 2: Monitor model quality
\n", "
\n", "\n", "You need approximately between 60 and 90 minutes to go through this notebook. To optimize time you can execute both parts independently. For both parts you need to execute the following sections up to the Part 1.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "id": "6dcada1a-d8a5-4f5a-9d2b-e8da352aa3cf", "metadata": { "tags": [] }, "outputs": [], "source": [ "%pip install jsonlines tqdm" ] }, { "cell_type": "code", "execution_count": null, "id": "a2681315-b19c-4e97-a49e-21f04efb6bfd", "metadata": { "tags": [] }, "outputs": [], "source": [ "import boto3\n", "import botocore\n", "import sagemaker \n", "import json\n", "import jsonlines\n", "import random\n", "from tqdm import trange\n", "from sagemaker.predictor import Predictor\n", "from sagemaker import ModelPackage\n", "import time\n", "from time import gmtime, strftime\n", "from datetime import datetime, timedelta\n", "import uuid\n", "import pandas as pd\n", "import numpy as np\n", "from sagemaker.model_monitor import (\n", " DefaultModelMonitor,\n", " DataCaptureConfig,\n", " CronExpressionGenerator,\n", " ModelQualityMonitor,\n", " EndpointInput,\n", ")\n", "from sagemaker.model_monitor.dataset_format import DatasetFormat\n", "from sagemaker.model_monitor import DataCaptureConfig\n", "from utils.monitoring_utils import run_model_monitor_job\n", "from sagemaker.s3 import S3Downloader, S3Uploader\n", "from sagemaker.clarify import (\n", " BiasConfig,\n", " DataConfig,\n", " ModelConfig,\n", " ModelPredictedLabelConfig,\n", " SHAPConfig,\n", ")\n", "from urllib.parse import urlparse\n", "\n", "sagemaker.__version__" ] }, { "cell_type": "code", "execution_count": null, "id": "f8b8d993-1c33-40ad-96ee-7340edadcc31", "metadata": { "tags": [] }, "outputs": [], "source": [ "sm = boto3.client(\"sagemaker\")\n", "s3 = boto3.client(\"s3\")" ] }, { "cell_type": "code", "execution_count": null, "id": "9d0204aa-0ff1-4397-ac89-5b148da9bdd5", "metadata": { "tags": [] }, "outputs": [], "source": [ "session = sagemaker.Session()" ] }, { "cell_type": "code", "execution_count": null, "id": "bb9517dc-86bc-446d-938d-164b80828158", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.set_option(\"display.max_colwidth\", None)" ] }, { "cell_type": "code", "execution_count": null, "id": "9d2fb029-39b2-48ee-a3d6-38f1a42f25ab", "metadata": { "tags": [] }, "outputs": [], "source": [ "%store -r \n", "\n", "%store\n", "\n", "try:\n", " initialized\n", "except NameError:\n", " print(\"+++++++++++++++++++++++++++++++++++++++++++++++++\")\n", " print(\"[ERROR] YOU HAVE TO RUN 00-start-here notebook \")\n", " print(\"+++++++++++++++++++++++++++++++++++++++++++++++++\")" ] }, { "cell_type": "markdown", "id": "45ae4a24-1294-4bda-8f68-4e811025932d", "metadata": {}, "source": [ "## How Model Monitor works\n", "Amazon SageMaker Model Monitor automatically monitors ML models in production and notifies you when quality issues arise. Model Monitor uses rules to detect drift in your models and data and alerts you when it happens. The following figure shows how this process works.\n", "\n", "![](img/data-monitoring-architecture.png)\n", "\n", "The process for setting up and using the data monitoring:\n", "1. Enable the SageMaker endpoint to capture data from incoming requests to a trained ML model and the resulting model predictions\n", "2. Create a baseline from the dataset that was used to train the model. The baseline computes metrics and suggests constraints for the metrics. \n", "3. Create a monitoring schedule specifying what data to collect, how often to collect it, and how to analyze it. Data traffic to your model and predictions from the model are compared to the constraints, and are reported as violations if they are outside the constrained values. 
You can define multiple monitoring schedules per endpoint\n", "4. Inspect the reports, which compare the latest data with the baseline, and watch for any violations reported and for metrics and notifications from Amazon CloudWatch\n", "5. Implement observability for your ML models with Amazon CloudWatch and event-based architecture with Amazon EventBridge. You can automate data and model updates, model retraining, and user notification based on the data and model quality events" ] }, { "cell_type": "markdown", "id": "51d0e422-8fc2-40eb-b0e4-b2b36e8bb37e", "metadata": {}, "source": [ "## Real-time inference data capture from a SageMaker endpoint\n", "To work with Model Monitor in this notebook, you need a real-time inference endpoint and data capture configured on that endpoint. \n", "If you completed the [step 5](05-deploy.ipynb) notebook, there is at least one deployed endpoint with a name like `model-deploy-19-20-31-59-staging`. If you don't have an active endpoint, you need to create one." ] }, { "cell_type": "code", "execution_count": null, "id": "ea47c022-a154-46f3-8d5e-aaf268467977", "metadata": { "tags": [] }, "outputs": [], "source": [ "# List all deployed real-time endpoints. Depending on your existing environment you might have multiple endpoints\n", "endpoints = sm.list_endpoints(StatusEquals=\"InService\")[\"Endpoints\"]\n", "endpoint_name = \"\"\n", "\n", "if not len(endpoints):\n", " print(f\"There are no active endpoints deployed. You must have at least one endpoint. Run the step 3 pipeline to create a model\")\n", "else:\n", " endpoint_name = endpoints[0]['EndpointName']\n", " print(f\"There are {len(endpoints)} active inference endpoints. Checking the data capture configuration:\")\n", " \n", "for ep in endpoints:\n", " print(f\"Data capture configuration for {ep['EndpointName']}:\")\n", " print(f\"{json.dumps(sm.describe_endpoint(EndpointName=ep['EndpointName'])['DataCaptureConfig'], indent=2)}\")" ] }, { "cell_type": "markdown", "id": "6f313a21-fcf5-42ed-82e5-bc8306b46cd5", "metadata": {}, "source": [ "
💡\n", "If there is no active endpoints, you can run the step 3 notebook to create a model and register the model in the model registry.\n", "If you have an active endpoint, you can go to the Check the data capture configuration section.\n", "
" ] }, { "cell_type": "markdown", "id": "de9355bb-bb6f-4f48-be15-425ccddb17cb", "metadata": {}, "source": [ "### Deploy a model from the model registry as a real-time endpoint\n", "Run this section if you'd like to create an endpoint with a model from the model registry you created in the step 3 pipeline. If you have an active endpoint you'd like to use for model monitoring, go to the section **Check the data capture configuration**.\n", "\n", "The following code checks if there is a model package group created by the step 3 pipeline and if there are any registered model versions in the package." ] }, { "cell_type": "code", "execution_count": null, "id": "9bf19768-cd0e-4dcb-9bbb-93a37e5cf103", "metadata": { "tags": [] }, "outputs": [], "source": [ "try:\n", " model_package_group = sm.describe_model_package_group(ModelPackageGroupName=model_package_group_name)\n", " print(f\"There is model package group {model_package_group_name} in the model registry\")\n", "except botocore.exceptions.ClientError as e:\n", " if e.response['Error']['Code'] == 'ValidationException':\n", " print(\"******* ERROR *********\")\n", " print(f\"Model package group with the name {model_package_group_name} is not found. You need to run the step 3 pipeline to create a model\")" ] }, { "cell_type": "code", "execution_count": null, "id": "97e5b728-2423-4b7c-b3b1-717a5099130e", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_packages = []\n", "\n", "# Find the latest model package\n", "# Set the parameter ModelApprovalStatus='Approved' if you'd like to get only the approved packages\n", "# Sort by the CreationTime\n", "for p in sm.get_paginator('list_model_packages').paginate(\n", " ModelPackageGroupName=model_package_group_name,\n", " # ModelApprovalStatus='Approved',\n", " SortBy=\"CreationTime\",\n", " SortOrder=\"Descending\",\n", " ):\n", " model_packages.extend(p[\"ModelPackageSummaryList\"])\n", " \n", "if not len(model_packages):\n", " print(\"There is no model packages in the model package group {}. You need to run the step 3 pipeline to create a model\")\n", " \n", "latest_model_package_arn = model_packages[0]['ModelPackageArn']\n", "print(f\"The latest model package is version {model_packages[0]['ModelPackageVersion']}, {latest_model_package_arn}\")\n" ] }, { "cell_type": "markdown", "id": "aabe171e-9347-475a-b407-fd21e41cd6a5", "metadata": {}, "source": [ "You can only deploy a model with the model approval status `Approved`, so the next code cell updates the status." 
] }, { "cell_type": "code", "execution_count": null, "id": "62ef15db-8bef-40bd-9c5e-567d67c1dd8e", "metadata": { "tags": [] }, "outputs": [], "source": [ "sm.update_model_package(\n", " ModelPackageArn=latest_model_package_arn,\n", " ModelApprovalStatus=\"Approved\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "c23c7d8a-0ee1-4e6d-8052-543c8d6d73a2", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create a model from the registry using Python SDK\n", "model = ModelPackage(role=sm_role, \n", " model_package_arn=model_packages[0]['ModelPackageArn'], \n", " sagemaker_session=session)" ] }, { "cell_type": "code", "execution_count": null, "id": "4859df53-8d40-44f5-b07e-8e7e2bf1dda7", "metadata": { "tags": [] }, "outputs": [], "source": [ "endpoint_name = f\"from-idea-to-prod-endpoint-{strftime('%d-%H-%M-%S', gmtime())}\"\n", "\n", "data_capture_config = DataCaptureConfig(\n", " enable_capture=True,\n", " sampling_percentage=100,\n", " destination_s3_uri=f\"s3://{bucket_name}/{bucket_prefix}/data-capture\",\n", " csv_content_types=[\"text/csv\"],\n", " )" ] }, { "cell_type": "markdown", "id": "44c7f4d6", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "code", "execution_count": null, "id": "75c16a7d-f8f1-426c-bd44-5527f21f71df", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Deploy the model\n", "model.deploy(\n", " initial_instance_count=1,\n", " instance_type=\"ml.m5.large\",\n", " wait=False,\n", " data_capture_config=data_capture_config,\n", " endpoint_name=endpoint_name,\n", " serializer=sagemaker.serializers.CSVSerializer(),\n", " deserializer=sagemaker.deserializers.CSVDeserializer(),\n", ")" ] }, { "cell_type": "markdown", "id": "3bf1454b", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "code", "execution_count": null, "id": "6b7d312b-84c9-4cc0-9467-451759cb809b", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Wait until the endpoint has the status InService, it takes approximately 5 min\n", "waiter = session.sagemaker_client.get_waiter('endpoint_in_service')\n", "waiter.wait(EndpointName=endpoint_name)" ] }, { "cell_type": "markdown", "id": "8f977d10-057f-4d9f-85ba-17b5dd7ebff6", "metadata": {}, "source": [ "### Check the data capture configuration\n", "If you completed the step 5 [notebook](05-deploy.ipynb), the model deployment CI/CD pipeline contains an infrastructure as code (IaS) data capture configuration for the deployed endpoints. If you clone the project's code repository to the Studio file system, you can browse the project files. 
Let's take a look into the endpoint configuration.\n", "\n", "The CloudFormation deployment template `endpoint-config-template.yml` in the project directory enables data capture for the endpoint configuration:\n", "```yaml\n", "EndpointConfig:\n", " Type: AWS::SageMaker::EndpointConfig\n", " Properties:\n", " ProductionVariants:\n", " - InitialInstanceCount: !Ref EndpointInstanceCount\n", " InitialVariantWeight: 1.0\n", " InstanceType: !Ref EndpointInstanceType\n", " ModelName: !GetAtt Model.ModelName\n", " VariantName: AllTraffic\n", " DataCaptureConfig:\n", " EnableCapture: !Ref EnableDataCapture \n", " InitialSamplingPercentage: !Ref SamplingPercentage\n", " DestinationS3Uri: !Ref DataCaptureUploadPath\n", " CaptureOptions:\n", " - CaptureMode: Input\n", " - CaptureMode: Output\n", " CaptureContentTypeHeader:\n", " CsvContentTypes:\n", " - \"text/csv\"\n", "```\n", "\n", "The MLOps deploy project you created in the step 4 parametrizes all settings in the CloudFormation template.\n", "The configuration files `prod-config.json` and `staging-config.json` provide the actual values for `EnableCapture`, `InitialSamplingPercentage`, and `DestinationS3Uri`:\n", "```json\n", "{\n", " \"Parameters\": {\n", " \"StageName\": \"prod\",\n", " \"EndpointInstanceCount\": \"1\",\n", " \"EndpointInstanceType\": \"ml.m5.large\",\n", " \"SamplingPercentage\": \"80\",\n", " \"EnableDataCapture\": \"true\"\n", " }\n", "}\n", "```\n", "\n", "If you haven't executed step 4 notebook and deployed an endpoint with the model version from the model registry, let's check the endpoint configuration and see how data capture is confgured." ] }, { "cell_type": "markdown", "id": "97579f3b-2f8e-41e9-93b1-391695c6419f", "metadata": { "tags": [] }, "source": [ "
💡\n", "The endpoint_name variable must be set by now by the previous code cells. If it's not set, highly probably you don't have any active endpoint. Make sure you completed the section Deploy a model from the model registry as a real-time endpoint. \n", "
" ] }, { "cell_type": "code", "execution_count": null, "id": "a891b9fe-37a3-44fa-a1d6-2fe94262410c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get the data capture configuration for the endpoint\n", "# endpoint_name = \"model-deploy-16-21-26-26-staging\" # must be set before, but you can use any suitable endpoint\n", "\n", "if not endpoint_name:\n", " print(f\"You must have at least on endpoint with data capture configuration enabled!\")\n", "else:\n", " print(f\"Checking the data capture configuration for the endpoint {endpoint_name}\")\n", " data_capture_config = sm.describe_endpoint(EndpointName=endpoint_name)['DataCaptureConfig']\n", " data_capture_s3_url = data_capture_config['DestinationS3Uri']\n", " data_capture_bucket = data_capture_s3_url.split('/')[2]\n", " data_capture_prefix = '/'.join(data_capture_s3_url.split('/')[3:])\n", "\n", " print(json.dumps(data_capture_config, indent=2))\n", " print(f\"Data capture S3 url: {data_capture_s3_url}\")" ] }, { "cell_type": "markdown", "id": "a7cbc4f7-ccd9-4399-8ffe-a736e2cc6cc4", "metadata": {}, "source": [ "### Define helper functions\n", "Define some helper functions with code snippets that you're going to use throughout this notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "d880ebc0-f728-42e0-82fa-84a8daddda87", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Send data to the endpoint\n", "def generate_endpoint_traffic(predictor, data):\n", " l = len(data)\n", " for i in trange(l):\n", " predictions = np.array(predictor.predict(data.iloc[i].values), dtype=float).squeeze()" ] }, { "cell_type": "code", "execution_count": null, "id": "2d86ff79-55f5-42d6-b390-38b8e4220d6d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get all file keys under a specified prefix\n", "def get_file_list(bucket, prefix):\n", " try:\n", " files = [f.get(\"Key\") for f in s3.list_objects(Bucket=bucket, Prefix=prefix).get(\"Contents\")]\n", " print(f\"Found {len(files)} files in s3://{bucket}/{prefix}\")\n", " \n", " return files\n", " except TypeError:\n", " print(f\"No files found in s3://{bucket}/{prefix}\")\n", " return []" ] }, { "cell_type": "code", "execution_count": null, "id": "9297bbfc-8360-42a1-bc2a-2932372aee8e", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get S3 url for the latest captured data\n", "def get_latest_data_capture_s3_url(bucket, prefix):\n", " capture_files = get_file_list(bucket, prefix)\n", " \n", " if capture_files:\n", " latest_data_capture_s3_url = f\"s3://{bucket}/{'/'.join(capture_files[-1].split('/')[:-1])}\"\n", "\n", " print(f\"Latest data capture S3 url: {latest_data_capture_s3_url}\")\n", " \n", " return latest_data_capture_s3_url\n", " else:\n", " return None" ] }, { "cell_type": "code", "execution_count": null, "id": "b9921dd5-3a14-4354-9731-479b1088393e", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get S3 url for the latest monitoring job output\n", "def get_latest_monitoring_report_s3_url(job_name):\n", " monitor_job = sm.list_processing_jobs(\n", " NameContains=job_name,\n", " SortOrder='Descending',\n", " MaxResults=2\n", " )['ProcessingJobSummaries'][0]['ProcessingJobName']\n", "\n", " monitoring_job_output_s3_url = sm.describe_processing_job(\n", " ProcessingJobName=monitor_job\n", " )['ProcessingOutputConfig']['Outputs'][0]['S3Output']['S3Uri']\n", "\n", " print(f\"Latest monitoring report S3 url: {monitoring_job_output_s3_url}\")\n", " \n", " return monitoring_job_output_s3_url" ] }, { "cell_type": "code", "execution_count": null, 
"id": "6d571446-c518-4f34-8532-81fce2298091", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Helper to load a json file from S3\n", "def load_json_from_file(file_s3_url):\n", " bucket = file_s3_url.split('/')[2]\n", " key = '/'.join(file_s3_url.split('/')[3:])\n", " print(f\"Load JSON from: {bucket}/{key}\")\n", " \n", " return json.loads(\n", " s3.get_object(Bucket=bucket, \n", " Key=key)[\"Body\"].read().decode(\"utf-8\")\n", " )" ] }, { "cell_type": "code", "execution_count": null, "id": "2fceb53a-a099-4ad0-8b12-dadd72127a4a", "metadata": { "tags": [] }, "outputs": [], "source": [ "def get_latest_monitor_execution(monitor):\n", " mon_executions = monitor.list_executions()\n", "\n", " if len(mon_executions):\n", " latest_execution = mon_executions[-1] # get the latest execution\n", " latest_execution.wait(logs=False)\n", "\n", " print(f\"Latest execution status: {latest_execution.describe().get('ProcessingJobStatus')}\")\n", " print(f\"Latest execution result: {latest_execution.describe().get('ExitMessage')}\")\n", "\n", " latest_job = latest_execution.describe()\n", " if latest_job[\"ProcessingJobStatus\"] != \"Completed\":\n", " print(\"No completed executions to inspect further\")\n", " else:\n", " report_uri = latest_execution.output.destination\n", " print(f\"Report Uri: {report_uri}\")\n", " \n", " return latest_execution\n", " else:\n", " print(\"No monitoring schedule executions found\")\n", " return None" ] }, { "cell_type": "markdown", "id": "190e011d-ba01-43c8-8af9-959bb7835574", "metadata": {}, "source": [ "### Generate endpoint traffic and captured data\n", "You must send some data to an endpoint for inference to generate data capture.\n", "If you need to add or update the data capture configuration for the endpoint, you can use `DataCaptureConfig` and call [`update_data_capture_config()`](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html#sagemaker.predictor.Predictor.update_data_capture_config) method of the predictor." 
] }, { "cell_type": "code", "execution_count": null, "id": "58229914-7fa6-4d5c-841b-2bbc32c6add5", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create a predictor class for the endpoint\n", "predictor = Predictor(\n", " endpoint_name=endpoint_name, \n", " serializer=sagemaker.serializers.CSVSerializer(),\n", " deserializer=sagemaker.deserializers.CSVDeserializer()\n", ")\n", "\n", "# Update data capture config for settings we use in this notebook\n", "data_capture_config = DataCaptureConfig(\n", " enable_capture=True,\n", " sampling_percentage=100,\n", " destination_s3_uri=data_capture_s3_url,\n", " csv_content_types=[\"text/csv\"],\n", ")\n", "\n", "predictor.update_data_capture_config(data_capture_config)" ] }, { "cell_type": "markdown", "id": "a4229045-c339-42ba-8116-f21d6d8ea3e6", "metadata": {}, "source": [ "Use test dataset prepared in the [step 2](02-sagemaker-containers.ipynb) or produced by the pipeline in the [step 3](02-sagemaker-pipeline.ipynb) notebooks and saved on the EFS volume:" ] }, { "cell_type": "code", "execution_count": null, "id": "c7c56788-bf83-47c6-89c9-dbc78c147601", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 cp $test_s3_url/test_x.csv tmp/test_x.csv\n", "!aws s3 cp $test_s3_url/test_y.csv tmp/test_y.csv" ] }, { "cell_type": "code", "execution_count": null, "id": "00648691-bede-4b15-8127-3711cc9b5449", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Set the number of data vectors from the test dataset sent to the inference endpoint\n", "number_of_vectors = 100" ] }, { "cell_type": "code", "execution_count": null, "id": "652224c3-4121-41c1-9494-f4d8a287f2fc", "metadata": { "tags": [] }, "outputs": [], "source": [ "test_x = pd.read_csv(\"tmp/test_x.csv\", names=[f'_c{i}' for i in range(59)]).sample(number_of_vectors)" ] }, { "cell_type": "code", "execution_count": null, "id": "79b145f1-2e5d-49a2-a855-82375351eb65", "metadata": { "tags": [] }, "outputs": [], "source": [ "test_x.head(1)" ] }, { "cell_type": "markdown", "id": "ff7266a4-a744-4339-9da1-e1b87a4144c9", "metadata": {}, "source": [ "Send the data to the endpoint:" ] }, { "cell_type": "code", "execution_count": null, "id": "150cdd96-de42-4b63-9925-415097ecc680", "metadata": { "tags": [] }, "outputs": [], "source": [ "generate_endpoint_traffic(predictor, test_x)" ] }, { "cell_type": "markdown", "id": "4f1ca97e-2129-4371-9695-c08d3e46f11b", "metadata": {}, "source": [ "### View captured data\n", "Now list the data capture files stored in Amazon S3. The data is stored as `jsonl` an Amazon S3 path format is `s3://{data-capture-destination-s3-url}/{endpoint-name}/{variant-name}/yyyy/mm/dd/hh/filename.jsonl`.\n", "\n", "Wait until captured data appears in the Amazon S3 bucket, it may take several minutes." 
] }, { "cell_type": "code", "execution_count": null, "id": "a38c676f-9749-4e1f-811c-a2d8b2af467a", "metadata": { "tags": [] }, "outputs": [], "source": [ "# If you run this notebook not the first time, there might be some data capture files from the previous runs\n", "# We recommend to delete all existing files under the data capture S3 path to avoid any inconsistences\n", "# Uncomment and run the following line to delete all files under the data capture S3 path\n", "\n", "# !aws s3 rm {data_capture_s3_url} --recursive" ] }, { "cell_type": "code", "execution_count": null, "id": "b1d0fd2a-65e6-48d2-ab50-2e6cf8597f78", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {data_capture_s3_url} --recursive" ] }, { "cell_type": "code", "execution_count": null, "id": "88b6a4c5-09ae-464e-a24a-9c474e6ae10d", "metadata": { "tags": [] }, "outputs": [], "source": [ "capture_files = get_file_list(data_capture_bucket, data_capture_prefix)" ] }, { "cell_type": "code", "execution_count": null, "id": "60bb593e-cc43-454d-8f75-40fcb795a466", "metadata": { "tags": [] }, "outputs": [], "source": [ "assert len(capture_files) > 0, \"Wait until the capture data delivered to the Amazon S3 bucket\"" ] }, { "cell_type": "code", "execution_count": null, "id": "37b77066-75e3-497a-b39b-e4a9f2c092ff", "metadata": { "tags": [] }, "outputs": [], "source": [ "capture_files[0]" ] }, { "cell_type": "markdown", "id": "e60ba1b4-8e39-4b1b-8f17-3253e6cb0a9b", "metadata": {}, "source": [ "Each inference request is captured in one line in the `jsonl` file. The line contains both the input and output merged together. In the example, you provided the ContentType as `text/csv` which is reflected in the `observedContentType` value. Also, you expose the encoding that you used to encode the input and output payloads in the capture format with the encoding value." ] }, { "cell_type": "code", "execution_count": null, "id": "77aed2f6-6430-4356-a6de-aff5f29fbc1b", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Download a capture data file and print it's content\n", "file_key = capture_files[-1]\n", "S3Downloader.download(f\"s3://{data_capture_bucket}/{file_key}\", f\"./tmp\")\n", "\n", "print(f\"Content of the capture file:\")\n", "# Read the jsonl file and show the first object\n", "with jsonlines.open(f\"./tmp/{file_key.split('/')[-1]}\") as reader: \n", " print(json.dumps(reader.read(), indent=2))\n", " # print(json.dumps(reader.read(), indent=2))" ] }, { "cell_type": "markdown", "id": "0ada3476-3067-47e8-bcc1-f515972048c2", "metadata": {}, "source": [ "## Part 1: Monitor data quality\n", "In this part you learn how to setup data quality monitoring for SageMaker real-time endpoints.\n", "\n", "To enable inference data quality monitoring and evaluation you must:\n", "1. Enable [data capture](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-data-capture.html)\n", "1. [Create a baseline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-create-baseline.html) with which you compare the realtime traffic\n", "1. Once a baseline is ready, [schedule monitoring jobs](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-scheduling.html) to continously evaluate and compare against the baseline\n", "1. [See and interpret the results](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-results.html) of monitoring jobs\n", "1. 
[Integrate data quality monitoring](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-cloudwatch.html) with Amazon CloudWatch" ] }, { "cell_type": "markdown", "id": "84268bba-f44e-4cb4-a53f-67251253c1a4", "metadata": {}, "source": [ "### Create a baselining job with the training dataset\n", "The whole dataset with which you trained and tested the model is usually a good baseline dataset. Note that the schema of the baseline dataset and the schema of the inference data must match exactly (i.e., the number and order of the features).\n", "\n", "From the baseline dataset you can ask Amazon SageMaker to suggest a set of baseline _constraints_ and generate descriptive _statistics_ to explore the data. Model Monitor provides a [built-in container](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-pre-built-container.html) that can suggest the constraints automatically for CSV and flat JSON input. This `sagemaker-model-monitor-analyzer` container also provides you with a range of model monitoring capabilities, including constraint validation against a baseline, and emitting Amazon CloudWatch metrics. This container is based on Spark and is built with [Deequ](https://github.com/awslabs/deequ). \n", "\n", "
💡 All column names in your baseline dataset must be compliant with Spark. For column names, use only lowercase characters, and _ as the only special character. \n", "
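The baseline dataset in this notebook is saved without a header, so this note does not change anything here. If you build your own baseline CSV with a header, a generic way to sanitize pandas column names before saving is sketched below; this is an illustration, not part of the original workflow, and the helper name is made up.

```python
import re

def make_spark_compliant(columns):
    # Lowercase every name and replace anything other than a-z, 0-9, or _ with _
    return [re.sub(r"[^a-z0-9_]", "_", str(c).lower()) for c in columns]

# Hypothetical usage on a DataFrame `df` that has a header:
# df.columns = make_spark_compliant(df.columns)
# df.to_csv("baseline_with_header.csv", index=False)
```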
\n", "\n", "Use the baseline dataset you created in the [step 2](02-sagemaker-containers.ipynb) notebook data processing. The baseline dataset is the full dataset without header, index, and label column." ] }, { "cell_type": "code", "execution_count": null, "id": "fbaac6de-b3a9-4f4d-b6e2-2f2a66b86a11", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {baseline_s3_url}/" ] }, { "cell_type": "code", "execution_count": null, "id": "632362d5-c4e0-4bf5-bca4-b4da6ad34377", "metadata": { "tags": [] }, "outputs": [], "source": [ "baseline_results_s3_url = f\"{baseline_s3_url}/results\"\n", "data_mon_reports_s3_url = f\"{baseline_s3_url}/reports\"\n", "baseline_dataset_uri = f\"{baseline_s3_url}/baseline.csv\"" ] }, { "cell_type": "markdown", "id": "c436412f-0daf-46bc-ab5f-682ad8bdeecc", "metadata": {}, "source": [ "Use the Python SDK class [`DefaultModelMonitor`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.DefaultModelMonitor) to create a data monitor and interact with it:" ] }, { "cell_type": "code", "execution_count": null, "id": "70b94a46-00ca-4e2b-bc4c-7e70d59b35ec", "metadata": { "tags": [] }, "outputs": [], "source": [ "data_monitor = DefaultModelMonitor(\n", " role=sm_role,\n", " instance_count=1,\n", " instance_type=\"ml.m5.xlarge\",\n", " volume_size_in_gb=20,\n", " max_runtime_in_seconds=3600,\n", " sagemaker_session=session,\n", ")" ] }, { "cell_type": "markdown", "id": "4d49b158-9859-4053-8ec9-8f99577ed07e", "metadata": {}, "source": [ "Start a SageMaker processing job on the baseline data to profile data and suggest constraints." ] }, { "cell_type": "code", "execution_count": null, "id": "5443d53d-00e9-4320-9c72-1751fe796778", "metadata": { "tags": [] }, "outputs": [], "source": [ "data_baseline_job_name = f\"from-idea-to-prod-data-baselining-{strftime('%d-%H-%M-%S', gmtime())}-{str(uuid.uuid4())[:8]}\"\n", "\n", "data_baseline_job = data_monitor.suggest_baseline(\n", " baseline_dataset=baseline_dataset_uri,\n", " dataset_format=DatasetFormat.csv(header=False),\n", " output_s3_uri=baseline_results_s3_url,\n", " wait=False,\n", " logs=False,\n", " job_name=data_baseline_job_name,\n", ")\n", "\n", "print(data_baseline_job_name)" ] }, { "cell_type": "markdown", "id": "f4f878c9-c820-47e2-a61c-51b3155ea34f", "metadata": {}, "source": [ "The baselining job takes about 7 minutes to complete:\n", "\n", "\"Time" ] }, { "cell_type": "code", "execution_count": null, "id": "09659b6e-f061-4e4a-aaba-da4fb9771d5a", "metadata": { "tags": [] }, "outputs": [], "source": [ "data_baseline_job.wait(logs=False)" ] }, { "cell_type": "markdown", "id": "1958022d", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "markdown", "id": "d29e9063-5fb6-4bda-86c1-d33b2b0ca10b", "metadata": { "tags": [] }, "source": [ "### See the generated statistics and constraints\n", "After the baselining jobs finished, it saves the baseline statistics to the `statistics.json` file and the suggested baseline constraints to the `constraints.json` file in the location you specify with `output_s3_uri`." 
] }, { "cell_type": "code", "execution_count": null, "id": "7c89c5ec-eb0b-491b-8f0a-9b6b8a037e59", "metadata": { "tags": [] }, "outputs": [], "source": [ "data_monitor.describe_latest_baselining_job()" ] }, { "cell_type": "code", "execution_count": null, "id": "ff77ab21-f595-417e-a880-a3cb73be87c0", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {baseline_results_s3_url}/" ] }, { "cell_type": "code", "execution_count": null, "id": "a377fca9-436d-45cd-818c-4a8c88f84c4b", "metadata": { "tags": [] }, "outputs": [], "source": [ "data_statistics_s3_url = f\"{baseline_results_s3_url}/statistics.json\"\n", "data_constraints_s3_url = f\"{baseline_results_s3_url}/constraints.json\"" ] }, { "cell_type": "markdown", "id": "305d2e3b-c707-4b34-bd68-36fc3da9fee0", "metadata": {}, "source": [ "Copy statistics and constraints JSON files to the Studio EFS:" ] }, { "cell_type": "code", "execution_count": null, "id": "24301fbd-6e71-4c35-b478-c8b076d4e285", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 cp {data_constraints_s3_url} ./tmp/\n", "!aws s3 cp {data_statistics_s3_url} ./tmp/" ] }, { "cell_type": "code", "execution_count": null, "id": "c45e8f3d-65bb-4bd0-9176-5f0bc8e44b2e", "metadata": { "tags": [] }, "outputs": [], "source": [ "!head -20 tmp/constraints.json" ] }, { "cell_type": "code", "execution_count": null, "id": "5bd77f24-182e-4bea-819a-bcc60795e22b", "metadata": { "tags": [] }, "outputs": [], "source": [ "!head -20 tmp/statistics.json" ] }, { "cell_type": "markdown", "id": "366806e0-fae7-41a1-8f67-a46ac297d9c1", "metadata": {}, "source": [ "Load the generated JSON as Pandas DataFrame and see the content of `statistics.json` and `constaints.json`:" ] }, { "cell_type": "code", "execution_count": null, "id": "dc1f849c-8ca0-4c7f-a3a4-eac7b7553b37", "metadata": { "tags": [] }, "outputs": [], "source": [ "baseline_job = data_monitor.latest_baselining_job\n", "statistics_df = pd.json_normalize(baseline_job.baseline_statistics().body_dict[\"features\"])\n", "statistics_df.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "2fdb5f66-da5c-477f-b66a-3997713404bd", "metadata": { "tags": [] }, "outputs": [], "source": [ "constraints_df = pd.json_normalize(\n", " baseline_job.suggested_constraints().body_dict[\"features\"]\n", ")\n", "constraints_df.head()" ] }, { "cell_type": "markdown", "id": "ecf40e47-2c22-4ca3-8fe1-02f428e5e0b8", "metadata": {}, "source": [ "For this dataset the baselining job suggest three constraints:\n", "1. DataType\n", "2. Completeness\n", "3. Is non-negative\n", "\n", "Additionally, the Model Monitor prebuilt container does missing and extra column check, baseline drift check, and categorical values check. Refer to [Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-violations.html) for more details.\n", "\n", "In a real-world project you can add your own constraints the data must comply with.\n", "\n", "Next you schedule and run a monitoring job to validate incoming data against these constraints and statistics." ] }, { "cell_type": "markdown", "id": "cf89bdfa-87a8-4a23-a299-0a303bbd9bcd", "metadata": {}, "source": [ "### Create a data monitoring schedule\n", "With a monitoring schedule, SageMaker launches processing jobs at a specified frequency to analyze the data collected during a given period. SageMaker provides a [built-in container](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-pre-built-container.html) for performing analysis on tabular datasets. 
In the processing job, SageMaker compares the dataset for the current analysis with the baseline statistics and constraints and generates a violations report. In addition, CloudWatch metrics are emitted for each data feature under analysis.\n", "\n", "#### Implement custom record processing with a preprocessing script\n", "You can extend Model Monitor by providing a custom record preprocessing function. In this function you can implement your own filtering or preprocessing of every data record. For example, you can exclude some records from analysis based on their values or event metadata. Refer to the [Preprocessing and Postprocessing](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-pre-and-post-processing.html) documentation for more details and examples.\n", "\n", "When you created the monitoring baseline, you used the baseline dataset with all features but without the label. By default, Model Monitor concatenates the model input and output, resulting in a record that contains all features plus the model output. If you don't preprocess records before passing them to Model Monitor, the number of columns in the baseline dataset won't match the number of columns in the data capture record, and Model Monitor will report an `extra_column_check` violation. To avoid this situation, you either need to include the label column in the baselining or remove the model output from the monitored records. This notebook uses the latter approach and provides a preprocessing script that returns only the input data without the label.\n", "\n", "For another example of custom preprocessing, see the blog post [Design a compelling record filtering method with Amazon SageMaker Model Monitor](https://aws.amazon.com/blogs/machine-learning/design-a-compelling-record-filtering-method-with-amazon-sagemaker-model-monitor/)."
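The actual `record_preprocessor.py` is provided with the workshop repository, and you can print it with the `pygmentize` cell below. As a rough sketch of what such a script looks like, the Model Monitor preprocessing contract is a `preprocess_handler(inference_record)` function; the version here, which keeps only the captured input features and drops the model output, is an illustration and may differ from the provided script.

```python
def preprocess_handler(inference_record):
    # inference_record wraps one captured request/response pair.
    # Keep only the endpoint input (the features) and drop the endpoint output,
    # so the monitored record matches the baseline dataset, which has no label column.
    input_data = inference_record.endpoint_input.data.rstrip("\n")

    # Return the CSV features as a dict; zero-padded keys preserve the column order.
    return {str(i).zfill(20): float(d) for i, d in enumerate(input_data.split(","))}
```

Returning a dict rather than a list lets you control the column names, and the zero-padded keys keep the features in their original order when the keys are sorted.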
] }, { "cell_type": "code", "execution_count": null, "id": "6d83eb20-f98c-4579-9de7-a3147cb7c15d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# !pygmentize ./record_preprocessor.py" ] }, { "cell_type": "code", "execution_count": null, "id": "73a8a258-18ee-495f-aa97-e1f61890619c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Upload the preprocessing script to S3\n", "record_preprocessor_s3_url = f\"s3://{bucket_name}/{bucket_prefix}/code\"" ] }, { "cell_type": "code", "execution_count": null, "id": "b8cca5a8-7af8-44f0-b4a4-33daf44e9468", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 cp ./record_preprocessor.py {record_preprocessor_s3_url}/" ] }, { "cell_type": "code", "execution_count": null, "id": "d179ca01-f71e-4544-9ebb-daa61b74d866", "metadata": { "tags": [] }, "outputs": [], "source": [ "data_mon_schedule_name = \"from-idea-to-prod-data-monitor-schedule-\" + strftime(\n", " \"%Y-%m-%d-%H-%M-%S\", gmtime()\n", ")\n", "\n", "data_monitor.create_monitoring_schedule(\n", " monitor_schedule_name=data_mon_schedule_name,\n", " endpoint_input=predictor.endpoint_name,\n", " record_preprocessor_script=f\"{record_preprocessor_s3_url}/record_preprocessor.py\",\n", " # post_analytics_processor_script=s3_code_postprocessor_uri,\n", " output_s3_uri=data_mon_reports_s3_url,\n", " statistics=data_monitor.baseline_statistics(),\n", " constraints=data_monitor.suggested_constraints(),\n", " schedule_cron_expression=CronExpressionGenerator.hourly(),\n", " enable_cloudwatch_metrics=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "99355b13-4f45-4b53-87e3-57774d9b9e61", "metadata": { "tags": [] }, "outputs": [], "source": [ "while data_monitor.describe_schedule()[\"MonitoringScheduleStatus\"] != \"Scheduled\":\n", " print(f\"Waiting until data monitoring schedule status becomes Scheduled\")\n", " time.sleep(3)\n", "\n", "data_monitor.describe_schedule()" ] }, { "cell_type": "markdown", "id": "0597276a-420c-457d-a545-dcf7378c4234", "metadata": {}, "source": [ "### Generate compliant traffic\n", "Generate traffic that won't trigger any violations. Use the `test_x.csv` dataset to send requests to the endpoint." ] }, { "cell_type": "code", "execution_count": null, "id": "ed4a44fa-2ebf-4cf4-b34f-f72aeba5837a", "metadata": { "tags": [] }, "outputs": [], "source": [ "generate_endpoint_traffic(predictor, test_x)" ] }, { "cell_type": "markdown", "id": "62e2001d-62ed-465f-b273-4c7ce16fa39f", "metadata": {}, "source": [ "### See the captured data\n", "List captured data files under `data_capture_s3_url`. Wait couple of minutes before the captured data appears in the Amazon S3 bucket." 
] }, { "cell_type": "code", "execution_count": null, "id": "c42cc853-a4d1-43b4-947a-06b0a7d9880f", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {data_capture_s3_url} --recursive" ] }, { "cell_type": "code", "execution_count": null, "id": "4200659a-542c-4de6-8b23-ba98f2c81dfe", "metadata": { "tags": [] }, "outputs": [], "source": [ "# If you run this notebook not the first time, there might be some data capture files from the previous runs\n", "# We recommend to delete all existing files under the data capture S3 path to avoid any inconsistences\n", "# Uncomment and run the following line to delete all files under the data capture S3 path\n", "\n", "# !aws s3 rm {data_capture_s3_url} --recursive" ] }, { "cell_type": "markdown", "id": "13b7477a-d418-48fb-83b1-f72830644d92", "metadata": {}, "source": [ "### Launch a manual monitoring job\n", "You can launch a monitoring job manually and don't wait until a configured data monitor schedule execution. You created an hourly schedule, so you need to wait until you cross the hour boundary to see some schedule executions.\n", "\n", "Since the Model Monitor uses a [built-in container](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-pre-built-container.html) and a SageMaker [processing job](https://docs.aws.amazon.com/sagemaker/latest/dg/processing-job.html) to run analysis of the captured data, you can manually configure and run a monitoring job. \n", "\n", "This [repository](https://github.com/aws-samples/reinvent2019-aim362-sagemaker-debugger-model-monitor/tree/master/02_deploy_and_monitor) contains an implementation of a helper function to manually run a monitoring job." ] }, { "cell_type": "code", "execution_count": null, "id": "0cd005eb-b66b-4650-a702-311549cb5b65", "metadata": { "tags": [] }, "outputs": [], "source": [ "# !pygmentize ./utils/monitoring_utils.py" ] }, { "cell_type": "markdown", "id": "b8f50e08-ab18-48e2-9241-ce73d4643c68", "metadata": {}, "source": [ "Get an S3 url for the latest captured data files:" ] }, { "cell_type": "code", "execution_count": null, "id": "4fdb2413-0dd4-499d-8076-23435b1e4a65", "metadata": { "tags": [] }, "outputs": [], "source": [ "latest_data_capture_s3_url = get_latest_data_capture_s3_url(data_capture_bucket, data_capture_prefix)" ] }, { "cell_type": "code", "execution_count": null, "id": "5a1e975f-83ea-4cf1-8883-71b2875a7432", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(f\"Data capture path: {latest_data_capture_s3_url}\")\n", "print(f\"Data baseline statistics file: {data_statistics_s3_url}\")\n", "print(f\"Data baseline constraints file: {data_constraints_s3_url}\")\n", "print(f\"Data monitor report output path: {data_mon_reports_s3_url}\")\n", "print(f\"Record preprocessor script path: {record_preprocessor_s3_url}\")" ] }, { "cell_type": "markdown", "id": "f45d63e6-ec35-4568-a20f-6f64de9490cb", "metadata": {}, "source": [ "Run a monitoring job, it takes about 7 minutes:\n", "\n", "\"Time" ] }, { "cell_type": "code", "execution_count": null, "id": "ceb8dbcd-61c9-4b7e-8d15-7c3efbd40601", "metadata": { "tags": [] }, "outputs": [], "source": [ "from utils.monitoring_utils import run_model_monitor_job\n", "\n", "run_model_monitor_job(\n", " region=region,\n", " instance_type=\"ml.m5.xlarge\",\n", " role=sm_role,\n", " data_capture_path=latest_data_capture_s3_url,\n", " statistics_path=data_statistics_s3_url,\n", " constraints_path=data_constraints_s3_url,\n", " reports_path=data_mon_reports_s3_url,\n", " instance_count=1,\n", " 
preprocessor_path=f\"{record_preprocessor_s3_url}/record_preprocessor.py\",\n", " postprocessor_path=None,\n", " publish_cloudwatch_metrics=\"Disabled\",\n", " logs=False,\n", ")" ] }, { "cell_type": "markdown", "id": "6dbcac89", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "markdown", "id": "715c8ff4-e24d-49a9-a212-be635fbff216", "metadata": {}, "source": [ "### See the monitoring job output\n", "Let's check what reports the monitoring job generated. " ] }, { "cell_type": "code", "execution_count": null, "id": "30aaec02-b385-4c96-9db5-4454d9916db1", "metadata": { "tags": [] }, "outputs": [], "source": [ "manual_monitoring_job_output_s3_url = get_latest_monitoring_report_s3_url(\"sagemaker-model-monitor-analyzer\")" ] }, { "cell_type": "code", "execution_count": null, "id": "dedfef20-9df8-4969-ab18-8606a325c28c", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {manual_monitoring_job_output_s3_url}/" ] }, { "cell_type": "markdown", "id": "3770d68a-42a8-4f60-be4d-b424f52d16f0", "metadata": {}, "source": [ "Load the monitoring report and see if there are any violations:" ] }, { "cell_type": "code", "execution_count": null, "id": "223f61e2-a6bc-4c78-8724-a72367d12f15", "metadata": { "tags": [] }, "outputs": [], "source": [ "violations = load_json_from_file(f\"{manual_monitoring_job_output_s3_url}/constraint_violations.json\")" ] }, { "cell_type": "markdown", "id": "d4ad56db-54bd-4051-b7db-49ddb7af9c66", "metadata": {}, "source": [ "As you sent only compliant data to the endpoint, there must be no violations for the captured data." ] }, { "cell_type": "code", "execution_count": null, "id": "a1c2f5b8-d1da-4eb8-9c7d-94d782c6ae11", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(violations[\"violations\"])" ] }, { "cell_type": "markdown", "id": "3c825cba-ca1e-4b68-8bdd-c262138a2674", "metadata": {}, "source": [ "You can also copy the constraint violations report to the Studio EFS and print the content of the file:" ] }, { "cell_type": "code", "execution_count": null, "id": "81603aa4-5742-4c04-b1a3-c48060885520", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 cp {manual_monitoring_job_output_s3_url}/constraint_violations.json ./tmp/" ] }, { "cell_type": "code", "execution_count": null, "id": "f505dfb5-d64a-4066-bb02-dbba1b79e839", "metadata": { "tags": [] }, "outputs": [], "source": [ "!head ./tmp/constraint_violations.json" ] }, { "cell_type": "markdown", "id": "a1a445bd-39a9-489d-9911-e9cfaefb1aa6", "metadata": {}, "source": [ "Now load the newly calculated statistics and constratins based on the captured dataset." 
] }, { "cell_type": "code", "execution_count": null, "id": "266d95b7-8c3b-4bdf-a820-77d884c43c50", "metadata": { "tags": [] }, "outputs": [], "source": [ "statistics = load_json_from_file(f\"{manual_monitoring_job_output_s3_url}/statistics.json\")\n", "constraints = load_json_from_file(f\"{manual_monitoring_job_output_s3_url}/constraints.json\")\n", "\n", "print(f\"Records processed: {statistics['dataset']['item_count']}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "f572e5ac-3519-48b9-b4c9-e611c8e71d5d", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(statistics[\"features\"]).head()" ] }, { "cell_type": "code", "execution_count": null, "id": "19f99d6e-b261-4f89-be20-dacde27ae18d", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(constraints[\"features\"]).head()" ] }, { "cell_type": "markdown", "id": "7ea3529c-f664-490f-b240-d11574e62183", "metadata": {}, "source": [ "### What is monitored\n", "Refer to [Schema for Violations](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-violations.html) in the Developer Guide to see what constraints are monitored by the model monitor. You can configure a tolerance threshold that fits your specific data quality requirements. To configure the thresholds, you must change the `monitoring_config` section of the baseline `constraints.json` file:" ] }, { "cell_type": "code", "execution_count": null, "id": "831436ca-15dc-4f6a-abf0-e8d1e2a92ce4", "metadata": { "tags": [] }, "outputs": [], "source": [ "with open(\"tmp/constraints.json\", \"r\") as c:\n", " data = c.read()\n", " \n", "print(json.dumps(json.loads(data)[\"monitoring_config\"], indent=2))" ] }, { "cell_type": "markdown", "id": "579f1f2d-7f8c-4c7e-a5f0-94acf855b1ea", "metadata": {}, "source": [ "To modify monitoring configuration, change this section and upload the file to Amazon S3.\n", "You can use `Robust` or `Simple` method to detect a data distribution drift, refer to [Schema for Constraints](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-constraints.html) in the Developer Guide. `Robust` method is recommended for small datasets and based on the [Two-sample Kolmogorov-Smirnov test](https://en.m.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test)." ] }, { "cell_type": "markdown", "id": "d567b72b-805b-43e7-b926-db97015d00f4", "metadata": { "tags": [] }, "source": [ "### Generate non-compliant traffic\n", "Now generate traffic that will trigger the violation in the model monitor data quality check." 
] }, { "cell_type": "code", "execution_count": null, "id": "49fa53a9-fcf8-42a3-a4a5-b0460d1e0919", "metadata": { "tags": [] }, "outputs": [], "source": [ "non_compliant_pd = test_x.copy()\n", "non_compliant_pd.iloc[:,0] = -99.99" ] }, { "cell_type": "code", "execution_count": null, "id": "9907bf99-f24e-4d86-ad2d-a87433ca43c7", "metadata": { "tags": [] }, "outputs": [], "source": [ "non_compliant_pd.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "8ef3d752-cf16-4614-94fd-90c09190b9c6", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Remove previous saved data capture from the S3 bucket\n", "latest_data_capture_s3_url = get_latest_data_capture_s3_url(data_capture_bucket, data_capture_prefix)" ] }, { "cell_type": "code", "execution_count": null, "id": "f47f60b6-b24d-4f32-88d3-b69d75d8ab67", "metadata": { "tags": [] }, "outputs": [], "source": [ "# If you run this notebook not the first time, there might be some data capture files from the previous runs\n", "# We recommend to delete all existing files under the data capture S3 path to avoid any inconsistences\n", "# Uncomment and run the following line to delete all files under the data capture S3 path\n", "\n", "# !aws s3 rm {latest_data_capture_s3_url} --recursive" ] }, { "cell_type": "code", "execution_count": null, "id": "6df19f6a-f6e2-4c75-94e7-fb7211faa557", "metadata": { "tags": [] }, "outputs": [], "source": [ "generate_endpoint_traffic(predictor, non_compliant_pd)" ] }, { "cell_type": "markdown", "id": "c348cd59-9925-4d29-97ae-e90baef5e3f6", "metadata": {}, "source": [ "### See the captured data\n", "List captured data files under `data_capture_s3_url`. Wait couple of minutes before the captured data appears in the Amazon S3 bucket." ] }, { "cell_type": "code", "execution_count": null, "id": "4b729c49-8118-4229-a260-6fcdf455b29d", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {data_capture_s3_url} --recursive" ] }, { "cell_type": "markdown", "id": "8b59387d-d96f-47c6-b058-3a4d7168c4ab", "metadata": {}, "source": [ "### Launch a manual monitoring job\n", "Let's run a manual monitoring job again to analyze the capture data:" ] }, { "cell_type": "code", "execution_count": null, "id": "4615cb98-2c7d-4656-b714-18f5f30a5dcc", "metadata": { "tags": [] }, "outputs": [], "source": [ "latest_data_capture_s3_url = get_latest_data_capture_s3_url(data_capture_bucket, data_capture_prefix)" ] }, { "cell_type": "markdown", "id": "4a1009c7", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "code", "execution_count": null, "id": "97db3837-3b58-4c7d-85b5-495bce6f7d5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "run_model_monitor_job(\n", " region=region,\n", " instance_type=\"ml.m5.xlarge\",\n", " role=sm_role,\n", " data_capture_path=latest_data_capture_s3_url,\n", " statistics_path=data_statistics_s3_url,\n", " constraints_path=data_constraints_s3_url,\n", " reports_path=data_mon_reports_s3_url,\n", " instance_count=1,\n", " preprocessor_path=f\"{record_preprocessor_s3_url}/record_preprocessor.py\",\n", " postprocessor_path=None,\n", " publish_cloudwatch_metrics=\"Disabled\",\n", " logs=False,\n", ")" ] }, { "cell_type": "markdown", "id": "19650f49", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "markdown", "id": "4e1b92c6-5a1f-4908-bb95-df4059111ef4", "metadata": {}, "source": [ "### See the monitoring job output\n", "Let's check what reports the monitoring job generated. Since you send non-compliant data to the endpoint, you must see a violation report." 
] }, { "cell_type": "code", "execution_count": null, "id": "f01e44ce-e861-4113-90e7-c0281511447e", "metadata": { "tags": [] }, "outputs": [], "source": [ "manual_monitoring_job_output_s3_url = get_latest_monitoring_report_s3_url(\"sagemaker-model-monitor-analyzer\")" ] }, { "cell_type": "code", "execution_count": null, "id": "f8fe74d1-e7d0-4978-ae6f-7907058ac68a", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {manual_monitoring_job_output_s3_url}/" ] }, { "cell_type": "markdown", "id": "2bccea03-786f-4dfc-902e-60568d09131a", "metadata": {}, "source": [ "Load the monitoring report and see the violations:" ] }, { "cell_type": "code", "execution_count": null, "id": "1e1d51d6-8ba8-4ecc-a437-70a76e2ec172", "metadata": { "tags": [] }, "outputs": [], "source": [ "violations = load_json_from_file(f\"{manual_monitoring_job_output_s3_url}/constraint_violations.json\")\n", "violations" ] }, { "cell_type": "code", "execution_count": null, "id": "df85c386-46a2-446f-bbb3-e674799a2fa7", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(violations[\"violations\"])" ] }, { "cell_type": "code", "execution_count": null, "id": "15ff077d-7ad3-4648-890f-7c0cd60a51a4", "metadata": { "tags": [] }, "outputs": [], "source": [ "statistics = load_json_from_file(f\"{manual_monitoring_job_output_s3_url}/statistics.json\")\n", "constraints = load_json_from_file(f\"{manual_monitoring_job_output_s3_url}/constraints.json\")\n", "\n", "print(f\"Records processed: {statistics['dataset']['item_count']}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "74aac0c2-374f-4f88-b114-19419025ac90", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(statistics[\"features\"]).head()" ] }, { "cell_type": "code", "execution_count": null, "id": "1f56dec7-dd76-419a-ac0f-db7c18234044", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(constraints[\"features\"]).head()" ] }, { "cell_type": "markdown", "id": "68e46a76-d556-4d75-8b38-454fba95d419", "metadata": {}, "source": [ "### List schedule executions and monitoring reports\n", "
\n", "In this section you explore the scheduled executions of the monitoring job and their results. To have any executions and reports you have to wait until the top of the hour for an execution to complete. If you don't have any executions of the monitor scheduled, check if you send any traffic to the endpoint and if your monitor schedule is properly configured.\n", "
\n", "\n", "You created a hourly schedule above that begins executions on the hour plus 0-20 min buffer. You will have to wait till the clock hit the hour. You can also change the schedule.\n", "\n", "
\n", "While waiting for the scheduled execution of the monitoring job, you can continue with the Part 2: Monitor model quality and come back to this section to check the monitor job executions.\n", "
\n", "\n", "This section demonstrates how to work with scheduled monitoring job execution. The Python SDK class [`DefaultModelMonitor`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.DefaultModelMonitor) implements helper methods to load and see the executions and monitoring reports." ] }, { "cell_type": "markdown", "id": "f301094d-8039-42a0-b17d-5539d26a7c87", "metadata": {}, "source": [ "List executions and view a monitoring job details:" ] }, { "cell_type": "code", "execution_count": null, "id": "0534df39-4cbc-4562-bef8-bbd2a776502e", "metadata": { "tags": [] }, "outputs": [], "source": [ "latest_execution = get_latest_monitor_execution(data_monitor)" ] }, { "cell_type": "markdown", "id": "c119d4c3-fca4-4146-9079-483f15df8b18", "metadata": {}, "source": [ "See details about the latest scheduled monitoring execution:" ] }, { "cell_type": "code", "execution_count": null, "id": "bef16c05-4f41-487d-9e4b-fb40780338e8", "metadata": { "tags": [] }, "outputs": [], "source": [ "latest_execution.describe()" ] }, { "cell_type": "markdown", "id": "898dc7bb-364f-40c0-9181-9130fe2ec9d7", "metadata": {}, "source": [ "Get the latest execution statistics and constraint violations as objects:" ] }, { "cell_type": "code", "execution_count": null, "id": "4a9a5590-5fc9-4fd5-a094-33c5fc2c996d", "metadata": { "tags": [] }, "outputs": [], "source": [ "last_execution_statistics = latest_execution.statistics()\n", "last_execution_violations = latest_execution.constraint_violations()" ] }, { "cell_type": "markdown", "id": "8bdeae78-a391-4f36-a781-95024b7bc9c1", "metadata": {}, "source": [ "Load reports into Pandas DataFrame:" ] }, { "cell_type": "code", "execution_count": null, "id": "c8f37f50-6aed-4489-b50d-a8476ace0ee6", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(last_execution_statistics.body_dict[\"features\"]).head()" ] }, { "cell_type": "code", "execution_count": null, "id": "ec57f3eb-4fb1-4a0d-ba61-9f4aef53298e", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(last_execution_violations.body_dict[\"violations\"]).head()" ] }, { "cell_type": "markdown", "id": "c59e3043-775a-4d6b-8890-136095c28a85", "metadata": {}, "source": [ "See the baseline and the latest data profiling statistics:" ] }, { "cell_type": "code", "execution_count": null, "id": "4e0fc10e-fc5a-46c4-8690-372864c8b03b", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(data_monitor.baseline_statistics().body_dict[\"features\"]).head()" ] }, { "cell_type": "code", "execution_count": null, "id": "56a18ba4-5a16-4026-a3aa-794f3a2410b8", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(data_monitor.latest_monitoring_statistics().body_dict[\"features\"]).head()" ] }, { "cell_type": "markdown", "id": "4ba1d859-3753-49ea-ad78-d18f0b5a8bec", "metadata": {}, "source": [ "#### View a violation report\n", "Model monitor outputs any violations compared to the baseline to a violation report. You can access the latest violation report via the ModelMonitor object. \n", "Note, that the violation report returned by the following code cell might be empty, if there were no executions of the monitoring schedule. You must wait until the scheduled monitoring job finishes to see the violation report." 
] }, { "cell_type": "code", "execution_count": null, "id": "7e31de1c-77b3-4992-a2ab-237367c96340", "metadata": { "tags": [] }, "outputs": [], "source": [ "violations = data_monitor.latest_monitoring_constraint_violations()" ] }, { "cell_type": "code", "execution_count": null, "id": "a1d74e83-0c67-41cf-9b61-e53eb42125e2", "metadata": { "tags": [] }, "outputs": [], "source": [ "violations" ] }, { "cell_type": "code", "execution_count": null, "id": "5e9773b1-0f77-48f7-8090-7dcacfc0f15c", "metadata": { "tags": [] }, "outputs": [], "source": [ "if not violations:\n", " print(\"No constraint violations report found\")\n", "else:\n", " violations_df = pd.json_normalize(violations.body_dict[\"violations\"]).head()" ] }, { "cell_type": "code", "execution_count": null, "id": "a5e698a8-68ee-493f-bf1b-ffddea569762", "metadata": { "tags": [] }, "outputs": [], "source": [ "violations_df" ] }, { "cell_type": "markdown", "id": "438f0f52-484c-4600-98e6-ea85e03f4507", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "id": "ef2669f3-a09b-4ca8-9a17-a706239f16c4", "metadata": {}, "source": [ "## Part 2: Monitor model quality\n", "Model quality monitoring jobs monitor the performance of a model by comparing the predictions that the model makes with the actual ground truth labels that the model attempts to predict. To do this, model quality monitoring merges data that is captured from real-time inference with actual labels (ground truth labels) that you store in an Amazon S3 bucket, and then compares the predictions with the ground truth labels.\n", "\n", "Model quality monitoring follows the same steps as data quality monitoring, but adds an additional step of merging the ground truth labels from Amazon S3 with the predictions captured from the real-time inference endpoint.\n", "\n", "To monitor model quality, follow these steps:\n", "1. Enable [data capture](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-data-capture.html)\n", "1. [Create a baseline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality-baseline.html). A baseline job compares predictions from the model with ground truth labels in a baseline dataset\n", "1. [Schedule monitoring jobs](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality-schedule.html)\n", "1. [Ingest ground truth labels](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality-merge.html) that model monitor merges with captured prediction data from real-time inference endpoint\n", "1. [Intepret the results](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-interpreting-results.html)\n", "1. [Integrate model quality monitoring](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality-cw.html) with Amazon CloudWatch and Amazon EventBridge\n", "\n", "![](img/model-monitoring-architecture.png)\n", "\n", "In the following sections you implement the model quality monitoring in this lab environment." ] }, { "cell_type": "markdown", "id": "eb60e48d-2299-4538-9c5d-0a7d70d2d397", "metadata": {}, "source": [ "### Define helper functions\n", "Some helper functions for model quality monitoring setup." 
] }, { "cell_type": "code", "execution_count": null, "id": "77eaeb16-580e-4bf2-bab2-2b0ee7ef9a43", "metadata": { "tags": [] }, "outputs": [], "source": [ "def generate_ground_truth_with_id(inference_id):\n", " # set random seed to get consistent results\n", " random.seed(inference_id) \n", " rand = random.random()\n", " \n", " # format required by the merge container.\n", " return {\n", " \"groundTruthData\": {\n", " \"data\": \"0\" if rand < 0.5 else \"1\", #str(rand),\n", " \"encoding\": \"CSV\",\n", " },\n", " \"eventMetadata\": {\n", " \"eventId\": str(inference_id), # eventId must correlate with the eventId in the data capture file\n", " },\n", " \"eventVersion\": \"0\",\n", " }" ] }, { "cell_type": "code", "execution_count": null, "id": "10f7761d-11f9-4d6c-8881-71739bbcb8f1", "metadata": { "tags": [] }, "outputs": [], "source": [ "def upload_ground_truth(ground_truth_upload_s3_url, file_name, records, upload_time):\n", " target_s3_uri = f\"{ground_truth_upload_s3_url}/{upload_time:%Y/%m/%d/%H}/{file_name}\"\n", " number_of_records = len(records.split('\\n'))\n", " print(f\"Uploading {number_of_records} records to {target_s3_uri}\")\n", " \n", " S3Uploader.upload_string_as_file_body(records, target_s3_uri)\n", " \n", " return target_s3_uri" ] }, { "cell_type": "markdown", "id": "424b2e00-e78f-4240-9e2d-036eeda422c2", "metadata": {}, "source": [ "### Create a model quality monitor\n", "Use the Python SDK class [`ModelQualityMonitor`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.ModelQualityMonitor) to create a model quality monitor and interact with it:" ] }, { "cell_type": "code", "execution_count": null, "id": "de436c73-5f6e-462e-af0e-bf9b9888dbc3", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_monitor = ModelQualityMonitor(\n", " role=sm_role,\n", " instance_count=1,\n", " instance_type='ml.m5.xlarge',\n", " volume_size_in_gb=20,\n", " max_runtime_in_seconds=1800,\n", " sagemaker_session=session\n", ")" ] }, { "cell_type": "markdown", "id": "9cb71347-f441-4bcf-b992-ef175b147f8d", "metadata": {}, "source": [ "### Run a model quality baseline job\n", "Your model building pipeline in the [step 3](03-sagemaker-pipeline.ipynb) notebook saved the model predictions on the test dataset. Now you use the model monitor to establish a [model performance baseline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality-baseline.html). The baseline dataset contains three columns with `predictions`, `probability`, and `label` values." 
] }, { "cell_type": "code", "execution_count": null, "id": "24e53a69-7442-4ca4-ab9d-4284361e11e9", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {prediction_baseline_s3_url}/" ] }, { "cell_type": "code", "execution_count": null, "id": "46ac144f-2021-4c11-a700-78c0a0d4986d", "metadata": { "tags": [] }, "outputs": [], "source": [ "prediction_baseline_results_s3_url = f\"{prediction_baseline_s3_url}/results\"\n", "model_mon_reports_s3_url = f\"{prediction_baseline_s3_url}/reports\"\n", "prediction_baseline_dataset_uri = f\"{prediction_baseline_s3_url}/prediction_baseline.csv\"" ] }, { "cell_type": "code", "execution_count": null, "id": "bd45888f-ce36-4123-b2dd-0ad5b04af059", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_baseline_job_name = f\"from-idea-to-prod-model-baselining-{strftime('%d-%H-%M-%S', gmtime())}-{str(uuid.uuid4())[:8]}\"\n", "\n", "model_baseline_job = model_monitor.suggest_baseline(\n", " baseline_dataset=prediction_baseline_dataset_uri,\n", " dataset_format=DatasetFormat.csv(header=True),\n", " output_s3_uri = prediction_baseline_results_s3_url, \n", " problem_type=\"BinaryClassification\",\n", " inference_attribute= \"prediction\", # The column in the dataset that contains predictions\n", " probability_attribute= \"probability\", # The column in the dataset that contains probabilities\n", " ground_truth_attribute= \"label\", # The column in the dataset that contains ground truth labels\n", " job_name=model_baseline_job_name,\n", ")\n", "\n", "print(model_baseline_job_name)" ] }, { "cell_type": "markdown", "id": "60e4afe1", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "code", "execution_count": null, "id": "baa07041-8a17-4119-9e0e-eb01d292f5bc", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_baseline_job.wait(logs=False)" ] }, { "cell_type": "markdown", "id": "cb7e286c", "metadata": {}, "source": [ "\"Time" ] }, { "cell_type": "markdown", "id": "8e523654-e418-471f-a1ef-792c8b5c2c65", "metadata": {}, "source": [ "### Inspect the generated baseline statistics and constraints\n" ] }, { "cell_type": "code", "execution_count": null, "id": "822be080-92db-4bc9-8d44-1f0fc5de7c74", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {prediction_baseline_results_s3_url}/" ] }, { "cell_type": "code", "execution_count": null, "id": "a574b321-d852-4fc0-a365-50d467df4ab3", "metadata": { "tags": [] }, "outputs": [], "source": [ "latest_model_baseline_job = model_monitor.latest_baselining_job\n", "pd.DataFrame(latest_model_baseline_job.suggested_constraints().body_dict[\"binary_classification_constraints\"]).T" ] }, { "cell_type": "code", "execution_count": null, "id": "33693b7a-1939-43aa-b386-68f8e6fb4001", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.DataFrame(latest_model_baseline_job.baseline_statistics().body_dict[\"binary_classification_metrics\"][\"confusion_matrix\"])" ] }, { "cell_type": "code", "execution_count": null, "id": "357a3234-3711-498b-a6f1-41c5e75a6f16", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(latest_model_baseline_job.baseline_statistics().body_dict[\"binary_classification_metrics\"]).T" ] }, { "cell_type": "markdown", "id": "684e7f90-9d9a-4f56-a2f2-53fdd20dd7db", "metadata": { "tags": [] }, "source": [ "### Generate endpoint traffic\n", "Generate synthetic traffic to the endpoint to capture inference input and output." 
] }, { "cell_type": "code", "execution_count": null, "id": "a2264390-6e98-49bd-b6cb-c44a1b537543", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Remove previous data capture saved to the S3 bucket\n", "latest_data_capture_s3_url = get_latest_data_capture_s3_url(data_capture_bucket, data_capture_prefix)" ] }, { "cell_type": "code", "execution_count": null, "id": "90f61c9b-930f-4386-a19e-79aeb05a84d6", "metadata": { "tags": [] }, "outputs": [], "source": [ "# If you run this notebook not the first time, there might be some data capture files from the previous runs\n", "# We recommend to delete all existing files under the data capture S3 path to avoid any inconsistences\n", "# Uncomment and run the following line to delete all files under the data capture S3 path\n", "\n", "# !aws s3 rm {latest_data_capture_s3_url} --recursive" ] }, { "cell_type": "code", "execution_count": null, "id": "9c7a800a-2ade-4351-853b-f664e548380d", "metadata": { "tags": [] }, "outputs": [], "source": [ "test_x.shape" ] }, { "cell_type": "code", "execution_count": null, "id": "e2f37968-40d3-4cbb-a385-b6584b0b4292", "metadata": { "tags": [] }, "outputs": [], "source": [ "generate_endpoint_traffic(predictor, test_x)" ] }, { "cell_type": "markdown", "id": "f7f40678-d789-465c-9203-3e820712b8c6", "metadata": {}, "source": [ "Wait until captured data appears in the Amazon S3 bucket, it may take several minutes. The capture data is delivered to the Amazon S3 prefix `{data-capture-prefix}/{EndpointName}/{VariantName}/{year}/{month}/{day}/{UTC hour}`." ] }, { "cell_type": "code", "execution_count": null, "id": "f1cf1d60-12af-45e4-8f46-4a43900e9438", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 ls {data_capture_s3_url} --recursive" ] }, { "cell_type": "markdown", "id": "70d39787-cad6-4939-be12-230e3100bc6a", "metadata": {}, "source": [ "### Ingest ground truth data\n", "
💡 Run this section only after the capture data from the latest endpoint invocations has appeared in the Amazon S3 bucket. The capture data is organized based on the UTC hour in which the invocation happened.\n", "
\n", "\n", "For model monitoring you must have ground truths labels that the model monitor merges with captured inference data from the endpoint.\n", "\n", "In this lab environment you generate synthetic ground truth data to use with the model quality monitoring. In a real-time project you need to implement a workflow to produce and store the ground truth labels to evaluate the quality of the model predictions.\n", "\n", "The following code cells generate and save synthetic ground truth labels for all inference records in the latest capture data files." ] }, { "cell_type": "code", "execution_count": null, "id": "01919bfe-55e4-4c0f-b1bd-343f61b0454e", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Set the S3 url where to store the ground truth labels\n", "variant_name = sm.describe_endpoint(EndpointName=predictor.endpoint_name)[\"ProductionVariants\"][0][\"VariantName\"]\n", "ground_truth_upload_s3_url = f\"s3://{data_capture_bucket}/ground_truth_data/{predictor.endpoint_name}/{variant_name}\"\n", "ground_truth_upload_s3_url" ] }, { "cell_type": "code", "execution_count": null, "id": "bd62a82a-248d-48f5-a500-501cd89d1929", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get the S3 prefix where the latest capture data has been delivered\n", "latest_data_capture_s3_url = get_latest_data_capture_s3_url(data_capture_bucket, data_capture_prefix)\n", "latest_data_capture_prefix = '/'.join(latest_data_capture_s3_url.split('/')[3:])" ] }, { "cell_type": "code", "execution_count": null, "id": "8421cde5-a756-4549-a4bb-79b5b8eeb78f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get the list of capture data file prefixes in the latest capture data location\n", "capture_files = get_file_list(data_capture_bucket, latest_data_capture_prefix)\n", "\n", "assert capture_files, f\"No capture data files found in {latest_data_capture_prefix}. 
Generate endpoint traffic and wait until capture data appears in the bucket!\"\n", "\n", "# For each capture data file get the eventIds and generate correlated ground truth labels\n", "for f in capture_files:\n", " f_name = f.split('/')[-1]\n", " \n", " print(f\"Downloading {f}\")\n", " S3Downloader.download(f\"s3://{data_capture_bucket}/{f}\", \"./tmp\")\n", " \n", " print(f\"Reading inference ids from the file: ./tmp/{f_name}\")\n", " with jsonlines.open(f\"./tmp/{f_name}\") as reader: \n", " ground_truth_records = \"\\n\".join([\n", " json.dumps(r) for r in [generate_ground_truth_with_id(l[\"eventMetadata\"][\"eventId\"]) for l in reader]\n", " ])\n", " lastest_ground_truth_s3_uri = upload_ground_truth(ground_truth_upload_s3_url, f\"gt-{f_name}\", ground_truth_records, datetime.utcnow())" ] }, { "cell_type": "code", "execution_count": null, "id": "499403b3-a5bc-4983-9fbc-3c60a39e3c8f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# List uploaded ground truth files\n", "!aws s3 ls {ground_truth_upload_s3_url} --recursive" ] }, { "cell_type": "markdown", "id": "9e8011e9-76b6-4cda-8b33-8e04acaf29b8", "metadata": {}, "source": [ "Download the last ingested ground truth data file and see it's content:" ] }, { "cell_type": "code", "execution_count": null, "id": "8e22dc2f-625a-40ac-a064-7cc51dfe928d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Download the last ground truth file to Studio's EFS\n", "!aws s3 cp {lastest_ground_truth_s3_uri} ./tmp/groundtruth.jsonl" ] }, { "cell_type": "code", "execution_count": null, "id": "5d2036e2-1722-4932-bf8f-d57af9650549", "metadata": { "tags": [] }, "outputs": [], "source": [ "!head ./tmp/groundtruth.jsonl" ] }, { "cell_type": "markdown", "id": "45ff543f-4d49-47e5-9138-fe69ef775b5c", "metadata": {}, "source": [ "### Create a model monitoring schedule\n", "Now after you have the capture data and the ground truth data, you can create a model monitoring schedule.\n", "Use [`create_monitoring_schedule()`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.ModelQualityMonitor.create_monitoring_schedule) method of the `ModelQualityMonitor` class to create a model quality monitoring schedule." 
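,
"\n",
"The schedule below uses an hourly cron expression. If you prefer a different cadence, `CronExpressionGenerator` also provides daily variants, for example (a sketch; check the SDK documentation for the exact signatures available in your SageMaker SDK version):\n",
"\n",
"```python\n",
"# Run the monitoring job once a day at 06:00 UTC\n",
"daily_cron = CronExpressionGenerator.daily(hour=6)\n",
"\n",
"# Or run it every 6 hours starting at midnight UTC\n",
"every_six_hours_cron = CronExpressionGenerator.daily_every_x_hours(hour_interval=6, starting_hour=0)\n",
"\n",
"print(daily_cron, every_six_hours_cron)\n",
"```"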
] }, { "cell_type": "code", "execution_count": null, "id": "df813eb3-1ad0-4e41-97ad-f8021128f856", "metadata": { "tags": [] }, "outputs": [], "source": [ "endpoint_input = EndpointInput(\n", " endpoint_name=predictor.endpoint_name,\n", " probability_attribute=\"0\",\n", " probability_threshold_attribute=0.5,\n", " destination=\"/opt/ml/processing/input_data\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "c5fbed44-580e-4997-afe9-7f1c5635bb99", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_mon_schedule_name = \"from-idea-to-prod-model-monitor-schedule-\" + strftime(\n", " \"%Y-%m-%d-%H-%M-%S\", gmtime()\n", ")\n", "\n", "model_monitor.create_monitoring_schedule(\n", " monitor_schedule_name=model_mon_schedule_name,\n", " endpoint_input=endpoint_input,\n", " problem_type=\"BinaryClassification\",\n", " # record_preprocessor_script=f\"{record_preprocessor_s3_url}/record_preprocessor.py\",\n", " # post_analytics_processor_script=s3_code_postprocessor_uri,\n", " output_s3_uri=model_mon_reports_s3_url,\n", " ground_truth_input=ground_truth_upload_s3_url,\n", " constraints=model_monitor.suggested_constraints() if model_monitor.latest_baselining_job else f\"{prediction_baseline_results_s3_url}/constraints.json\",\n", " schedule_cron_expression=CronExpressionGenerator.hourly(),\n", " enable_cloudwatch_metrics=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "d1c54675-c6d7-4d05-bd5c-d430489ce799", "metadata": { "tags": [] }, "outputs": [], "source": [ "while model_monitor.describe_schedule()['MonitoringScheduleStatus'] != \"Scheduled\":\n", " print(f\"Waiting until model monitoring status becomes Scheduled\")\n", " time.sleep(3)\n", " \n", "model_monitor.describe_schedule()" ] }, { "cell_type": "markdown", "id": "384a0f86-6b37-43c2-9992-d20661b5679f", "metadata": {}, "source": [ "The endpoint has two scheduled monitors now, a data quality and a model quality monitor:" ] }, { "cell_type": "code", "execution_count": null, "id": "40335a7c-5d6b-4c2f-8e06-00bdb680f2dc", "metadata": { "tags": [] }, "outputs": [], "source": [ "predictor.list_monitors()" ] }, { "cell_type": "markdown", "id": "428f1f22-ba50-44e4-a1de-b750c9ace1ca", "metadata": {}, "source": [ "### See model monitoring schedule executions\n", "
💡 You created a model monitoring schedule which runs every hour. You need to wait until you cross the hour boundary to see any executions.\n", "
\n", " \n", "A monitoring job started by the schedule looks for the ground truth data under the Amazon S3 prefix `{ground_truth_upload_s3_url}/{year}/{month}/{day}/{UTC hour}/`. If there is no ground truth label datasets under this prefix, the model monitoring job fails with an exception `No S3 objects found under S3 URL ...`. In the previous section **Ingest ground truth data** you created a synthetic ground truth dataset and saved it under the correct prefix.\n", "\n", "Model quality monitor runs two processing jobs for each schedule execution:\n", "1. A ground truth merge job to contatenate capture data and ground truth label datasets based on the `eventId`\n", "2. A model quality monitoring job to evaluate model performance compared to the baseline\n", "\n", "You can see these two jobs for each monitor execution in the SageMaker console under **Processing jobs**:\n", "![](img/model-quality-monitor-execution.png)" ] }, { "cell_type": "markdown", "id": "720b8ab8-6568-4902-ad4f-7879ff0492dd", "metadata": {}, "source": [ "#### Inspect the lastest model monitor execution" ] }, { "cell_type": "code", "execution_count": null, "id": "7b1ca57b-52b6-4187-984a-6a0c02909970", "metadata": { "tags": [] }, "outputs": [], "source": [ "# call describe_schedule to see the status of the latest completed execution\n", "model_monitor.describe_schedule()" ] }, { "cell_type": "code", "execution_count": null, "id": "e8087765-9d5a-4eef-b064-1a757e313c7f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# List all _completed_ model monitor executions\n", "model_mon_executions = model_monitor.list_executions()" ] }, { "cell_type": "code", "execution_count": null, "id": "286fdb24-37f0-4545-a206-20b73a7c8b5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_mon_executions" ] }, { "cell_type": "code", "execution_count": null, "id": "e01eb7ea-c815-4c8b-91a4-f7a9039240ba", "metadata": { "tags": [] }, "outputs": [], "source": [ "# See the details of the latest model monitor execution\n", "latest_model_mon_execution = get_latest_monitor_execution(model_monitor)\n", "execution_details = latest_model_mon_execution.describe()\n", "execution_details" ] }, { "cell_type": "markdown", "id": "a94d5085-d022-45f5-88f3-b12bca0a9599", "metadata": {}, "source": [ "#### See the execution reports\n", "Each completed model monitor execution produces new statistics, constraints, and violations reports for the capture data. 
You have various ways to access these reports:\n", "- directly access the files on Amazon S3 under the job output S3 uri\n", "- use the Python SDK class [`MonitoringExecution`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.MonitoringExecution)\n", "- use [`latest_monitoring_statistics`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.ModelMonitor.latest_monitoring_statistics) and [`latest_monitoring_constraint_violations`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.ModelMonitor.latest_monitoring_constraint_violations) methods of the [`ModelMonitor`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.ModelMonitor) class" ] }, { "cell_type": "code", "execution_count": null, "id": "20112efa-8372-4123-aedc-1e37a0534276", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get the job output S3 uri\n", "mon_job_output_s3_uri = execution_details[\"ProcessingOutputConfig\"][\"Outputs\"][0][\"S3Output\"][\"S3Uri\"]\n", "mon_job_output_s3_uri" ] }, { "cell_type": "code", "execution_count": null, "id": "896d72ec-c106-435d-929e-07c56a322047", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Same S3 uri is accessible via the MonitoringExecution class\n", "latest_model_mon_execution.output.destination" ] }, { "cell_type": "code", "execution_count": null, "id": "86b5bf7d-f909-443a-8225-93524c834eb9", "metadata": { "tags": [] }, "outputs": [], "source": [ "# See the generated files - new statistics, constraints, and violations\n", "!aws s3 ls {mon_job_output_s3_uri} --recursive" ] }, { "cell_type": "markdown", "id": "d59ab484-6689-4a09-8994-f8b7a89c8a2e", "metadata": {}, "source": [ "
💡 Since you generated random synthetic ground truth labels, you expect to see some violations, more specifically, `LessThanThreshold` constraint violation for various model performance metrics, such as `auc`, `accuracy`, and `precision`.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "id": "86d5dcb9-660a-4422-931d-01c38661b15c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get the violation report from the MonitoringExecution class\n", "last_execution_violations = latest_model_mon_execution.constraint_violations()" ] }, { "cell_type": "code", "execution_count": null, "id": "4c8dae03-ff18-4435-9e9e-e25d57711f6d", "metadata": { "tags": [] }, "outputs": [], "source": [ "pd.json_normalize(last_execution_violations.body_dict[\"violations\"]).head()" ] }, { "cell_type": "markdown", "id": "b658cdda-c533-461c-8033-ec135c035c44", "metadata": {}, "source": [ "You can access the violation report directly from the model monitor class:" ] }, { "cell_type": "code", "execution_count": null, "id": "fa734732-5542-4481-af50-233f862e865d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Use the ModelMonitor class\n", "violations = model_monitor.latest_monitoring_constraint_violations()\n", "pd.json_normalize(violations.body_dict[\"violations\"]).head()" ] }, { "cell_type": "markdown", "id": "026cca77-b033-48fc-bb97-513685ad13b3", "metadata": { "tags": [] }, "source": [ "#### See the merged datasets\n", "Finally let's take a look on the merged datasets generated by the merge job. The merged dataset contains inference input, inference output, and the ingested ground truth labels. The inference output and the ground truth are connected via `eventMetadata.eventId` identifier." ] }, { "cell_type": "code", "execution_count": null, "id": "3c3afef7-08d7-45d7-9e5c-afc353075a7e", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get the S3 url to the merge datasets from the monitor job inputs\n", "mon_job_merge_input_s3_uri = execution_details[\"ProcessingInputs\"][1][\"S3Input\"][\"S3Uri\"]\n", "\n", "mon_job_merge_bucket = mon_job_merge_input_s3_uri.split('/')[2]\n", "mon_job_merge_prefix = '/'.join(mon_job_merge_input_s3_uri.split('/')[3:])" ] }, { "cell_type": "code", "execution_count": null, "id": "84bcd4fd-de01-49d2-bbb8-2ec1d602ff38", "metadata": { "tags": [] }, "outputs": [], "source": [ "merge_files = get_file_list(mon_job_merge_bucket, mon_job_merge_prefix)\n", "\n", "if merge_files:\n", " S3Downloader.download(f\"s3://{mon_job_merge_bucket}/{merge_files[0]}\", f\"./tmp\")\n", "\n", " print(f\"Content of the merge file:\")\n", " # Read the jsonl file and show two first objects\n", " with jsonlines.open(f\"./tmp/{merge_files[0].split('/')[-1]}\") as reader: \n", " print(json.dumps(reader.read(), indent=2))\n", " print(json.dumps(reader.read(), indent=2))" ] }, { "cell_type": "markdown", "id": "59071f22-f07e-446e-834e-f7bc2d44f40b", "metadata": {}, "source": [ "## Additional monitoring\n", "Additionally to data and model quality monitoring with Model Monitor, you can use Amazon SageMaker Clarify to:\n", "- [Monitor bias drift](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-model-monitor-bias-drift.html)\n", "- [Monitor feature attribution drift](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-model-monitor-feature-attribution-drift.html)\n", "\n", "Refer to a sample notebook [Monitoring bias drift and feature attribution drift Amazon SageMaker Clarify](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker_model_monitor/fairness_and_explainability/SageMaker-Model-Monitor-Fairness-and-Explainability.html) for a hands-on example and more details." 
] }, { "cell_type": "markdown", "id": "2057c12f-1aae-4087-8930-4150543ff38a", "metadata": {}, "source": [ "## Use SageMaker Studio for data and model monitoring\n", "You can use Studio UX to enable and configure data and model monitoring and to visualize results. You can view the details of any monitoring job run, and you can create charts that show the baseline and captured values for any metric that the monitoring job calculates.\n", "\n", "Navigate to **Home** to the left side bar and choose **Deployments** and then **Endpoints** in the list. Click on an endpoint for which you would like to configure the model monitoring:\n", "\n", "![](img/endpoints.png)\n", "\n", "In the displayed **Endpoint details** pane you can configure data and model monitoring:\n", "\n", "![](img/model-monitoring-ux.png)" ] }, { "cell_type": "markdown", "id": "14ed36be-5547-481a-90c4-3d8647db0957", "metadata": {}, "source": [ "## Clean-up resources\n", "Stop and remove monitoring schedule for the endpont." ] }, { "cell_type": "code", "execution_count": null, "id": "75f003d8-48b0-432d-9a6a-6fea273098c5", "metadata": { "tags": [] }, "outputs": [], "source": [ "for monitor in predictor.list_monitors():\n", " try:\n", " monitor.stop_monitoring_schedule()\n", " monitor.delete_monitoring_schedule()\n", " except botocore.exceptions.ClientError as e:\n", " if e.response['Error']['Code'] == 'ValidationException':\n", " print(f\"ValidationException: {e.response['Error']['Message']}. Wait until the monitoring job is done and run the cell again.\")\n", " else:\n", " raise e" ] }, { "cell_type": "markdown", "id": "aa3fc9ec-caa3-4adf-9b9a-66bf6ba3ec71", "metadata": {}, "source": [ "### Final clean-up\n", "This is the last notebook in this workshop. If you are finished with exploration, to avoid charges on your AWS account, run the [clean-up notebook](99-clean-up.ipynb).\n", "\n", "
\n", "You have at least one real-time endpoint active in your AWS account. To avoid charges, you must delete the endpoint. Go to the clean-up notebook.\n", "
" ] }, { "cell_type": "markdown", "id": "8ea675fa-2d8f-4b15-b9cd-91104a96edee", "metadata": {}, "source": [ "## Further development ideas for your real-world projects\n", "- Add [visualizations](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker_model_monitor/visualization/SageMaker-Model-Monitor-Visualize.html) for model monitoring reports\n", "- Add data baselining, explainability report generation, and bias report to the model building pipeline\n", "- Implement [model quality monitoring](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality.html)\n", "- Try different inference options such as [serverless](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html) or [asynchronous](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) inference\n", "- Address security considerations for your ML environment and solutions. Start with the developer guide [Security in Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/security.html)\n", "- Implement [deployment guardrails](https://docs.aws.amazon.com/sagemaker/latest/dg/deployment-guardrails.html) to control how to update your models in production" ] }, { "cell_type": "markdown", "id": "82b9b6df-94d1-49ee-a21b-42b153d3fe4c", "metadata": {}, "source": [ "## Additional resources\n", "- [AmazonSageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models](https://assets.amazon.science/97/cc/8dc8526547859351f46d2710aba9/amazon-sagemaker-model-monitor-a-system-for-real-time-insights-into-deployed-machine-learning-models.pdf)\n", "- [Monitor models for data and model quality, bias, and explainability](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html)\n", "- [Monitor data quality](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-data-quality.html)\n", "- [Model Monitor visualizations](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker_model_monitor/visualization/SageMaker-Model-Monitor-Visualize.html)\n", "- [Monitor Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-overview.html)\n", "- [Monitoring a Model in Production](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-model-monitor.html)\n", "- [ModelMonitor for batch transform jobs](https://aws.amazon.com/about-aws/whats-new/2022/10/amazon-sagemaker-model-monitor-batch-transform-jobs/)\n", "- [Security in Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/security.html)\n", "- [Deployment guardrails](https://docs.aws.amazon.com/sagemaker/latest/dg/deployment-guardrails.html)\n", "- [Design a compelling record filtering method with Amazon SageMaker Model Monitor](https://aws.amazon.com/blogs/machine-learning/design-a-compelling-record-filtering-method-with-amazon-sagemaker-model-monitor/)" ] }, { "cell_type": "markdown", "id": "ee54a7b1-0e4a-424c-9283-07c62c1a4e2a", "metadata": {}, "source": [ "# Shutdown kernel" ] }, { "cell_type": "code", "execution_count": null, "id": "c9cc068d-e111-46db-aaef-01fc30fa5069", "metadata": {}, "outputs": [], "source": [ "%%html\n", "\n", "

Shutting down your kernel for this notebook to release resources.

\n", "\n", " \n", "" ] }, { "cell_type": "code", "execution_count": null, "id": "a0c5fa65-6e71-4e91-9449-0ea57a5261c1", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 21, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": 
"Compute optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 28, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 29, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": 
"Memory Optimized", "gpuNum": 0, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 5 }