{ "cells": [ { "cell_type": "markdown", "id": "08ac32ff", "metadata": {}, "source": [ "# Amazon SageMaker Model Monitoring " ] }, { "cell_type": "markdown", "id": "20263275", "metadata": {}, "source": [ "ML Monitoring is a critical MLOps capability to reduce risk and manage safe and reliable production machine learning systems at scale. SageMaker contains several integrated services such as [SageMaker Model Monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html) and [SageMaker Clarify](https://aws.amazon.com/sagemaker/clarify/) to monitor models for data and model quality, bias, and feature attribution drift.\n", "\n", "Amazon SageMaker Model Monitor continuously monitors the quality of Amazon SageMaker machine learning models in production. With Model Monitor, you can set alerts that notify you when there are deviations in the model quality. Early and proactive detection of these deviations enables you to take corrective actions, such as retraining models, auditing upstream systems, or fixing quality issues without having to monitor models manually or build additional tooling. Model Monitor integrates with SageMaker Clarify to provide pre-built and extendable monitors to get start with monitoring your ML models faster." ] }, { "cell_type": "markdown", "id": "f7afc79d", "metadata": {}, "source": [ "In this lab, you will learn how to:\n", " * Capture inference requests, results, and metadata from our pipeline deployed model.\n", " * Schedule a model monitor to monitor model performance on a regular schedule." ] }, { "cell_type": "markdown", "id": "0b0bf436", "metadata": {}, "source": [ "While each monitor requires task-specific configurations, the standardized monitoring setup workflow you will follow is:\n", "\n", "1. Initialize a monitoring object\n", "2. Configure and run a baseline job to contrastively compare results\n", "3. Schedule continuous monitoring\n", "\n", "The goal of this lab is that you walk through the code and understand how get started with monitoring your machine learning models with SageMaker Model Monitor. 
" ] }, { "cell_type": "markdown", "id": "5f7925b0", "metadata": { "tags": [] }, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "id": "baaed67f", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install \"sagemaker>=2.123.0\"" ] }, { "cell_type": "code", "execution_count": null, "id": "dd35a7df", "metadata": { "tags": [] }, "outputs": [], "source": [ "from datetime import datetime, timedelta\n", "import pandas as pd\n", "import time\n", "import csv\n", "import json\n", "import boto3\n", "import sagemaker\n", "\n", "region = boto3.Session().region_name\n", "sagemaker_session = sagemaker.session.Session()\n", "role = sagemaker.get_execution_role()\n", "default_bucket = sagemaker_session.default_bucket()\n", "\n", "sagemaker_client = sagemaker_session.sagemaker_client\n", "sagemaker_runtime_client = sagemaker_session.sagemaker_runtime_client\n", "\n", "from sagemaker.predictor import Predictor\n", "from sagemaker.serializers import CSVSerializer\n", "\n", "from sagemaker.clarify import (\n", " BiasConfig,\n", " DataConfig,\n", " ModelConfig,\n", " ModelPredictedLabelConfig,\n", " SHAPConfig,\n", ")\n", "\n", "from sagemaker.model_monitor import (\n", " BiasAnalysisConfig,\n", " CronExpressionGenerator,\n", " DataCaptureConfig,\n", " EndpointInput,\n", " ExplainabilityAnalysisConfig,\n", " ModelBiasMonitor,\n", " ModelExplainabilityMonitor,\n", " DefaultModelMonitor,\n", " ModelQualityMonitor,\n", ")\n", "\n", "from sagemaker.model_monitor.dataset_format import DatasetFormat\n", "\n", "from sagemaker.s3 import S3Downloader, S3Uploader" ] }, { "cell_type": "code", "execution_count": null, "id": "084b7afa", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(f\"AWS region: {region}\")\n", "# A different bucket can be used, but make sure the role for this notebook has\n", "# the s3:PutObject permissions. 
This is the bucket into which the data is captured.\n", "print(f\"S3 Bucket: {default_bucket}\")\n", "\n", "# Endpoint metadata.\n", "# Note: you will use the staging endpoint from the previous lab, just as you would in a real scenario, to verify your\n", "# monitoring setup before deploying it to production endpoints.\n", "endpoint_name = \"workshop-project-staging\"\n", "endpoint_instance_count = 1\n", "endpoint_instance_type = \"ml.m5.large\"\n", "print(f\"Endpoint: {endpoint_name}\")\n", "\n", "prefix = \"sagemaker/xgboost-dm-model-monitoring\"\n", "s3_key = f\"s3://{default_bucket}/{prefix}\"\n", "print(f\"S3 key: {s3_key}\")\n", "\n", "s3_capture_upload_path = f\"{s3_key}/data_capture\"\n", "s3_ground_truth_upload_path = f\"{s3_key}/ground_truth_data/{datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "s3_baseline_results_path = f\"{s3_key}/baselines\"\n", "s3_report_path = f\"{s3_key}/reports\"\n", "\n", "print(f\"Capture path: {s3_capture_upload_path}\")\n", "print(f\"Ground truth path: {s3_ground_truth_upload_path}\")\n", "print(f\"Baselines path: {s3_baseline_results_path}\")\n", "print(f\"Report path: {s3_report_path}\")\n", "\n", "sm_client = boto3.client('sagemaker')\n", "\n", "endpoint_config = sm_client.describe_endpoint(EndpointName=endpoint_name)['EndpointConfigName']\n", "model_name = sm_client.describe_endpoint_config(EndpointConfigName=endpoint_config)['ProductionVariants'][0]['ModelName']\n", "\n", "print(\"Model name:\", model_name)" ] }, { "cell_type": "markdown", "id": "f705952b", "metadata": { "tags": [] }, "source": [ "## Configure data capture and generate synthetic traffic" ] }, { "cell_type": "markdown", "id": "5fef4ceb", "metadata": {}, "source": [ "Data quality monitoring automatically monitors machine learning (ML) models in production and notifies you when data quality issues arise. ML models in production have to make predictions on real-life data that is not carefully curated like most training datasets. If the statistical nature of the data that your model receives while in production drifts away from the nature of the baseline data it was trained on, the model begins to lose accuracy in its predictions. Amazon SageMaker Model Monitor uses rules to detect data drift and alerts you when it happens." ] }, { "cell_type": "markdown", "id": "8830bbad", "metadata": {}, "source": [ "### Initialize SageMaker Predictor for real-time requests to previously deployed model endpoint" ] }, { "cell_type": "code", "execution_count": null, "id": "7e70ac96", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create a Predictor object for real-time endpoint requests. https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html\n", "predictor = Predictor(endpoint_name=endpoint_name, serializer=CSVSerializer())" ] }, { "cell_type": "markdown", "id": "e74e58a6", "metadata": {}, "source": [ "**If you have previously run `sagemaker-data-quality-monitoring.ipynb`, you can ignore the next 2 cells.**" ] }, { "cell_type": "code", "execution_count": null, "id": "d1d8c2a6", "metadata": { "tags": [] }, "outputs": [], "source": [ "# # A DataCaptureConfig was created automatically when your model was deployed with data\n", "# # capture enabled in a prior lab. 
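\n", "# #\n", "# # Optionally, you can first confirm that capture is configured on this endpoint; the\n", "# # DataCaptureConfig key appears in the DescribeEndpointConfig response when capture\n", "# # was set up at deploy time (uncomment to run):\n", "# sm_client.describe_endpoint_config(EndpointConfigName=endpoint_config).get(\"DataCaptureConfig\")\n", "# #\n", "# # 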
The code below shows how to create a custom\n", "# # DataCaptureConfig with data capture enabled and update an existing endpoint.\n", "# data_capture_config = DataCaptureConfig(\n", "#     enable_capture=True,\n", "#     sampling_percentage=100,\n", "#     destination_s3_uri=s3_capture_upload_path,\n", "# )" ] }, { "cell_type": "code", "execution_count": null, "id": "078aadcd", "metadata": { "tags": [] }, "outputs": [], "source": [ "# # Now update the endpoint with data capture enabled and provide an s3_capture_upload_path.\n", "# predictor.update_data_capture_config(data_capture_config)" ] }, { "cell_type": "markdown", "id": "b014c2c4", "metadata": {}, "source": [ "Note: updating your endpoint data capture config can take 3-5 min. A progress bar will be displayed in the cell above and indicates completion with `---------------!` and the cell execution number. You will see your endpoint status as `Updating` under SageMaker resources > Endpoints while this is in progress and `InService` when your updated endpoint is ready for requests." ] }, { "cell_type": "markdown", "id": "3a7c12bb", "metadata": {}, "source": [ "### Invoke the deployed model endpoint to generate predictions" ] }, { "cell_type": "markdown", "id": "d65e4989", "metadata": {}, "source": [ "Now send data to this endpoint to get inferences in real time.\n", "\n", "With data capture enabled in the previous step, the request and response payloads, along with some additional metadata, are saved to the S3 location specified in `DataCaptureConfig`." ] }, { "cell_type": "code", "execution_count": null, "id": "e7824a4d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Read in the training set for its schema and to compute feature attribution baselines.\n", "train_df = pd.read_csv(\"train-headers.csv\")" ] }, { "cell_type": "code", "execution_count": null, "id": "2fa06809", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Use the test set to create a file without headers and labels to mirror the data format at inference time.\n", "test_df = pd.read_csv(\"test.csv\", header=None)\n", "test_df.drop(test_df.columns[0], axis=1, inplace=True)\n", "test_df.sample(180).to_csv('test-samples-no-header.csv', header=False, index=False)" ] }, { "cell_type": "markdown", "id": "27607ef7", "metadata": {}, "source": [ "Now send a test batch of 180 requests to the model endpoint. These inputs will be captured along with the endpoint's output predictions and sent to your `s3_capture_upload_path`." ] }, { "cell_type": "code", "execution_count": null, "id": "d8e15b2d", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(\"Sending test traffic to the endpoint {}. \\nPlease wait...\".format(endpoint_name))\n", "\n", "test_sample_df = pd.read_csv('test-samples-no-header.csv', header=None, index_col=False)\n", "\n", "response = predictor.predict(data=test_sample_df.to_numpy())\n", "\n", "print(\"Done!\")" ] }, { "cell_type": "markdown", "id": "b09323f5", "metadata": {}, "source": [ "### View captured data" ] }, { "cell_type": "markdown", "id": "591db672", "metadata": {}, "source": [ "List the data capture files stored in Amazon S3. 
\n", "\n", "There should be different files from different time periods organized in S3 based on the hour in which the invocation occurred in the format: \n", "\n", "`s3://{destination-bucket-prefix}/{endpoint-name}/{AllTraffic or model-variant-name}/yyyy/mm/dd/hh/filename.jsonl`" ] }, { "cell_type": "code", "execution_count": null, "id": "5bd02f95", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(\"Waiting 60 seconds for captures to show up\", end=\"\")\n", "\n", "for _ in range(60):\n", " capture_files = sorted(S3Downloader.list(f\"{s3_capture_upload_path}/{endpoint_name}\"))\n", " if capture_files:\n", " break\n", " print(\".\", end=\"\", flush=True)\n", " time.sleep(1)\n", "\n", "print(\"\\nFound Capture Files:\")\n", "print(\"\\n \".join(capture_files[-10:]))" ] }, { "cell_type": "markdown", "id": "36cf7c65", "metadata": {}, "source": [ "Next, view the content of a single capture file, looking at the first few lines in the captured file." ] }, { "cell_type": "code", "execution_count": null, "id": "abb86020", "metadata": { "tags": [] }, "outputs": [], "source": [ "capture_file = S3Downloader.read_file(capture_files[-1]).split(\"\\n\")[-10:-1]\n", "print(capture_file[-1])" ] }, { "cell_type": "markdown", "id": "95356ebe", "metadata": {}, "source": [ "View a single line is present below in a formatted JSON file." ] }, { "cell_type": "code", "execution_count": null, "id": "fec986c0", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(json.dumps(json.loads(capture_file[-1]), indent=2))" ] }, { "cell_type": "markdown", "id": "5114d73a", "metadata": {}, "source": [ "### Generate synthetic traffic" ] }, { "cell_type": "markdown", "id": "742f8f13", "metadata": {}, "source": [ "In order to review SageMaker's continuous monitoring capabilities, you will start a thread to generate synthetic traffic to send to the deployed model endpoint. \n", "\n", "The `WorkerThread` class will run continuously on the notebook kernel to generate predictions that are captured and sent to S3 until the kernel is restarted or the thread is explicitly terminated. \n", "\n", "See the cell in the `Cleanup` section to terminate the threads.\n", "\n", "This step is necessary because if there is no traffic, the monitoring jobs are marked as `Failed` since there is no data to process." ] }, { "cell_type": "markdown", "id": "42d9ca40", "metadata": {}, "source": [ "This cell extends a Python Thread class to be able to able to terminate the thread later on without terminating the notebook kernel." ] }, { "cell_type": "code", "execution_count": null, "id": "4d598c45", "metadata": { "tags": [] }, "outputs": [], "source": [ "import threading\n", "\n", "class WorkerThread(threading.Thread):\n", " def __init__(self, do_run, *args, **kwargs):\n", " super(WorkerThread, self).__init__(*args, **kwargs)\n", " self.__do_run = do_run\n", " self.__terminate_event = threading.Event()\n", "\n", " def terminate(self):\n", " self.__terminate_event.set()\n", "\n", " def run(self):\n", " while not self.__terminate_event.is_set():\n", " self.__do_run(self.__terminate_event)" ] }, { "cell_type": "markdown", "id": "0506f947", "metadata": {}, "source": [ "Now you define a function that your thread will invoke continuously to send test samples to the model endpoint." 
] }, { "cell_type": "code", "execution_count": null, "id": "d5b1fa72", "metadata": { "tags": [] }, "outputs": [], "source": [ "def invoke_endpoint(terminate_event):\n", " with open(\"test-samples-no-header.csv\", \"r\") as f:\n", " i = 0\n", " for row in f:\n", " payload = row.rstrip(\"\\n\")\n", " response = sagemaker_runtime_client.invoke_endpoint(\n", " EndpointName=endpoint_name,\n", " ContentType=\"text/csv\",\n", " Body=payload,\n", " InferenceId=str(i), # unique ID per row\n", " )\n", " i += 1\n", " response[\"Body\"].read()\n", " time.sleep(1)\n", " if terminate_event.is_set():\n", " break\n", "\n", "\n", "# Keep invoking the endpoint with test data\n", "invoke_endpoint_thread = WorkerThread(do_run=invoke_endpoint)\n", "invoke_endpoint_thread.start()" ] }, { "cell_type": "markdown", "id": "faaea3e1", "metadata": { "tags": [] }, "source": [ "### Generate synthetic ground truth data" ] }, { "cell_type": "markdown", "id": "1bcc81dd", "metadata": {}, "source": [ "Besides data capture, model monitoring execution also requires ground truth data.\n", "\n", "In real use cases, ground truth data should be regularly collected and uploaded to designated S3 location. \n", "\n", "The code block below is used to generate fake ground truth data. The first-party merge container will combine captured and ground truth data, and the merged data will be passed to the model bias monitoring job for analysis. Similar to data capture, the model bias monitoring execution will fail if there's no data to merge." ] }, { "cell_type": "code", "execution_count": null, "id": "719d47ab", "metadata": { "tags": [] }, "outputs": [], "source": [ "import random\n", "\n", "def ground_truth_with_id(inference_id):\n", " # set random seed to get consistent results.\n", " random.seed(inference_id) \n", " rand = random.random()\n", " # format required by the merge container.\n", " return {\n", " \"groundTruthData\": {\n", " # randomly generate positive labels 70% of the time.\n", " \"data\": \"1\" if rand < 0.7 else \"0\",\n", " \"encoding\": \"CSV\",\n", " },\n", " \"eventMetadata\": {\n", " \"eventId\": str(inference_id),\n", " },\n", " \"eventVersion\": \"0\",\n", " }\n", "\n", "\n", "def upload_ground_truth(upload_time):\n", " # 180 are the number of rows in data we're sending for inference.\n", " records = [ground_truth_with_id(i) for i in range(180)]\n", " fake_records = [json.dumps(r) for r in records]\n", " data_to_upload = \"\\n\".join(fake_records)\n", " target_s3_uri = f\"{s3_ground_truth_upload_path}/{upload_time:%Y/%m/%d/%H/%M%S}.jsonl\"\n", " print(f\"Uploading {len(fake_records)} records to\", target_s3_uri)\n", " S3Uploader.upload_string_as_file_body(data_to_upload, target_s3_uri)" ] }, { "cell_type": "code", "execution_count": null, "id": "ecabb140", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Generate data for the last hour.\n", "upload_ground_truth(datetime.utcnow() - timedelta(hours=1))" ] }, { "cell_type": "code", "execution_count": null, "id": "cf09dd11", "metadata": { "tags": [] }, "outputs": [], "source": [ "# You can also use the WorkerThread class to continue generating synthetic ground truth data once an hour.\n", "def generate_fake_ground_truth(terminate_event):\n", " upload_ground_truth(datetime.utcnow())\n", " for _ in range(0, 60):\n", " time.sleep(60)\n", " if terminate_event.is_set():\n", " break\n", "\n", "\n", "ground_truth_thread = WorkerThread(do_run=generate_fake_ground_truth)\n", "ground_truth_thread.start()" ] }, { "cell_type": "markdown", "id": "dc968f3e", "metadata": { 
"tags": [] }, "source": [ "## Monitor model quality" ] }, { "cell_type": "markdown", "id": "79186d9c", "metadata": {}, "source": [ "Model quality monitoring jobs monitor the performance of a model by comparing the predictions that the model makes with the actual ground truth labels that the model attempts to predict. To do this, model quality monitoring merges data that is captured from real-time inference with actual labels stored in S3, and then compares the predictions with the actual labels." ] }, { "cell_type": "markdown", "id": "b4e8bde5", "metadata": {}, "source": [ "### Define `ModelQualityMonitor`" ] }, { "cell_type": "markdown", "id": "5a6ad23c", "metadata": {}, "source": [ "First, define and configure a [`ModelQualityMonitor`](https://sagemaker.readthedocs.io/en/stable/api/inference/model_monitor.html#sagemaker.model_monitor.model_monitoring.ModelQualityMonitor) object." ] }, { "cell_type": "code", "execution_count": null, "id": "53342fbd", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_quality_monitor = ModelQualityMonitor(\n", " role=role,\n", " instance_count=1,\n", " instance_type='ml.m5.xlarge',\n", " volume_size_in_gb=20,\n", " max_runtime_in_seconds=1800,\n", " sagemaker_session=sagemaker_session\n", ")" ] }, { "cell_type": "markdown", "id": "1221ef10", "metadata": {}, "source": [ "### Run model quality baseline job" ] }, { "cell_type": "markdown", "id": "b27f3b52", "metadata": {}, "source": [ "Next, you run a model quality baseline job. As inputs, you need to provide a validation or test dataset with model predictions to establish a model performance baseline. For convenience and illustration, you are provided with `validation-with-predictions.csv` with the format `{probability}/{prediction}/{label}` for the `ModelQualityMonitor` to compute a performance baseline. In a real production environment, you should consider a feedback mechanism such as [SageMaker Augmented AI](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-use-augmented-ai-a2i-human-review-loops.html) for error analysis and creating ground truth model performance metrics to set a model performance baseline. Note you need to provide at least 200 samples to compute model performance metric standard deviations." ] }, { "cell_type": "markdown", "id": "eec118ce", "metadata": {}, "source": [ "Call the `suggest_baseline` method of the `ModelQualityMonitor` object to run a baseline job.\n", "\n", "Note: this step can take about 8-10 min." 
] }, { "cell_type": "code", "execution_count": null, "id": "7855ef9e", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_quality_baseline_job_name = f\"ModelQualityBaselineJob-{datetime.utcnow():%Y-%m-%d-%H%M}\"\n", "model_quality_baseline_job_result_uri = f\"{s3_baseline_results_path}/model_quality\"\n", "\n", "model_quality_baseline_job = model_quality_monitor.suggest_baseline(\n", " job_name=model_quality_baseline_job_name,\n", " baseline_dataset=\"validation-with-predictions.csv\", # The S3 location of the validation dataset.\n", " dataset_format=DatasetFormat.csv(header=True),\n", " output_s3_uri = model_quality_baseline_job_result_uri, # The S3 location to store the results.\n", " problem_type=\"BinaryClassification\",\n", " inference_attribute= \"prediction\", # The column in the dataset that contains predictions.\n", " probability_attribute= \"probability\", # The column in the dataset that contains probabilities.\n", " ground_truth_attribute= \"label\" # The column in the dataset that contains ground truth labels.\n", ")\n", "\n", "model_quality_baseline_job.wait(logs=False)" ] }, { "cell_type": "markdown", "id": "df54baa2", "metadata": {}, "source": [ "View the suggested model quality baseline constraints." ] }, { "cell_type": "code", "execution_count": null, "id": "e1d08356", "metadata": { "tags": [] }, "outputs": [], "source": [ "latest_model_quality_baseline_job = model_quality_monitor.latest_baselining_job\n", "pd.DataFrame(latest_model_quality_baseline_job.suggested_constraints().body_dict[\"binary_classification_constraints\"]).T" ] }, { "cell_type": "markdown", "id": "0ace0677", "metadata": {}, "source": [ "### Schedule continuous model quality monitoring" ] }, { "cell_type": "markdown", "id": "9bb303e6", "metadata": {}, "source": [ "You can create a model monitoring schedule for the endpoint created earlier.\n", "\n", "Use the baseline resources (constraints and statistics) to compare against the real-time traffic hourly." 
] }, { "cell_type": "code", "execution_count": null, "id": "c3279416", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_quality_monitor_schedule_name = (\n", " f\"xgboost-dm-model-monitoring-schedule-{datetime.utcnow():%Y-%m-%d-%H%M}\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "6ea26090", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create an EndpointInput configuration.\n", "endpointInput = EndpointInput(\n", " endpoint_name=predictor.endpoint_name,\n", " probability_attribute=\"0\",\n", " probability_threshold_attribute=0.5,\n", " destination=\"/opt/ml/processing/input_data\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "cd2ef352", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Define a monitoring schedule.\n", "response = model_quality_monitor.create_monitoring_schedule(\n", " monitor_schedule_name=model_quality_monitor_schedule_name,\n", " endpoint_input=endpointInput,\n", " output_s3_uri=model_quality_baseline_job_result_uri,\n", " problem_type=\"BinaryClassification\",\n", " ground_truth_input=s3_ground_truth_upload_path,\n", " constraints=latest_model_quality_baseline_job.suggested_constraints(),\n", " schedule_cron_expression=CronExpressionGenerator.hourly(),\n", " enable_cloudwatch_metrics=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "c5ba68bb", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Check the model monitor was created.\n", "predictor.list_monitors()" ] }, { "cell_type": "code", "execution_count": null, "id": "bffa14eb", "metadata": { "tags": [] }, "outputs": [], "source": [ "# You will see the monitoring schedule in the 'Scheduled' status.\n", "model_quality_monitor.describe_schedule()" ] }, { "cell_type": "code", "execution_count": null, "id": "7fe0b7b4", "metadata": {}, "outputs": [], "source": [ "# Initially there will be no executions since the first execution happens at the top of the hour\n", "# Note that it is common for the execution to launch up to 20 min after the hour.\n", "executions = model_quality_monitor.list_executions()\n", "executions[:5]" ] }, { "cell_type": "markdown", "id": "bcfd093f", "metadata": { "tags": [] }, "source": [ "## Cleanup" ] }, { "cell_type": "markdown", "id": "cbb5e464", "metadata": {}, "source": [ "Well done! If you are finished with the notebook, run the following cells to terminate lab resources and prevent continued charges." ] }, { "cell_type": "markdown", "id": "0658e5da", "metadata": {}, "source": [ "First, stop the worker threads." ] }, { "cell_type": "code", "execution_count": null, "id": "a5ca395d", "metadata": {}, "outputs": [], "source": [ "invoke_endpoint_thread.terminate()\n", "ground_truth_thread.terminate()" ] }, { "cell_type": "markdown", "id": "0e6f28ae", "metadata": {}, "source": [ "Finally, stop and then delete all monitors scheduled to the endpoint." ] }, { "cell_type": "markdown", "id": "c05f389d", "metadata": {}, "source": [ "If the following cell throws an error similar to `ClientError: An error occurred (ValidationException) when calling the DeleteMonitoringSchedule operation: can't delete schedule as it has in-progress executions`, wait a few minutes and run this cell again. You can't delete a monitor if a monitoring job is executing, once it is done, you can delete the monitoring schedule." 
] }, { "cell_type": "code", "execution_count": null, "id": "f1f4708f", "metadata": {}, "outputs": [], "source": [ "model_monitors = predictor.list_monitors()\n", "\n", "for monitor in model_monitors:\n", " monitor.stop_monitoring_schedule()\n", " monitor.delete_monitoring_schedule()" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-2:429704687514:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }