{ "cells": [ { "cell_type": "markdown", "id": "4bc92d39-dbce-4079-b943-884deb2373c6", "metadata": {}, "source": [ "# Leverage deployment guardrails to update a HuggangFace SageMaker Inference endpoint using canary traffic shifting\n", "\n", "***\n", "This notebooks is designed to run on `Python 3 (Data Science 2.0)` kernel in Amazon SageMaker Studio\n", "***\n", "\n", "This notebook is developed based on the [SageMaker github examples](https://github.com/aws/amazon-sagemaker-examples/tree/main/sagemaker-inference-deployment-guardrails)\n", "\n", "We will perform following steps:\n", "1. [Introduction](#Introduction)\n", "2. [Setup](#Setup)\n", "3. [Step 1: Deploy the models created in the previous notebooks](#Step-1:-Deploy-the-models-created-in-the-previous-notebooks)\n", "4. [Step 2: Invoke Endpoint](#Step-2:-Invoke-Endpoint)\n", "5. [Step 3: Create CloudWatch alarms to monitor Endpoint performance](#Step-3:-Create-CloudWatch-alarms-to-monitor-Endpoint-performance)\n", "6. [Step 4: Update Endpoint with deployment configurations](#Step-4:-Update-Endpoint-with-deployment-configurations)" ] }, { "cell_type": "markdown", "id": "96ab1685-af71-48b3-b008-1e18fa71b2ed", "metadata": {}, "source": [ "## Introduction\n", "\n", "Deployment guardrails are a set of model deployment options in Amazon SageMaker Inference to update your machine learning models in production. Using the fully managed deployment guardrails, you can control the switch from the current model in production to a new one. Traffic shifting modes, such as canary and linear, give you granular control over the traffic shifting process from your current model to the new one during the course of the update. There are also built-in safeguards such as auto-rollbacks that help you catch issues early and take corrective action before they impact production.\n", "\n", "We support blue-green deployment with multiple traffic shifting modes. A traffic shifting mode is a configuration that specifies how endpoint traffic is routed to a new fleet containing your updates. The following traffic shifting modes provide you with different levels of control over the endpoint update process:\n", "\n", "* **All-At-Once Traffic Shifting** : shifts all of your endpoint traffic from the blue fleet to the green fleet. Once the traffic has shifted to the green fleet, your pre-specified Amazon CloudWatch alarms begin monitoring the green fleet for a set amount of time (the “baking period”). If no alarms are triggered during the baking period, then the blue fleet is terminated.\n", "* **Canary Traffic Shifting** : lets you shift one small portion of your traffic (a “canary”) to the green fleet and monitor it for a baking period. If the canary succeeds on the green fleet, then the rest of the traffic is shifted from the blue fleet to the green fleet before terminating the blue fleet.\n", "* **Linear Traffic Shifting** : provides even more customization over how many traffic-shifting steps to make and what percentage of traffic to shift for each step. 
"\n", "\n", "The deployment guardrails feature for Amazon SageMaker Inference endpoints also lets you specify conditions/alarms based on endpoint invocation metrics from CloudWatch to detect model performance regressions and trigger an automatic rollback.\n", "\n", "In this notebook we'll update the endpoint with the following deployment configuration:\n", " * A Blue/Green update policy with the **Canary traffic shifting option**\n", " * CloudWatch alarms configured to monitor model performance and trigger the auto-rollback action.\n", " \n", "To demonstrate Canary deployments and the auto-rollback feature, we will update an Endpoint with an incompatible model version and deploy it as a Canary fleet, taking a small percentage of the traffic. Requests sent to this Canary fleet will result in errors, which will be used to trigger a rollback using pre-specified CloudWatch alarms. Finally, we will also demonstrate a success scenario where no alarms are tripped and the update succeeds. \n", "\n", "This notebook is organized in 4 steps:\n", "* Step 1 creates the endpoint configurations required for the 3 scenarios (the baseline, the update containing the incompatible model version, and the update containing the correct model version) and deploys the baseline endpoint. \n", "* Step 2 invokes the baseline Endpoint prior to the update. \n", "* Step 3 specifies the CloudWatch alarms used to trigger the rollbacks. \n", "* Finally, in Step 4, we update the endpoint to trigger a rollback and then demonstrate a successful update. " ] }, { "cell_type": "markdown", "id": "72f9c6fb-9995-41a8-ae56-440d4f1f3667", "metadata": {}, "source": [ "## Setup \n", "Ensure that you have an updated version of boto3, which includes the latest SageMaker features." ] }, { "cell_type": "code", "execution_count": null, "id": "fe00d135-52d6-4347-bc60-5209bee52a2e", "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import time\n", "import os\n", "import sys\n", "import boto3\n", "import datetime\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "p = os.path.abspath('..')\n", "if p not in sys.path:\n", " sys.path.append(p)\n", "import utils\n", "\n", "sm_session = sagemaker.Session()\n", "role = get_execution_role()\n", "region = sm_session.boto_region_name\n", "bucket = sm_session.default_bucket()\n", "sm_client = sm_session.sagemaker_client\n", "sm_runtime = sm_session.sagemaker_runtime_client\n", "cw = boto3.Session().client(\"cloudwatch\")\n", "prefix = \"sagemaker/huggingface-pytorch-sentiment-analysis\"\n", "\n", "time_now = f'{datetime.datetime.now():%Y-%m-%d-%H-%M-%S}'" ] }, { "cell_type": "code", "execution_count": null, "id": "45337c2c-18c8-49b9-af81-8be00008219f", "metadata": {}, "outputs": [], "source": [ "%store\n", "%store -r" ] }, { "cell_type": "markdown", "id": "77cefe16-18f2-402e-978d-2b08fa1bafce", "metadata": {}, "source": [ "## Step 1: Deploy the models created in the previous notebooks\n", "\n", "### First, we create endpoint configurations based on the previously created models \n", "\n", "\n", "The models in this example are used to analyse the sentiment of a given sentence. The dataset we use is based on a subset of a Kaggle dataset available [here](https://www.kaggle.com/datasets/sbhatti/financial-sentiment-analysis). \n", "\n", "We now create three EndpointConfigs, corresponding to the three models created in the previous notebooks."
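,
    "\n",
    "\n",
    "Before creating the endpoint configurations, it is worth confirming that the model names and instance type stored by the previous notebooks were actually restored by `%store -r`. The check below is a minimal sketch; the variable names are the ones used in the next cell:\n",
    "\n",
    "```python\n",
    "# Sanity check (sketch): these variables are expected to have been restored by %store -r\n",
    "for name in [\"roberta_model_name\", \"distilbert_model_name\", \"roberta_mme_model_name\", \"deploy_instance_type\"]:\n",
    "    assert name in globals(), f\"Missing stored variable: {name}. Re-run the previous notebooks.\"\n",
    "```"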
] }, { "cell_type": "code", "execution_count": null, "id": "5a21964a-04e7-47d2-9fcb-79598e58a0ca", "metadata": {}, "outputs": [], "source": [ "ep_config_name_roberta = f\"hf-EpConfig-roberta-{time_now}\"\n", "ep_config_name_distilbert = f\"hf-EpConfig-distilbert-{time_now}\"\n", "ep_config_name_roberta_mme = f\"hf-EpConfig-roberta-mme-{time_now}\"\n", "\n", "print(f\"Endpoint Config 1: {ep_config_name_roberta}\")\n", "print(f\"Endpoint Config 2: {ep_config_name_distilbert}\")\n", "print(f\"Endpoint Config 3: {ep_config_name_roberta_mme}\")\n", "\n", "resp = sm_client.create_endpoint_config(\n", " EndpointConfigName=ep_config_name_roberta,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": \"AllTraffic\",\n", " \"ModelName\": roberta_model_name,\n", " \"InstanceType\": deploy_instance_type,\n", " \"InitialInstanceCount\": 3,\n", " }\n", " ],\n", ")\n", "print(f\"Created Endpoint Config: {resp}\")\n", "time.sleep(5)\n", "\n", "resp = sm_client.create_endpoint_config(\n", " EndpointConfigName=ep_config_name_distilbert,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": \"AllTraffic\",\n", " \"ModelName\": distilbert_model_name,\n", " \"InstanceType\": deploy_instance_type,\n", " \"InitialInstanceCount\": 3,\n", " }\n", " ],\n", ")\n", "print(f\"Created Endpoint Config: {resp}\")\n", "time.sleep(5)\n", "\n", "resp = sm_client.create_endpoint_config(\n", " EndpointConfigName=ep_config_name_roberta_mme,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": \"AllTraffic\",\n", " \"ModelName\": roberta_mme_model_name,\n", " \"InstanceType\": deploy_instance_type,\n", " \"InitialInstanceCount\": 3,\n", " }\n", " ],\n", ")\n", "print(f\"Created Endpoint Config: {resp}\")\n", "time.sleep(5)" ] }, { "cell_type": "markdown", "id": "eef68aa9-dd5e-472f-baf3-d0d8ae64ad96", "metadata": {}, "source": [ "### Create Endpoint\n", "\n", "Deploy the roberta model to a new SageMaker endpoint:" ] }, { "cell_type": "code", "execution_count": null, "id": "fdd7d903-2ae5-4817-b76c-7a7f9c5556aa", "metadata": {}, "outputs": [], "source": [ "endpoint_name = f\"hf-deployment-guardrails-canary-{time_now}\"\n", "print(f\"Endpoint Name: {endpoint_name}\")\n", "\n", "resp = sm_client.create_endpoint(EndpointName=endpoint_name, EndpointConfigName=ep_config_name_roberta_mme)\n", "print(f\"\\nCreated Endpoint: {resp}\")" ] }, { "cell_type": "markdown", "id": "1dff4509-50ae-4c8d-a1dc-1f536d1dda8a", "metadata": {}, "source": [ "Wait for the endpoint creation to complete." ] }, { "cell_type": "code", "execution_count": null, "id": "8f6b7019-1e2c-465f-82b6-ee6b67816075", "metadata": {}, "outputs": [], "source": [ "%%time\n", "utils.endpoint_creation_wait(endpoint_name)" ] }, { "cell_type": "markdown", "id": "c7ad2486-aaaa-420d-9d13-bfc166a0de1e", "metadata": {}, "source": [ "## Step 2: Invoke Endpoint\n", "\n", "You can now send data to this endpoint to get inferences in real time.\n", "\n", "This step invokes the endpoint with included sample data with maximum invocations count and waiting intervals. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "f0d14b8b-aad7-4511-912f-9022eafae99d", "metadata": {}, "outputs": [], "source": [ "utils.invoke_endpoint_max_invocations(endpoint_name, max_invocations=100)" ] }, { "cell_type": "markdown", "id": "e4b88aca-2407-495e-9ace-a91a602d95b3", "metadata": {}, "source": [ "### Invocations Metrics\n", "\n", "Amazon SageMaker emits metrics such as Latency and Invocations per variant/Endpoint Config (full list of metrics [here](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html)) in Amazon CloudWatch.\n", "\n", "Query CloudWatch to get number of Invocations and latency metrics per variant and endpoint configuration." ] }, { "cell_type": "markdown", "id": "f52c9bf3-e6e2-44a2-be0b-e9076846c015", "metadata": {}, "source": [ "### Plot endpoint invocation metrics:\n", "\n", "Below, we are going to plot graphs to show the Invocations,Invocation4XXErrors,Invocation5XXErrors,ModelLatency and OverheadLatency against the Endpoint.\n", "\n", "You will observe that there should be a flat line for Invocation4XXErrors and Invocation5XXErrors as we are using the correct invocation data, model and configs. Additionally, ModelLatency and OverheadLatency will start decreasing over time." ] }, { "cell_type": "code", "execution_count": null, "id": "d7adacc6-1b61-4f26-88f0-9630aaa47869", "metadata": {}, "outputs": [], "source": [ "invocation_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, ep_config_name_roberta_mme, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "invocation_4xx_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"Invocation4XXErrors\", \"Sum\"\n", ")\n", "invocation_5xx_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"Invocation5XXErrors\", \"Sum\"\n", ")\n", "model_latency_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"ModelLatency\", \"Average\"\n", ")\n", "overhead_latency_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"OverheadLatency\", \"Average\"\n", ")" ] }, { "cell_type": "markdown", "id": "09c6cd68-3216-4148-8857-95f8a8c0e87d", "metadata": {}, "source": [ "## Step 3: Create CloudWatch alarms to monitor Endpoint performance\n", "\n", "Create CloudWatch alarms to monitor Endpoint performance with following metrics:\n", "* Invocation5XXErrors\n", "* ModelLatency\n", "\n", "Following metric dimensions are used to select the metric per Endpoint config and variant:\n", "* EndpointName\n", "* VariantName" ] }, { "cell_type": "code", "execution_count": null, "id": "a85d6703-c852-425e-abba-b5560801cbdf", "metadata": {}, "outputs": [], "source": [ "error_alarm = f\"TestAlarm-4XXErrors-{endpoint_name}\"\n", "latency_alarm = f\"TestAlarm-ModelLatency-{endpoint_name}\"\n", "\n", "# alarm on 1 4xx error rate for 1 minute\n", "utils.create_auto_rollback_alarm(\n", " error_alarm, endpoint_name, \"AllTraffic\", \"Invocation4XXErrors\", \"Sum\", 1\n", ")\n", "# alarm on model latency >= 200 ms for 1 minute\n", "utils.create_auto_rollback_alarm(\n", " latency_alarm, endpoint_name, \"AllTraffic\", \"ModelLatency\", \"Average\", 200000\n", ")\n", "time.sleep(60)" ] }, { "cell_type": "code", "execution_count": null, "id": "9a865e69-f5c0-424c-936e-8133c034f323", "metadata": {}, "outputs": [], "source": [ "# cw.describe_alarms(AlarmNames=[error_alarm, latency_alarm])" ] }, { "cell_type": "markdown", "id": 
"e4cc182f-b3c0-4ece-ad8c-e9693f037129", "metadata": {}, "source": [ "## Step 4: Update Endpoint with deployment configurations\n", "\n", "Update the endpoint with deployment configurations and monitor the performance from CloudWatch metrics.\n" ] }, { "cell_type": "markdown", "id": "1ba641a7-5b7d-4919-a750-a486c525ffe8", "metadata": {}, "source": [ "### BlueGreen update policy with Canary traffic shifting\n", "\n", "We define the following deployment configuration to perform Blue/Green update strategy with Canary traffic shifting from old to new stack. The Canary traffic shifting option can reduce the blast ratio of a regressive update to the endpoint. In contrast, for the All-At-Once traffic shifting option, the invocation requests start failing at 100% after flipping the traffic. In the Canary mode, invocation requests are shifted to the new version of model gradually, preventing errors from impacting 100% of your traffic. Additionally, the auto-rollback alarms monitor the metrics during the canary stage." ] }, { "cell_type": "markdown", "id": "defdcdba-5bae-4686-8df0-051db223e44a", "metadata": {}, "source": [ "### Rollback Case \n", "![Rollback case](images/scenario-canary-rollback.png)\n", "\n", "Update the Endpoint with an incompatible model version with the test data format to simulate errors and trigger a rollback." ] }, { "cell_type": "code", "execution_count": null, "id": "14f8ca5b-9b31-49f2-8480-112dd5442f4e", "metadata": {}, "outputs": [], "source": [ "canary_deployment_config = {\n", " \"BlueGreenUpdatePolicy\": {\n", " \"TrafficRoutingConfiguration\": {\n", " \"Type\": \"CANARY\",\n", " \"CanarySize\": {\n", " \"Type\": \"INSTANCE_COUNT\", # or use \"CAPACITY_PERCENT\" as 30%, 50%\n", " \"Value\": 1,\n", " },\n", " \"WaitIntervalInSeconds\": 300, # wait for 5 minutes before enabling traffic on the rest of fleet\n", " },\n", " \"TerminationWaitInSeconds\": 120, # wait for 2 minutes before terminating the old stack\n", " \"MaximumExecutionTimeoutInSeconds\": 1800, # maximum timeout for deployment\n", " },\n", " \"AutoRollbackConfiguration\": {\n", " \"Alarms\": [{\"AlarmName\": error_alarm}, {\"AlarmName\": latency_alarm}],\n", " },\n", "}\n", "\n", "# update endpoint request with new DeploymentConfig parameter\n", "sm_client.update_endpoint(\n", " EndpointName=endpoint_name,\n", " EndpointConfigName=ep_config_name_roberta,\n", " DeploymentConfig=canary_deployment_config,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "5d19a359-63f0-47dc-87ec-7c90ccdbe284", "metadata": {}, "outputs": [], "source": [ "sm_client.describe_endpoint(EndpointName=endpoint_name)" ] }, { "cell_type": "markdown", "id": "aa34731f-a933-4b1e-82e8-043689095e1a", "metadata": {}, "source": [ "### We invoke the endpoint during the update operation is in progress.\n", "\n", "**Note : Invoke endpoint in this notebook is in single thread mode, to stop the invoke requests please stop the cell execution**\n", "\n", "The E's denote the errors generated from the incompatible model version in the canary fleet.\n", "\n", "The purpose of the below cell is to simulate errors in the canary fleet. Since the nature of traffic shifting to the canary fleet is probabilistic, you should wait until you start seeing errors. Then, you may proceed to stop the execution of the below cell. If not aborted, cell will run for 100 invocations." 
] }, { "cell_type": "code", "execution_count": null, "id": "4f44020f-4e1b-4bd8-9165-95c6f28da254", "metadata": {}, "outputs": [], "source": [ "utils.invoke_endpoint_max_invocations(endpoint_name, max_invocations=300)" ] }, { "cell_type": "markdown", "id": "6d9005e7-823d-4e2e-bae7-efcf908c6196", "metadata": {}, "source": [ "Wait for the update operation to complete and verify the automatic rollback." ] }, { "cell_type": "code", "execution_count": null, "id": "e355c2b4-a071-4b78-b1d7-4f47682fa727", "metadata": {}, "outputs": [], "source": [ "utils.endpoint_update_wait(endpoint_name)\n", "\n", "sm_client.describe_endpoint(EndpointName=endpoint_name)" ] }, { "cell_type": "markdown", "id": "d1b705d7-c3d7-4d67-8e60-8610b9f201f0", "metadata": {}, "source": [ "Collect the endpoint metrics during the deployment:\n", "\n", "Below, we are going to plot graphs to show the Invocations,Invocation4XXErrors and ModelLatency against the Endpoint.\n", "\n", "You can expect to see as the new endpoint config-2 (erroneous due to model version) starts getting deployed, it encounters failure and leads to the rollback to endpoint config-1. This can be seen in the graphs below as the Invocation4XXErrors and ModelLatency increases during this rollback phase\n" ] }, { "cell_type": "code", "execution_count": null, "id": "f62f9682-ba76-4ce6-ade3-f7b9af87b84f", "metadata": {}, "outputs": [], "source": [ "invocation_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "metrics_epc_roberta = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, ep_config_name_roberta, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "metrics_epc_roberta_mme = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, ep_config_name_roberta_mme, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "\n", "metrics_all = invocation_metrics.join([metrics_epc_roberta, metrics_epc_roberta_mme], how=\"outer\")\n", "metrics_all.plot(title=\"Invocations-Sum\")\n", "\n", "invocation_5xx_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"Invocation4XXErrors\", \"Sum\"\n", ")\n", "model_latency_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"ModelLatency\", \"Average\"\n", ")" ] }, { "cell_type": "markdown", "id": "ea41a2e6-579d-4bd4-9ec5-e255f26892fe", "metadata": {}, "source": [ "We can check the alarm history by the cloudwatch DescribeAlarmHistory api call. However, please note that this notebook execution role doesn't have the IAM policy to allow this action. You can add the below IAM policy to the SageMaker execution role of your studio user profile from the IAM console.\n", "```json\n", "{\n", " \"Version\": \"2012-10-17\",\n", " \"Statement\": [\n", " {\n", " \"Sid\": \"VisualEditor0\",\n", " \"Effect\": \"Allow\",\n", " \"Action\": \"cloudwatch:DescribeAlarmHistory\",\n", " \"Resource\": \"*\"\n", " }\n", " ]\n", "}\n", "```\n", "\n", "Alternatively, you can open the Cloudwatch Alarm console page to view the alam stats. 
] }, { "cell_type": "code", "execution_count": null, "id": "fc9e8345-4ece-42d9-a6af-3f591477769f", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "time.sleep(60)\n", "cw.describe_alarm_history(AlarmName=error_alarm)" ] }, { "cell_type": "markdown", "id": "2e84debb-44db-4155-95b1-143c52091b0c", "metadata": {}, "source": [ "Let's take a look at the success case, where we use the same Canary deployment configuration but a valid endpoint configuration.\n", "\n", "### Success Case\n", "![Success case](images/scenario-canary-success.png)\n", "\n", "Now we show the success case, where the endpoint configuration is updated to a valid version (using the same Canary deployment config as in the rollback case)." ] }, { "cell_type": "markdown", "id": "c3c4276b-a375-483a-9b3a-9a74e7613003", "metadata": {}, "source": [ "Update the endpoint with the same Canary deployment configuration:" ] }, { "cell_type": "code", "execution_count": null, "id": "c5b20e6b-91a1-4530-ba00-e351234f6ecd", "metadata": {}, "outputs": [], "source": [ "# update the endpoint with a valid endpoint configuration, retaining the same DeploymentConfig\n", "\n", "sm_client.update_endpoint(\n", " EndpointName=endpoint_name,\n", " EndpointConfigName=ep_config_name_distilbert,\n", " RetainDeploymentConfig=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "bce951ff-ee21-4424-9957-dea57a77bbe7", "metadata": {}, "outputs": [], "source": [ "sm_client.describe_endpoint(EndpointName=endpoint_name)" ] }, { "cell_type": "markdown", "id": "f0801475-52c9-4506-b41b-74081c9e6db6", "metadata": {}, "source": [ "Invoke the endpoint while the update operation is in progress:" ] }, { "cell_type": "code", "execution_count": null, "id": "b1f93416-3a86-4790-a676-2fb62a41d487", "metadata": {}, "outputs": [], "source": [ "utils.invoke_endpoint_max_invocations(endpoint_name, max_invocations=300)" ] }, { "cell_type": "markdown", "id": "ea89f61f-d328-4141-a2aa-e9899ea8a818", "metadata": {}, "source": [ "Wait for the update operation to complete.\n", "\n", "While waiting, you can go to the SageMaker console to check the status of the endpoint, where you can see the endpoint configuration status changing as shown in the diagram.\n", "![during_update](images/during_endpoint_update.png)" ] }, { "cell_type": "code", "execution_count": null, "id": "c2589cb8-96e4-4e1e-9afe-6527a0f43a5f", "metadata": {}, "outputs": [], "source": [ "utils.endpoint_update_wait(endpoint_name)\n", "\n", "sm_client.describe_endpoint(EndpointName=endpoint_name)" ] }, { "cell_type": "markdown", "id": "555ab846-16a5-4430-b743-0c1a18c7bad4", "metadata": {}, "source": [ "Once the endpoint is in service, you can see that the endpoint configuration name has changed to the distilbert endpoint configuration, as shown below:\n", "![final_state](images/final_state_config.png)" ] }, { "cell_type": "markdown", "id": "79ed42ee-46ba-4889-9fea-0865adb9c0bf", "metadata": {}, "source": [ "Collect the endpoint metrics during the deployment:\n", "\n", "Below, we plot graphs showing the Invocations, Invocation4XXErrors and ModelLatency metrics for the endpoint.\n", "\n", "You can expect to see that, as the new endpoint configuration (endpoint config 2, distilbert, containing the correct model version) starts getting deployed, it takes over from the current configuration (endpoint config 3, roberta-mme) without any errors. This can be seen in the graphs below as the Invocation4XXErrors and ModelLatency metrics decrease during this transition phase."
] }, { "cell_type": "code", "execution_count": null, "id": "0a3841ec-ff77-40ea-a9bc-6e5386972f0f", "metadata": {}, "outputs": [], "source": [ "invocation_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "metrics_epc_1 = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, ep_config_name_roberta, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "metrics_epc_2 = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, ep_config_name_distilbert, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "metrics_epc_3 = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, ep_config_name_roberta_mme, \"AllTraffic\", \"Invocations\", \"Sum\"\n", ")\n", "\n", "\n", "invocation_4xx_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"Invocation4XXErrors\", \"Sum\"\n", ")\n", "model_latency_metrics = utils.plot_endpoint_invocation_metrics(\n", " endpoint_name, None, \"AllTraffic\", \"ModelLatency\", \"Average\"\n", ")" ] }, { "cell_type": "markdown", "id": "51215819-cf9a-402e-8946-1fe4f822e589", "metadata": {}, "source": [ "The Amazon CloudWatch metrics for the total invocations per endpoint configuration show how invocation requests are shifted from the old version to the new version during the deployment.\n", "\n", "You can now safely update your endpoint, monitor for model regressions during deployment, and trigger an automatic rollback when needed." ] }, { "cell_type": "markdown", "id": "c8fd60f9-5b7a-434c-a0dd-f321acde2c2e", "metadata": {}, "source": [ "# Cleanup \n", "\n", "If you do not plan to use this endpoint further, you should delete the endpoint to avoid incurring additional charges, and clean up the other resources created in this notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "9575a77e-f9b8-4017-94e5-8e1c3b277956", "metadata": {}, "outputs": [], "source": [ "sm_client.delete_endpoint(EndpointName=endpoint_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "d6dbfa8a-4bdb-42ed-bb58-02cd9b59c555", "metadata": {}, "outputs": [], "source": [ "sm_client.delete_endpoint_config(EndpointConfigName=ep_config_name_roberta)\n", "sm_client.delete_endpoint_config(EndpointConfigName=ep_config_name_distilbert)\n", "sm_client.delete_endpoint_config(EndpointConfigName=ep_config_name_roberta_mme)" ] }, { "cell_type": "code", "execution_count": null, "id": "18f554b3-7e48-44c9-a994-87cbc0d5a636", "metadata": {}, "outputs": [], "source": [ "cw.delete_alarms(AlarmNames=[error_alarm, latency_alarm])" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 2.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 5 }