{ "cells": [ { "cell_type": "markdown", "id": "2d7cd97c-6203-41a9-8651-0cf79ae783e0", "metadata": {}, "source": [ "# Register pretrained 🤗 models using SageMaker Model Registry - Deploy 🤗 Transformer models for inference with Shadow Deployment\n", "***\n", "This notebooks is designed to run on `Python 3 Data Science 2.0` kernel in Amazon SageMaker Studio\n", "***\n", "\n", "In this notebook, we will use [Hugging Face Inference DLCs and Pytorch DLCs](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) to deploy two pretrained transformer models for real-time inference. You will firstly register the models to Amazon SageMaker model registry and then deploy each model to a SageMaker real-time endpoint and invoke the endpoint with the test payload. \n", "This example will use [SageMaker boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html) (low level api). You can also use [SageMaker python sdk](https://github.com/aws/sagemaker-python-sdk) to achieve the same results.\n", "\n", "We will perform following steps:\n", "1. [Introduction](#Introduction) \n", "2. [Setup](#Setup)\n", "3. [Download and register HuggingFace Transformer models](#Download-and-register-HuggingFace-Transformer-models)\n", "4. [Deploy registered models for real-time inference](#Deploy-registered-models-for-real\\-time-inference)" ] }, { "cell_type": "markdown", "id": "d3f2a641-d9be-4913-a7f4-a62b46c45d51", "metadata": {}, "source": [ "## Introduction\n", "\n", "For inference, you can use your trained Hugging Face model or one of the pretrained Hugging Face models to deploy an inference job with SageMaker. You can also run inference jobs without having to write any custom inference code. With custom inference code, you can customize the inference logic by providing your own Python script.\n", "\n", "### How to deploy an inference job using the Hugging Face Deep Learning Containers\n", "You have two options for running inference with SageMaker. You can run inference using a model that you trained, or deploy a pre-trained Hugging Face model.\n", "\n", "* Run inference with your trained model: You have two options for running inference with your own trained model. You can run inference with a model that you trained using an existing Hugging Face model with the SageMaker Hugging Face Deep Learning Containers, or you can bring your own existing Hugging Face model and deploy it using SageMaker. When you run inference with a model that you trained with the SageMaker Hugging Face Estimator, you can deploy the model immediately after training completes or you can upload the trained model to an Amazon S3 bucket and ingest it when running inference later. If you bring your own existing Hugging Face model, you must upload the trained model to an Amazon S3 bucket and ingest that bucket when running inference.\n", "\n", "* Run inference with a pre-trained HuggingFace model: You can use one of the thousands of pre-trained Hugging Face models to run your inference jobs with no additional training needed. We will see this in our lab today.\n", "\n", "\n", "### SageMaker shadow testing overview\n", "Amazon SageMaker now enables you to evaluate any changes to your model serving infrastructure, consisting of the ML model, the serving container, or the ML instance by shadow testing its performance against the currently deployed one. Shadow testing can help you catch potential configuration errors and performance issues before they impact end users. 
"With SageMaker, you don’t need to invest in building your own shadow testing infrastructure, allowing you to focus on model development.\n", "\n", "You can use this to validate changes to any component of your production variant, namely the model, the container, or the instance, without any end user impact. It is useful in situations such as:\n", "\n", "- You are considering promoting a new model that has been validated offline to production, but want to evaluate operational performance metrics such as latency and error rate before making this decision\n", "- You are considering changes to your serving infrastructure container, such as patching vulnerabilities or upgrading to newer versions, and want to assess the impact of these changes prior to promotion\n", "- You are considering changing your ML instance and want to evaluate how the new instance would perform with live inference requests.\n", "\n", "Just select the production variant you want to test against, and SageMaker automatically deploys the new variant in shadow mode and routes a copy of the inference requests to it in real time within the same endpoint. Only the responses of the production variant are returned to the calling application. You can choose to discard or log the responses of the shadow variant for offline comparison.\n", "\n", "This notebook provides a walkthrough of the feature using the SageMaker Inference APIs." ] },
{ "cell_type": "markdown", "id": "53af8593-42f4-4006-9955-957e9d971312", "metadata": {}, "source": [ "**SageMaker Background**\n", "\n", "![arch](../img/03_lab_Shadow.png)\n", "\n", "A `production variant` consists of the ML model, serving container, and ML instance. Since each variant is independent of the others, you can have different models, containers, or instance types across variants. SageMaker lets you specify autoscaling policies on a per-variant basis so they can scale independently based on incoming load. SageMaker supports up to 10 production variants per endpoint. You can either configure a variant to receive a portion of the incoming traffic by setting variant weights, or specify the target variant in the incoming request. The response from the production variant is forwarded back to the invoker.\n", "\n", "A `shadow variant` (new) has the same components as a production variant. A user-specified portion of the requests, known as the traffic sampling percentage (the VariantWeight parameter in the ShadowProductionVariants object), is forwarded to the shadow variant. You can choose to log the response of the shadow variant in S3 or discard it. ",
"For an endpoint with a shadow variant, you can have a maximum of one production variant.\n", "\n", "You can monitor the [invocation metrics](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html) for both the production and shadow variants in CloudWatch under the AWS/SageMaker namespace." ] },
{ "cell_type": "markdown", "id": "39293ff6-eb24-4c39-8708-99a69b3945df", "metadata": {}, "source": [ "## Setup" ] },
{ "cell_type": "code", "execution_count": null, "id": "6eb49da9-0c65-4517-9270-34154ebdedcc", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-00\n", "%pip install -U transformers ipywidgets sagemaker torch==1.13.0 -q" ] },
{ "cell_type": "code", "execution_count": null, "id": "7a24b318-a232-405e-9a47-662b26ab5d27", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-01\n", "import datetime\n", "import json\n", "import os\n", "import shutil\n", "import sys\n", "import tarfile\n", "import time\n", "from pathlib import Path\n", "from uuid import uuid4\n", "\n", "import boto3\n", "import numpy as np\n", "import pandas as pd\n", "import sagemaker\n", "import torch\n", "from sagemaker import get_execution_role, image_uris\n", "from sagemaker.huggingface import HuggingFaceModel\n", "from sagemaker.s3 import S3Uploader, s3_path_join\n", "from transformers import AutoModel, AutoModelForSequenceClassification, AutoTokenizer, pipeline\n", "\n", "p = os.path.abspath(\"..\")\n", "if p not in sys.path:\n", "    sys.path.append(p)\n", "import utils" ] },
{ "cell_type": "markdown", "id": "fa65b2bb-2c53-44e0-aae7-608f9db1c505", "metadata": {}, "source": [ "### Useful objects and variables\n", "Common objects used to interact with the SageMaker API" ] },
{ "cell_type": "code", "execution_count": null, "id": "54fbf297-1542-4d34-a156-c46f1ebfd1da", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-02\n", "sm_session = sagemaker.Session()\n", "role = get_execution_role()\n", "bucket = sm_session.default_bucket()\n", "region = sm_session.boto_region_name\n", "sm_client = sm_session.sagemaker_client\n", "sm_runtime = boto3.client(\"sagemaker-runtime\")\n", "prefix = \"sagemaker/huggingface-pytorch-sentiment-analysis\"\n", "deploy_instance_type = \"ml.m5.xlarge\"\n", "%store deploy_instance_type\n", "\n", "# The name of the Model Package Group in Amazon SageMaker Model Registry\n", "model_package_group_name = \"HuggingFaceModels\"\n", "%store model_package_group_name\n", "\n", "print(region)\n", "print(role)\n", "print(bucket)" ] },
{ "cell_type": "markdown", "id": "13b2f677-2681-4789-9e2e-6aaf3f52fff9", "metadata": {}, "source": [ "## Download and prepare HuggingFace Transformer models" ] },
{ "cell_type": "code", "execution_count": null, "id": "d7264423-f5f5-4d2f-9a4d-35a41ab3ecd9", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-03\n", "HF_TASK = \"sentiment-analysis\"\n", "HF_MODEL_ROBERTA = \"cardiffnlp/twitter-roberta-base-sentiment\"\n", "HF_MODEL_DISTILBERT = \"distilbert-base-uncased-finetuned-sst-2-english\"\n", "\n", "%store HF_TASK\n", "%store HF_MODEL_ROBERTA\n", "%store HF_MODEL_DISTILBERT" ] },
{ "cell_type": "markdown", "id": "43b7e245-1a52-45c1-986c-88d730d3e5bb", "metadata": {}, "source": [ "### Download Hugging Face models\n", "#### twitter-roberta-base-sentiment Pretrained Model\n", "\n", "In this example we download the pre-trained `twitter-roberta-base-sentiment` model from the Hugging Face Hub. We will use this model to classify text as `Labels: 0 -> Negative; 1 -> Neutral; 2 -> Positive`."
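] },
{ "cell_type": "markdown", "id": "f0e1d2c3-0001-4a5b-8c9d-0a1b2c3d4e5f", "metadata": {}, "source": [ "As a quick sanity check, you can run the model locally through a `pipeline` before packaging it. This is a minimal sketch; the `LABEL_k` names are the model's default output labels, mapped here according to the label description above." ] },
{ "cell_type": "code", "execution_count": null, "id": "f0e1d2c3-0002-4a5b-8c9d-0a1b2c3d4e5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-03b (added): local sanity check of the roberta sentiment model\n", "clf = pipeline(HF_TASK, model=HF_MODEL_ROBERTA)\n", "label_map = {\"LABEL_0\": \"Negative\", \"LABEL_1\": \"Neutral\", \"LABEL_2\": \"Positive\"}\n", "for pred in clf([\"I love this!\", \"It was okay.\", \"Terrible service.\"]):\n", "    print(label_map.get(pred[\"label\"], pred[\"label\"]), round(pred[\"score\"], 3))"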
] }, { "cell_type": "code", "execution_count": null, "id": "503a371c-2895-4c8b-82d3-e387329ab9ee", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# cell-04\n", "model = AutoModelForSequenceClassification.from_pretrained(HF_MODEL_ROBERTA)\n", "tokenizer = AutoTokenizer.from_pretrained(HF_MODEL_ROBERTA)\n", "model.save_pretrained(\"model_token_roberta\")\n", "tokenizer.save_pretrained(\"model_token_roberta\")" ] },
{ "cell_type": "markdown", "id": "f1338317-3848-4486-ad35-4c9144a90555", "metadata": {}, "source": [ "#### Package the saved model in tar.gz format\n", "Once the model is downloaded, we need to package it (tokenizer and model weights) in the `.tar.gz` format expected by Amazon SageMaker." ] },
{ "cell_type": "code", "execution_count": null, "id": "9cb8fe73-4e1a-4643-9ea2-31894109a97b", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-05\n", "tar_file_roberta = \"model_roberta.tar.gz\"\n", "tar_size = utils.create_tar(tar_file_roberta, Path(\"model_token_roberta\"))\n", "print(f\"Created {tar_file_roberta}, size {tar_size:.2f} MB\")" ] },
{ "cell_type": "markdown", "id": "a477b138-4c2a-42b8-b1eb-90a0238140c1", "metadata": {}, "source": [ "#### Download distilbert-base-uncased-finetuned-sst-2-english by initiating a Hugging Face `pipeline`\n", "\n", "Pipelines are a great and easy way to use models for inference. They are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the [task summary](https://huggingface.co/transformers/task_summary.html) for examples of use." ] },
{ "cell_type": "code", "execution_count": null, "id": "99f245a8-8c79-4851-a071-2f321463d201", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-06\n", "local_artifact_path = Path(\"model_token_distilbert\")\n", "local_artifact_path.mkdir(exist_ok=True, parents=True)\n", "tar_file_distilbert = \"model_distilbert.tar.gz\"" ] },
{ "cell_type": "code", "execution_count": null, "id": "e3102b30-4366-448b-86c1-a1e6a90e18fe", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# cell-07\n", "sentiment_analysis = pipeline(HF_TASK, model=HF_MODEL_DISTILBERT)\n", "sentiment_analysis.save_pretrained(local_artifact_path)" ] },
{ "cell_type": "markdown", "id": "624b3ca3-336c-4160-a09d-a27d523de1a0", "metadata": {}, "source": [ "#### Write the Inference Script\n", "\n", "To deploy a pretrained `PyTorch` model, you'll need to use the `PyTorch` estimator object to create a `PyTorchModel` object and set a different `entry_point`.\n", "\n", "You'll use the `PyTorchModel` object to deploy a `PyTorchPredictor`. This creates a `SageMaker` Endpoint -- a hosted prediction service that we can use to perform inference.\n", "\n", "An implementation of `model_fn` is required for the inference script. The container provides default implementations of `input_fn`, `predict_fn` and `output_fn`, defined in [sagemaker-pytorch-containers](https://github.com/aws/sagemaker-pytorch-containers), which you can override as needed.\n", "\n", "Here's an example of the inference script:"
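] },
{ "cell_type": "markdown", "id": "f0e1d2c3-0003-4a5b-8c9d-0a1b2c3d4e5f", "metadata": {}, "source": [ "Below is a hedged sketch of what such a script could look like for this sentiment model. It is illustrative only; the actual file used in this lab is `../code/inference.py`, which you can inspect by uncommenting the `!cat` cells that follow." ] },
{ "cell_type": "code", "execution_count": null, "id": "f0e1d2c3-0004-4a5b-8c9d-0a1b2c3d4e5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Illustrative sketch only -- see ../code/inference.py for the actual script\n", "from transformers import AutoModelForSequenceClassification, AutoTokenizer\n", "\n", "\n", "def model_fn(model_dir):\n", "    # load the tokenizer and model weights packaged in model.tar.gz\n", "    tokenizer = AutoTokenizer.from_pretrained(model_dir)\n", "    model = AutoModelForSequenceClassification.from_pretrained(model_dir)\n", "    return model, tokenizer\n", "\n", "\n", "def predict_fn(data, model_and_tokenizer):\n", "    # `data` is the deserialized request; a dict like {\"inputs\": [...]} is assumed here\n", "    model, tokenizer = model_and_tokenizer\n", "    encoded = tokenizer(data[\"inputs\"], padding=True, truncation=True, return_tensors=\"pt\")\n", "    predictions = model(**encoded).logits.argmax(dim=-1).tolist()\n", "    return [model.config.id2label[p] for p in predictions]"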
] }, { "cell_type": "code", "execution_count": null, "id": "90ada110-bebc-457a-8381-bb84fe7caa16", "metadata": { "tags": [] }, "outputs": [], "source": [ "# !cat ../code/inference.py  # uncomment this line to see the details of the actual inference script" ] },
{ "cell_type": "code", "execution_count": null, "id": "d2452093-10ec-4b30-8f94-69ab11e2a30e", "metadata": { "tags": [] }, "outputs": [], "source": [ "# !cat ../code/requirements.txt  # uncomment this line to show the packages defined in requirements.txt" ] },
{ "cell_type": "markdown", "id": "c5007344-5718-48e4-8727-dbb201031c59", "metadata": {}, "source": [ "#### Create the directory structure for your model files\n", "\n", "The directory structure where you saved your PyTorch model should look like the following:\n", "\n", "```\n", "| model\n", "| |--pytorch_model.bin\n", "| |--config.json\n", "| |--vocab.txt\n", "| |--tokenizer.json\n", "| |--tokenizer_config.json\n", "| |--special_tokens_map.json\n", "|\n", "| code\n", "| |--inference.py\n", "| |--requirements.txt\n", "```\n", "\n", "Here, `requirements.txt` is an optional file that specifies dependencies on third-party libraries." ] },
{ "cell_type": "markdown", "id": "3cf07084-9391-46f2-96f8-50ae35c8a71c", "metadata": {}, "source": [ "#### Copy code to the model directory and tar the model and code" ] },
{ "cell_type": "code", "execution_count": null, "id": "b263a67f-2a69-4776-8b87-e815740d91eb", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-08\n", "shutil.copytree(\"../code\", \"model_token_distilbert/code\", dirs_exist_ok=True)\n", "tar_size = utils.create_tar(tar_file_distilbert, local_artifact_path)\n", "print(f\"Created {tar_file_distilbert}, size {tar_size:.2f} MB\")" ] },
{ "cell_type": "markdown", "id": "8653938b-6a27-42b2-aba6-86df2bdee1a3", "metadata": {}, "source": [ "#### Upload the models to S3\n", "\n", "We now have the model archives ready. We need to upload them to S3 before we can use them for hosting." ] },
{ "cell_type": "code", "execution_count": null, "id": "40328256-d993-4bf4-a677-b61f9a797968", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-09\n", "model_data_path = s3_path_join(\"s3://\", bucket, prefix + \"/models\")\n", "print(f\"Uploading Models to {model_data_path}\")\n", "model_roberta_uri = S3Uploader.upload(\"model_roberta.tar.gz\", model_data_path)\n", "print(f\"Uploaded roberta model to {model_roberta_uri}\")\n", "model_distilbert_uri = S3Uploader.upload(\"model_distilbert.tar.gz\", model_data_path)\n", "print(f\"Uploaded distilbert model to {model_distilbert_uri}\")\n", "%store model_data_path\n", "%store model_roberta_uri\n", "%store model_distilbert_uri"
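] },
{ "cell_type": "markdown", "id": "f0e1d2c3-0005-4a5b-8c9d-0a1b2c3d4e5f", "metadata": {}, "source": [ "Optionally, you can verify the uploads by listing the artifacts under the prefix; below is a small sketch using the SageMaker SDK's `S3Downloader`:" ] },
{ "cell_type": "code", "execution_count": null, "id": "f0e1d2c3-0006-4a5b-8c9d-0a1b2c3d4e5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-09b (added): optional -- list the uploaded artifacts to verify them\n", "from sagemaker.s3 import S3Downloader\n", "\n", "for uri in S3Downloader.list(model_data_path):\n", "    print(uri)"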
] }, { "cell_type": "markdown", "id": "06d10194-576e-4d15-b07d-cf0fdea138d0", "metadata": {}, "source": [ "### Deploy the two models as production and shadow variants to a real-time inference endpoint\n", "The first step in deploying a trained model to SageMaker Inference is to create a SageMaker Model using the create_model API." ] },
{ "cell_type": "markdown", "id": "6f63e854-2bf0-49e6-8153-3feed3fc8d10", "metadata": {}, "source": [ "#### Prebuilt HuggingFace DLC\n", "You can choose to use a prebuilt HuggingFace DLC as the inference image; it ships with the [SageMaker huggingface inference toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) for serving 🤗 Transformers models on Amazon SageMaker. The inference toolkit leverages the pipeline feature of the Transformers library to allow zero-code deployment of models, without requiring any code for pre- or post-processing (see the default [handler service](https://github.com/aws/sagemaker-huggingface-inference-toolkit/blob/main/src/sagemaker_huggingface_inference_toolkit/handler_service.py) provided by the inference toolkit for more information).\n", "\n", "In addition to zero-code deployment, the Inference Toolkit supports \"bring your own code\" methods, where you can override the default methods. You can learn more about \"bring your own code\" in the documentation [here](https://github.com/aws/sagemaker-huggingface-inference-toolkit#-user-defined-codemodules). In the second lab section, we will use the bring-your-own-code method to deploy models." ] },
{ "cell_type": "code", "execution_count": null, "id": "1e1ddda4-5a4f-4810-9def-19d9dffec132", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-10\n", "ml_domain = \"NATURAL_LANGUAGE_PROCESSING\"\n", "ml_task = \"OTHER\"\n", "ml_framework = \"PYTORCH\"\n", "framework_version = \"1.10.2\"\n", "# nearest_model = \"bert-base-uncased\"" ] },
{ "cell_type": "code", "execution_count": null, "id": "bad5f887-0b47-44b1-967f-e8e68ce50878", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-11\n", "framework = \"huggingface\"\n", "transformer_version = \"4.17.0\"\n", "py_version = \"py38\"\n", "instance_type = deploy_instance_type  # use the CPU image to match the ml.m5 instances used for deployment\n", "image_scope = \"inference\"\n", "\n", "inference_image_roberta = image_uris.retrieve(\n", "    framework=framework,\n", "    base_framework_version=ml_framework.lower() + framework_version,\n", "    region=region,\n", "    version=transformer_version,\n", "    py_version=py_version,\n", "    instance_type=instance_type,\n", "    image_scope=image_scope,\n", ")\n", "\n", "print(inference_image_roberta)" ] },
{ "cell_type": "markdown", "id": "9ad37381-6085-4ab2-aaa9-a2313f885a99", "metadata": {}, "source": [ "#### Prebuilt PyTorch DLC\n", "You can also use a SageMaker prebuilt [PyTorch DLC](https://github.com/aws/deep-learning-containers/tree/master/pytorch) to deploy the huggingface model. In this case, because the prebuilt PyTorch container doesn't include the transformers package, the model package contains a `requirements.txt` file listing the additional packages that need to be installed in the container. See section [Create the directory structure for your model files](#Create-the-directory-structure-for-your-model-files). We also included the `inference.py` file to define the necessary functions for model loading and model serving." ] },
{ "cell_type": "code", "execution_count": null, "id": "4c6b105b-ef78-4d79-87b0-d71673e641b1", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-12\n", "inference_image_distilbert = image_uris.retrieve(\n", "    framework=ml_framework.lower(),\n", "    region=region,\n", "    version=framework_version,\n", "    py_version=py_version,\n", "    instance_type=instance_type,\n", "    image_scope=image_scope,\n", ")\n", "\n", "print(inference_image_distilbert)"
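] },
{ "cell_type": "markdown", "id": "f0e1d2c3-0007-4a5b-8c9d-0a1b2c3d4e5f", "metadata": {}, "source": [ "The Setup section stores a `model_package_group_name` for Amazon SageMaker Model Registry. Below is a hedged sketch of how the two packaged models could be registered before deployment; the group description, supported content types and approval status are illustrative assumptions, not requirements of this lab." ] },
{ "cell_type": "code", "execution_count": null, "id": "f0e1d2c3-0008-4a5b-8c9d-0a1b2c3d4e5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-12b (added): hedged sketch -- register both models in the Model Registry\n", "try:\n", "    sm_client.create_model_package_group(\n", "        ModelPackageGroupName=model_package_group_name,\n", "        ModelPackageGroupDescription=\"Pretrained HuggingFace sentiment models\",  # illustrative\n", "    )\n", "except sm_client.exceptions.ClientError:\n", "    pass  # the group may already exist\n", "\n", "for image_uri, model_uri in [\n", "    (inference_image_roberta, model_roberta_uri),\n", "    (inference_image_distilbert, model_distilbert_uri),\n", "]:\n", "    pkg = sm_client.create_model_package(\n", "        ModelPackageGroupName=model_package_group_name,\n", "        InferenceSpecification={\n", "            \"Containers\": [{\"Image\": image_uri, \"ModelDataUrl\": model_uri}],\n", "            \"SupportedContentTypes\": [\"application/json\", \"text/csv\"],\n", "            \"SupportedResponseMIMETypes\": [\"application/json\"],\n", "        },\n", "        Domain=ml_domain,\n", "        Task=ml_task,\n", "        ModelApprovalStatus=\"Approved\",  # illustrative\n", "    )\n", "    print(pkg[\"ModelPackageArn\"])"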
] }, { "cell_type": "markdown", "id": "1ed48519-0aff-4cf5-9b09-bb9b4c187b44", "metadata": { "tags": [] }, "source": [ "#### Create SageMaker models" ] },
{ "cell_type": "markdown", "id": "4fffd377-7bb0-4297-8b40-e0a4247def1b", "metadata": {}, "source": [ "Create a SageMaker Model for each of the two artifacts using the create_model API." ] },
{ "cell_type": "code", "execution_count": null, "id": "65979d63-5422-4714-a6b3-4a56080c7c05", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-13\n", "# use a consistent timestamp for the model, endpoint config and endpoint names\n", "now = f\"{datetime.datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "\n", "roberta_model_name = f\"hf-pytorch-model-roberta-{now}\"\n", "print(f\"Model name : {roberta_model_name}\")\n", "%store roberta_model_name" ] },
{ "cell_type": "code", "execution_count": null, "id": "9dcd2eb3-07cd-4d74-ab30-76bde2fa818c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-14\n", "distilbert_model_name = f\"hf-pytorch-model-distilbert-{now}\"\n", "print(f\"Model name : {distilbert_model_name}\")\n", "%store distilbert_model_name" ] },
{ "cell_type": "code", "execution_count": null, "id": "001ce559-de47-42de-bdda-0bfe0e556117", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-15\n", "resp = sm_client.create_model(\n", "    ModelName=roberta_model_name,\n", "    ExecutionRoleArn=role,\n", "    Containers=[{\"Image\": inference_image_roberta, \"ModelDataUrl\": model_roberta_uri}],\n", ")\n", "print(f\"Created Model: {resp}\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "7cb18df2-8dbf-49e3-b2e5-b6a4fdce0282", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-16\n", "resp2 = sm_client.create_model(\n", "    ModelName=distilbert_model_name,\n", "    ExecutionRoleArn=role,\n", "    Containers=[{\"Image\": inference_image_distilbert, \"ModelDataUrl\": model_distilbert_uri}],\n", ")\n", "print(f\"Created Model: {resp2}\")" ] },
{ "cell_type": "markdown", "id": "afcb6014-e083-42e9-8f9c-5be6084df209", "metadata": {}, "source": [ "\n", "The next step is to create an endpoint config with the production and shadow variants. The ProductionVariants and the ShadowProductionVariants are of particular interest. We set the InitialVariantWeight in the ShadowProductionVariants to sample and send 50% of the production variant requests to the shadow variant, while the production variant receives 100% of the incoming traffic.\n", "\n", "The production variant runs two ml.m5.xlarge instances (4 vCPUs and 16 GiB of memory each), and the shadow variant runs one ml.m5.2xlarge instance (8 vCPUs and 32 GiB of memory)."
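] },
{ "cell_type": "markdown", "id": "f0e1d2c3-0009-4a5b-8c9d-0a1b2c3d4e5f", "metadata": {}, "source": [ "As noted in the overview, you can log the shadow variant's responses for offline comparison. One way to do this is to pass a data capture configuration to the create_endpoint_config call in the next cell; the sketch below is a hedged example, and the S3 destination and sampling percentage are illustrative assumptions." ] },
{ "cell_type": "code", "execution_count": null, "id": "f0e1d2c3-0010-4a5b-8c9d-0a1b2c3d4e5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-16b (added): hedged sketch -- capture request/response payloads to S3\n", "data_capture_config = {\n", "    \"EnableCapture\": True,\n", "    \"InitialSamplingPercentage\": 100,  # illustrative\n", "    \"DestinationS3Uri\": s3_path_join(\"s3://\", bucket, prefix + \"/datacapture\"),\n", "    \"CaptureOptions\": [{\"CaptureMode\": \"Input\"}, {\"CaptureMode\": \"Output\"}],\n", "}\n", "# pass DataCaptureConfig=data_capture_config to create_endpoint_config below\n", "# to persist the captured payloads for offline comparison"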
] }, { "cell_type": "code", "execution_count": null, "id": "a4fc02e7-db77-40e5-ab55-819c81f58353", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-17\n", "ep_config_name = f\"Shadow-EpConfig-{now}\"\n", "production_variant_name = \"production\"\n", "shadow_variant_name = \"shadow\"\n", "\n", "create_endpoint_config_response = sm_client.create_endpoint_config(\n", "    EndpointConfigName=ep_config_name,\n", "    ProductionVariants=[\n", "        {\n", "            \"VariantName\": production_variant_name,\n", "            \"ModelName\": roberta_model_name,\n", "            \"InstanceType\": \"ml.m5.xlarge\",\n", "            \"InitialInstanceCount\": 2,\n", "            \"InitialVariantWeight\": 1,\n", "        }\n", "    ],\n", "    ShadowProductionVariants=[\n", "        {\n", "            \"VariantName\": shadow_variant_name,\n", "            \"ModelName\": distilbert_model_name,\n", "            \"InstanceType\": \"ml.m5.2xlarge\",\n", "            \"InitialInstanceCount\": 1,\n", "            \"InitialVariantWeight\": 0.5,\n", "        }\n", "    ],\n", ")\n", "print(f\"Created EndpointConfig: {create_endpoint_config_response['EndpointConfigArn']}\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "12254db4-8eb1-4e64-abdb-19a69f0e9df2", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-18\n", "endpoint_name = f\"hf-prod-shadow-{now}\"\n", "create_endpoint_api_response = sm_client.create_endpoint(\n", "    EndpointName=endpoint_name,\n", "    EndpointConfigName=ep_config_name,\n", ")" ] },
{ "cell_type": "markdown", "id": "094188cc-359c-4cad-94c4-91631c66025d", "metadata": {}, "source": [ "Now, wait for the endpoint creation to complete. This should take 2-5 minutes, depending on your model artifact and serving container size." ] },
{ "cell_type": "code", "execution_count": null, "id": "d5eef737-dc09-452d-a0c4-3f1ca99a3f5e", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "%%time\n", "# cell-19\n", "utils.endpoint_creation_wait(endpoint_name)" ] },
{ "cell_type": "markdown", "id": "b1cfc729-ada4-4a71-86ea-8063a0e61e1c", "metadata": { "tags": [] }, "source": [ "### Invoke Endpoint with `boto3`\n", "\n", "After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint.\n", "\n", "For an overview of Amazon SageMaker, see [How It Works](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works.html).\n", "\n", "Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.\n", "\n", "Calls to `InvokeEndpoint` are authenticated by using AWS Signature Version 4. For information, see Authenticating Requests (AWS Signature Version 4) in the Amazon S3 API Reference.\n", "\n", "A customer's model containers must respond to requests within 60 seconds. The model itself can have a maximum processing time of 60 seconds before responding to invocations. If your model is going to take 50-60 seconds of processing time, the SDK socket timeout should be set to 70 seconds.\n", "\n", "More info on `invoke_endpoint` can be found on the [Boto3 `SageMakerRuntime` documentation page](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime.html#SageMakerRuntime.Client.invoke_endpoint)."
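] },
{ "cell_type": "markdown", "id": "f0e1d2c3-0011-4a5b-8c9d-0a1b2c3d4e5f", "metadata": {}, "source": [ "For endpoints with multiple production variants, you can also direct a request to a specific variant with the `TargetVariant` parameter. With a shadow variant, SageMaker copies the traffic for you, so the sketch below simply targets the `production` variant for illustration; the JSON payload is an assumption matching the zero-code HuggingFace container's expected format." ] },
{ "cell_type": "code", "execution_count": null, "id": "f0e1d2c3-0012-4a5b-8c9d-0a1b2c3d4e5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-19b (added): optional -- invoke a specific production variant\n", "response = sm_runtime.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    TargetVariant=production_variant_name,\n", "    Body=json.dumps({\"inputs\": [\"I love the new feature!\"]}),\n", "    ContentType=\"application/json\",\n", ")\n", "print(response[\"Body\"].read().decode())"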
] }, { "cell_type": "code", "execution_count": null, "id": "016d4c7f-a9a7-4bc0-8388-8516bcffc04d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-20\n", "test_data = pd.read_csv(\"../sample_payload/test_data.csv\", header=None, names=[\"inputs\"])\n", "json_data = {\"inputs\": test_data.iloc[:, 0].to_list()}\n", "print(json_data)\n", "test_data.head()" ] },
{ "cell_type": "code", "execution_count": null, "id": "72ae1ef5-59c3-48df-83c8-ea93981f9cbb", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# cell-21\n", "def invoke_endpoint(endpoint_name, should_raise_exp=False):\n", "    try:\n", "        for i in range(50):  # send the same payload 50 times for testing purposes\n", "            response = sm_runtime.invoke_endpoint(\n", "                EndpointName=endpoint_name,\n", "                Body=test_data.to_csv(header=True, index=False),\n", "                ContentType=\"text/csv\",\n", "            )\n", "            print(response[\"Body\"].read())\n", "    except Exception as e:\n", "        print(\"E\", end=\"\", flush=True)\n", "        if should_raise_exp:\n", "            raise e\n", "\n", "\n", "invoke_endpoint(endpoint_name)" ] },
{ "cell_type": "markdown", "id": "0cd42b84-868d-43cf-8a1c-98221f7e766c", "metadata": {}, "source": [ "Now that the endpoint is InService and has been invoked, the following cells collect CloudWatch metrics for the production and shadow variants so that we can compare them." ] },
{ "cell_type": "code", "execution_count": null, "id": "fdf54e89-0844-41ee-b630-d8b696e27a50", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-22\n", "%matplotlib inline\n", "\n", "cw = boto3.Session().client(\"cloudwatch\", region_name=region)\n", "\n", "\n", "def get_sagemaker_metrics(\n", "    endpoint_name,\n", "    variant_name,\n", "    metric_name,\n", "    statistic,\n", "    start_time,\n", "    end_time,\n", "):\n", "    dimensions = [\n", "        {\"Name\": \"EndpointName\", \"Value\": endpoint_name},\n", "        {\"Name\": \"VariantName\", \"Value\": variant_name},\n", "    ]\n", "    namespace = \"AWS/SageMaker\"\n", "    if metric_name in [\"CPUUtilization\", \"MemoryUtilization\", \"DiskUtilization\"]:\n", "        namespace = \"/aws/sagemaker/Endpoints\"\n", "\n", "    metrics = cw.get_metric_statistics(\n", "        Namespace=namespace,\n", "        MetricName=metric_name,\n", "        StartTime=start_time,\n", "        EndTime=end_time,\n", "        Period=1,\n", "        Statistics=[statistic],\n", "        Dimensions=dimensions,\n", "    )\n", "\n", "    if len(metrics[\"Datapoints\"]) == 0:\n", "        return\n", "    return (\n", "        pd.DataFrame(metrics[\"Datapoints\"])\n", "        .sort_values(\"Timestamp\")\n", "        .set_index(\"Timestamp\")\n", "        .drop([\"Unit\"], axis=1)\n", "        .rename(columns={statistic: variant_name})\n", "    )\n", "\n", "\n", "def plot_endpoint_invocation_metrics(\n", "    endpoint_name,\n", "    metric_name,\n", "    statistic,\n", "    start_time=None,\n", "):\n", "    # import locally to avoid shadowing the module-level `datetime` import\n", "    from datetime import datetime, timezone, timedelta\n", "\n", "    start_time = start_time or datetime.now(timezone.utc) - timedelta(minutes=10)\n", "    end_time = datetime.now(timezone.utc)\n", "    metrics_production = get_sagemaker_metrics(\n", "        endpoint_name,\n", "        production_variant_name,\n", "        metric_name,\n", "        statistic,\n", "        start_time,\n", "        end_time,\n", "    )\n", "    metrics_shadow = get_sagemaker_metrics(\n", "        endpoint_name,\n", "        shadow_variant_name,\n", "        metric_name,\n", "        statistic,\n", "        start_time,\n", "        end_time,\n", "    )\n", "    try:\n", "        metrics_variants = pd.merge(metrics_production, metrics_shadow, on=\"Timestamp\")\n", "        return metrics_variants.plot(y=[\"production\", \"shadow\"])\n", "    except Exception as e:\n", "        print(e)"
] }, { "cell_type": "markdown", "id": "a40851f3-c75d-43b5-a8f3-29db6b77a5d8", "metadata": {}, "source": [ "#### Metric Comparison\n", "Now that we have deployed both the production and shadow models, let us compare the invocation metrics. Here is a [list](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html) of the invocation metrics available for comparison. Let us start by comparing invocations between the production and shadow variants." ] },
{ "cell_type": "code", "execution_count": null, "id": "a5ba637b-e7cf-4214-b5a5-880ab11ea4c4", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-23\n", "invocations = plot_endpoint_invocation_metrics(endpoint_name, \"Invocations\", \"Sum\")\n", "invocations_per_instance = plot_endpoint_invocation_metrics(\n", "    endpoint_name, \"InvocationsPerInstance\", \"Sum\"\n", ")" ] },
{ "cell_type": "markdown", "id": "af0627d1-8cdb-4672-9911-0d17d25f9a62", "metadata": {}, "source": [ "The Invocations metric counts the number of invocations sent to the production variant. A fraction of these invocations, specified by the variant weight, is sent to the shadow variant. InvocationsPerInstance is calculated by dividing the total number of invocations by the number of instances in a variant. From the chart above, we can confirm that both the production and shadow variants are receiving invocation requests according to the weights specified in the endpoint config.\n", "\n", "Next, let us compare the model latency between the production and shadow variants. Model latency is the time taken by a model to respond, as viewed from SageMaker." ] },
{ "cell_type": "code", "execution_count": null, "id": "b73b0746-6c5a-4076-a187-0a723d69c317", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-24\n", "model_latency = plot_endpoint_invocation_metrics(endpoint_name, \"ModelLatency\", \"Average\")" ] },
{ "cell_type": "markdown", "id": "fe399ac8-7d4e-4513-b9b2-398c724eb47c", "metadata": {}, "source": [ "Using the chart above, we can observe how the model latency of the shadow variant compares with that of the production variant, without exposing end users to the shadow variant.\n", "\n", "We expect the overhead latency to be comparable across the production and shadow variants. Overhead latency is the interval measured from the time SageMaker receives the request until it returns a response to the client, minus the model latency." ] },
{ "cell_type": "code", "execution_count": null, "id": "4497e2bc-bf57-4c58-af76-ebae54e5a1ac", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-25\n", "overhead_latency = plot_endpoint_invocation_metrics(endpoint_name, \"OverheadLatency\", \"Average\")" ] },
{ "cell_type": "markdown", "id": "678c97e5-bc08-446a-93b0-61e653e267d1", "metadata": {}, "source": [ "Finally, let us review the 4xx, 5xx and total model errors returned by the model serving container." ] },
{ "cell_type": "code", "execution_count": null, "id": "c2130910-89fd-4819-b890-6cfcf2a3d800", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-26\n", "invocation_4xx_errors = plot_endpoint_invocation_metrics(endpoint_name, \"Invocation4XXErrors\", \"Sum\")\n", "invocation_5xx_errors = plot_endpoint_invocation_metrics(endpoint_name, \"Invocation5XXErrors\", \"Sum\")\n", "invocation_model_errors = plot_endpoint_invocation_metrics(\n", "    endpoint_name, \"InvocationModelErrors\", \"Sum\"\n", ")"
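] },
{ "cell_type": "markdown", "id": "f0e1d2c3-0013-4a5b-8c9d-0a1b2c3d4e5f", "metadata": {}, "source": [ "The helper above also resolves the `/aws/sagemaker/Endpoints` namespace for instance metrics, so you can compare resource utilization between the variants in the same way; a minimal sketch:" ] },
{ "cell_type": "code", "execution_count": null, "id": "f0e1d2c3-0014-4a5b-8c9d-0a1b2c3d4e5f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-26b (added): compare instance-level utilization between the variants\n", "cpu_utilization = plot_endpoint_invocation_metrics(endpoint_name, \"CPUUtilization\", \"Average\")\n", "memory_utilization = plot_endpoint_invocation_metrics(endpoint_name, \"MemoryUtilization\", \"Average\")"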
] }, { "cell_type": "markdown", "id": "b9aeb2cf-c9fd-4566-93a6-d953b0ed7fb3", "metadata": {}, "source": [ "We can consider promoting the shadow model if we do not see any differences in the 4xx and 5xx errors between the production and shadow variants.\n", "\n", "To promote the shadow model to production, create a new endpoint configuration with the current ShadowProductionVariant as the new ProductionVariant and remove the ShadowProductionVariant. This removes the current ProductionVariant and promotes the shadow variant to become the new production variant. As always, all SageMaker updates are orchestrated as blue/green deployments under the hood, and there is no loss of availability while performing the update. Optionally, you can leverage [Deployment Guardrails](https://docs.aws.amazon.com/sagemaker/latest/dg/deployment-guardrails.html) if you want to use all-at-once traffic shifting and auto rollbacks during your update." ] },
{ "cell_type": "code", "execution_count": null, "id": "87bb6e62-8f91-4f72-abb4-4264d31ccf3f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-27\n", "promote_ep_config_name = f\"PromoteShadow-EpConfig-{datetime.datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "\n", "create_endpoint_config_response = sm_client.create_endpoint_config(\n", "    EndpointConfigName=promote_ep_config_name,\n", "    ProductionVariants=[\n", "        {\n", "            \"VariantName\": shadow_variant_name,\n", "            \"ModelName\": distilbert_model_name,\n", "            \"InstanceType\": \"ml.m5.xlarge\",\n", "            \"InitialInstanceCount\": 2,\n", "            \"InitialVariantWeight\": 1.0,\n", "        }\n", "    ],\n", ")\n", "print(f\"Created EndpointConfig: {create_endpoint_config_response['EndpointConfigArn']}\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "456baea4-26f6-4f3a-a5ae-cefd0907720f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-28\n", "update_endpoint_api_response = sm_client.update_endpoint(\n", "    EndpointName=endpoint_name,\n", "    EndpointConfigName=promote_ep_config_name,\n", ")\n", "\n", "utils.endpoint_creation_wait(endpoint_name)\n", "\n", "sm_client.describe_endpoint(EndpointName=endpoint_name)" ] },
{ "cell_type": "markdown", "id": "8af14c2b-3217-42ae-9299-b81270e47883", "metadata": {}, "source": [ "If you do not want to create multiple endpoint configurations, and want SageMaker to manage the end-to-end workflow of creating, managing, and acting on the results of shadow tests, consider using the SageMaker Inference Experiment APIs or console experience. As stated earlier, they enable you to set up shadow tests for a predefined duration of time, monitor the progress through a live dashboard, clean up resources upon completion, and act on the results. To get started, navigate to the 'Shadow Tests' section of the SageMaker Inference console."
] }, { "cell_type": "markdown", "id": "619e81b0-9e65-472d-98aa-7a089cb873cd", "metadata": {}, "source": [ "### Cleanup\n", "If you do not plan to use this endpoint further, delete the endpoint and its endpoint configurations to avoid incurring additional charges. You can also remove the SageMaker models created above with `delete_model`.\n", "\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "83e89b59-f846-4021-8aef-972d000c0566", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell-29\n", "sm_client.delete_endpoint(EndpointName=endpoint_name)\n", "sm_client.delete_endpoint_config(EndpointConfigName=ep_config_name)\n", "sm_client.delete_endpoint_config(EndpointConfigName=promote_ep_config_name)" ] },
{ "cell_type": "code", "execution_count": null, "id": "59befeef-eadd-43fe-8661-a19948923c47", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 2.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 5 }