{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Improve high value research with Hugging Face and Amazon SageMaker asynchronous endpoints"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Table of Contents**\n",
"\n",
"* [Background](#background)\n",
"* [Architecture](#overview)\n",
"* [Section 1 - Setup](#setup) \n",
" * [Create Model](#createmodel)\n",
" * [Create EndpointConfig](#endpoint-config)\n",
" * [Create Endpoint](#create-endpoint)\n",
"* [Section 2 - Using the Endpoint](#endpoint) \n",
" * [Invoke Endpoint](#invoke-endpoint)\n",
"* [Section 3 - Clean up](#clean)\n",
"\n",
"### Background \n",
"Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously.\n",
"SageMaker currently offers 3 inference options for customers to deploy machine learning models:\n",
"1. Real-time option for low-latency workloads\n",
"2. Batch transform, an offline option to process inference requests on batches of data available upfront.\n",
"3. Asynchornous Inference\n",
"\n",
"Real-time inference is suited for workloads with payload sizes of less than 6 MB and require inference requests to be processed within 60 seconds. Batch transform is suitable for offline inference on batches of data. \n",
"\n",
"Asynchronous inference is a new inference option for near real-time inference needs. Requests can take up to 15 minutes to process and have payload sizes of up to 1 GB. Asynchronous inference is suitable for workloads that do not have sub-second latency requirements and have relaxed latency requirements. For example, you might need to process an inference on a large image of several MBs within 5 minutes. In addition, asynchronous inference endpoints let you control costs by scaling down endpoints instance count to zero when they are idle, so you only pay when your endpoints are processing requests. \n",
"\n",
"\n",
"### Architecture \n",
"Asynchronous inference endpoints have many similarities (and some key differences) compared to real-time endpoints. The process to create asynchronous endpoints is similar to real-time endpoints. You will need to create: a model, an endpoint configuration, and then an endpoint. However, there are specific configuration parameters specific to asynchronous inference endpoints which we will explore below. \n",
"\n",
"Invocation of asynchronous endpoints differ from real-time endpoints. Rather than pass request payload inline with the request, you upload the payload to Amazon S3 and pass an Amazon S3 URI as a part of the request. Upon receiving the request, SageMaker provides you with a token with the output location where the result will be placed once processed. Internally, SageMaker maintains a queue with these requests and processes them. During endpoint creation, you can optionally specify an Amazon SNS topic to receive success or error notifications. Once you receive the notification that your inference request has been successfully processed, you can access the result in the output Amazon S3 location. \n",
"\n",
"In this example: \n",
"\n",
"* We will be deploying a pretrained Huggeging Face model to [SageMaker hosting services](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html). This will automatically provision an asynchronous endpoint that host your model, from which you can get predictions in near real time.\n",
"* We demonstrate the new capabilities of an internal queue with user-defined concurrency and completion notifications. We configure autoscaling of instances to scale down to 0 when traffic subsides and scales back up as the request queue fills up. \n",
"* We also use [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) metrics to monitor the queue size, total processing time, and invocations processed. \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"1. Our pre-trained PEGASUS (https://huggingface.co/google/pegasus-large) ML model is first hosted on the scaling endpoint.\n",
"2. The user or some other mechanism uploads the article to be summarized to an input S3 bucket.\n",
"3. The user or some other mechanism invokes the endpoint and is immediately returned an output Amazon S3 location where the inference is written.\n",
"4. After the inference is complete, the result is saved to the output S3 bucket.\n",
"5. An Amazon [Simple Notification Service](https://aws.amazon.com/sns/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc) (SNS) notification is sent to the user notifying them of the completed success or failure.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## 1. Setup "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we ensure we have an updated version of Sagemaker, which includes the latest SageMaker features:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!python -m pip install --upgrade pip --quiet\n",
"!pip install -U awscli --quiet\n",
"!pip install --upgrade sagemaker --quiet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from time import gmtime, strftime\n",
"from sagemaker import image_uris\n",
"import sagemaker\n",
"import logging\n",
"import boto3\n",
"import json\n",
"import urllib\n",
"import boto3\n",
"import datetime\n",
"import time\n",
"import json\n",
"import os\n",
"import sys\n",
"import io"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"region = sagemaker.Session().boto_region_name\n",
"role = sagemaker.get_execution_role()\n",
"boto3.setup_default_session(region_name=region)\n",
"boto_session = boto3.Session(region_name=region)\n",
"sm_session = sagemaker.session.Session()\n",
"sm_client = boto_session.client('sagemaker')\n",
"sm_runtime = boto_session.client(\"sagemaker-runtime\")\n",
"s3_bucket = sm_session.default_bucket()\n",
"sns_client = boto3.client('sns')\n",
"print(f'Region = {region}')\n",
"print(f'Role = {role}')\n",
"s3_bucket = sm_session.default_bucket()\n",
"print(f\"We will use S3 bucket : '{s3_bucket}' for storing all resources related to this notebook\")\n",
"bucket_prefix = \"async-inference-demo\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Specify your IAM role. Go the AWS [IAM console](https://console.aws.amazon.com/iam/home) and add the following policies to your IAM Role:\n",
"* SageMakerFullAccessPolicy\n",
"* Amazon S3 access: Apply this to get and put objects in your Amazon S3 bucket. Replace `bucket_name` with the name of your Amazon S3 bucket: \n",
"\n",
"```json\n",
"{\n",
" \"Version\": \"2012-10-17\",\n",
" \"Statement\": [\n",
" {\n",
" \"Action\": [\n",
" \"s3:GetObject\",\n",
" \"s3:PutObject\",\n",
" \"s3:AbortMultipartUpload\",\n",
" \"s3:ListBucket\"\n",
" ],\n",
" \"Effect\": \"Allow\",\n",
" \"Resource\": \"arn:aws:s3:::/*\"\n",
" }\n",
" ]\n",
"}\n",
"```\n",
"\n",
"* (Optional) Amazon SNS access: Add `sns:Publish` on the topics you define. Apply this if you plan to use Amazon SNS to receive notifications.\n",
"\n",
"```json\n",
"{\n",
" \"Version\": \"2012-10-17\",\n",
" \"Statement\": [\n",
" {\n",
" \"Action\": [\n",
" \"sns:Publish\"\n",
" ],\n",
" \"Effect\": \"Allow\",\n",
" \"Resource\": \"arn:aws:sns:us-east-2:123456789012:MyTopic\"\n",
" }\n",
" ]\n",
"}\n",
"```\n",
"\n",
"* (Optional) KMS decrypt, encrypt if your Amazon S3 bucket is encrypted."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Specify your SageMaker IAM Role (`role`) and Amazon S3 bucket . You can optionally use a default SageMaker Session IAM Role and Amazon S3 bucket. Make sure the role you use has the necessary permissions for SageMaker, Amazon S3, and optionally Amazon SNS."
]
},
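{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can sanity-check which managed policies are attached to your execution role. The cell below is a minimal sketch using the IAM `list_attached_role_policies` API; it assumes the notebook role is allowed to call that API and that your permissions come from managed (rather than inline) policies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check (sketch): list the managed policies attached to the execution role.\n",
"# Assumes the role can call iam:ListAttachedRolePolicies; inline policies are not shown.\n",
"iam_client = boto3.client('iam')\n",
"role_name = role.split('/')[-1]  # the role name is the last segment of the role ARN\n",
"attached = iam_client.list_attached_role_policies(RoleName=role_name)\n",
"for policy in attached['AttachedPolicies']:\n",
"    print(policy['PolicyName'])"
]
},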
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.1 Create Model \n",
"Specify the location of the pre-trained model stored in Amazon S3. We will be using the [PEGASUS](https://huggingface.co/google/pegasus-large) model for the purpose of this blog. We will use the model as is from HuggingFace for simplicity purpose. But if you would like to fine tune the model based on custom dataset, you can do so by following [this](https://aws.amazon.com/blogs/machine-learning/fine-tune-and-host-hugging-face-bert-models-on-amazon-sagemaker/) blog. Please feel free to try out other sequence-to-sequence models available in the [HuggingFace Model Hub](https://huggingface.co/models?pipeline_tag=summarization&sort=downloads). The full Amazon S3 URI is stored in a string variable `MODEL_DATA_URL`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_s3_key = f\"{bucket_prefix}/summarization-model.tar.gz\"\n",
"with open(\"model/summarization-model.tar.gz\", \"rb\") as model_file:\n",
" boto_session.resource(\"s3\").Bucket(s3_bucket).Object(model_s3_key).upload_fileobj(model_file)\n",
"print(\"Uploaded the model to S3 bucket\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_url = f\"s3://{s3_bucket}/{model_s3_key}\"\n",
"print(model_url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Specify a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions. In this example, we retrieve the appropriate container image by specifying the right framework version and framework details. Here in this case we are downloading container image associated with Hugging Face framework. For further details on right container images to use for your use case please refer to this link https://github.com/awsdocs/amazon-sagemaker-developer-guide/blob/master/doc_source/ and look in to appropriate ecr folder pertaining to the region of your interest"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ecr_image = image_uris.retrieve(framework='huggingface', \n",
" region=region, \n",
" version='4.6.1', \n",
" image_scope='inference', \n",
" base_framework_version='pytorch1.7.1', \n",
" py_version='py36', \n",
" container_version='ubuntu18.04', \n",
" instance_type='ml.m5.xlarge')\n",
"print(f\"ECR Image:{ecr_image}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = f\"Pegasus-summarization-async-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a model by specifying the `ModelName`, the `ExecutionRoleARN` (the ARN of the IAM role that Amazon SageMaker can assume to access model artifacts/ docker images for deployment), and the `PrimaryContainer`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = sm_client.create_model(ModelName=model_name, \n",
" ExecutionRoleArn=role, \n",
" PrimaryContainer={'Image': ecr_image, \n",
" 'ModelDataUrl': model_url,\n",
" 'Environment':{\n",
" 'HF_MODEL_ID':'google/pegasus-large',\n",
" 'HF_TASK':'summarization',\n",
" 'SAGEMAKER_CONTAINER_LOG_LEVEL':'20',\n",
" 'SAGEMAKER_REGION':region\n",
" }\n",
" }\n",
" )\n",
"model_arn = response['ModelArn']\n",
"\n",
"print(f'Created Model: {model_arn}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"endpoint_config_name = f\"Pegasus-summarization-async-config-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}\"\n",
"\n",
"print(endpoint_config_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create Error and Success SNS topics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = sns_client.create_topic(Name=\"Async-ErrorTopic\")\n",
"error_topic= response['TopicArn']\n",
"print(error_topic)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = sns_client.create_topic(Name=\"Async-SuccessTopic\")\n",
"success_topic = response['TopicArn']\n",
"print(success_topic)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally Subscribe to an SNS topic"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Note: Replace with your email id\n",
"\n",
"# email_id = 'youremail@domain.com'\n",
"# email_sub_1 = sns_client.subscribe(\n",
"# TopicArn=success_topic,\n",
"# Protocol='email',\n",
"# Endpoint=email_id)\n",
"\n",
"# email_sub_2 = sns_client.subscribe(\n",
"# TopicArn=error_topic,\n",
"# Protocol='email',\n",
"# Endpoint=email_id)\n",
"\n",
"##Note: You will need to confirm by clicking on the email you recieve to complete the subscription"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.2 Create EndpointConfig "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have a model, create an endpoint configuration with CreateEndpointConfig. Amazon SageMaker hosting services uses this configuration to deploy models. In the configuration, you identify one or more models that were created using with CreateModel API, to deploy the resources that you want Amazon SageMaker to provision. Specify the AsyncInferenceConfig object and provide an output Amazon S3 location for OutputConfig. You can optionally specify Amazon SNS topics on which to send notifications about prediction results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = sm_client.create_endpoint_config(\n",
" EndpointConfigName=endpoint_config_name,\n",
" ProductionVariants=[\n",
" {\n",
" \"VariantName\": \"variant1\",\n",
" \"ModelName\": model_name,\n",
" \"InstanceType\": \"ml.m5.xlarge\",\n",
" \"InitialInstanceCount\": 1\n",
" }\n",
" ],\n",
" AsyncInferenceConfig={\n",
" \"OutputConfig\": {\n",
" \"S3OutputPath\": f\"s3://{s3_bucket}/{bucket_prefix}/output\",\n",
" # Optionally specify Amazon SNS topics\n",
" \"NotificationConfig\": {\n",
" \"SuccessTopic\": success_topic,\n",
" \"ErrorTopic\": error_topic,\n",
" }\n",
" },\n",
" \"ClientConfig\": {\n",
" \"MaxConcurrentInvocationsPerInstance\": 2\n",
" }\n",
" }\n",
")\n",
"endpoint_config_arn = response['EndpointConfigArn']\n",
"print(f\"Created EndpointConfig: {endpoint_config_arn}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.3 Create Asynchronous Endpoint "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Unlike real time hosted endpoints, asynchronous endpoints support scaling down instances to 0 by setting the minimum capacity to 0. With this feature, we can scale down to 0 instances when there is no traffic and pay only when the payloads arrive. Let's create an asynchronous endpoint to see it in action below -"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"endpoint_name = f\"async-summarization-inference-huggingface-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}\"\n",
"\n",
"response = sm_client.create_endpoint(EndpointName=endpoint_name, \n",
" EndpointConfigName=endpoint_config_name)\n",
"\n",
"print(f'Creating Endpoint: {endpoint_name}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"waiter = sm_client.get_waiter(\"endpoint_in_service\")\n",
"print(\"Waiting for endpoint to create...\")\n",
"waiter.wait(EndpointName=endpoint_name)\n",
"resp = sm_client.describe_endpoint(EndpointName=endpoint_name)\n",
"print(f\"Endpoint Status: {resp['EndpointStatus']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"--- \n",
"## 2. Using the Asynchronous Endpoint \n",
"\n",
"### 2.1 Uploading the Request Payload "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"input data is placed in the input location in .json format"
]
},
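{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you do not already have `data/input.json` locally, the cell below writes a minimal example payload. This is a sketch that assumes the Hugging Face inference toolkit's default request schema for the `summarization` task, i.e. a JSON object with an `inputs` field containing the text to summarize."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: create a minimal example payload if data/input.json is not present.\n",
"# Assumes the Hugging Face inference toolkit's default {'inputs': ...} request schema.\n",
"os.makedirs('data', exist_ok=True)\n",
"if not os.path.exists('data/input.json'):\n",
"    sample = {'inputs': 'Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly.'}\n",
"    with open('data/input.json', 'w') as f:\n",
"        json.dump(sample, f)\n",
"    print('Wrote a sample payload to data/input.json')"
]
},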
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_s3_key = f\"{bucket_prefix}/input/input.json\"\n",
"with open(\"data/input.json\", \"rb\") as input_file:\n",
" boto_session.resource(\"s3\").Bucket(s3_bucket).Object(input_s3_key).upload_fileobj(input_file)\n",
"print(\"Uploaded the input data to S3 bucket\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_s3_location = f\"s3://{s3_bucket}/{bucket_prefix}/input/input.json\"\n",
"print(input_s3_location)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 Invoke Endpoint "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get inferences from the model hosted at your asynchronous endpoint with InvokeEndpointAsync. Specify the location of your inference data in the InputLocation field and the name of your endpoint for EndpointName. The response payload contains the output Amazon S3 location where the result will be placed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = sm_runtime.invoke_endpoint_async(\n",
" EndpointName=endpoint_name, InputLocation=input_s3_location,ContentType=\"application/json\"\n",
")\n",
"print(response)\n",
"output_location = response['OutputLocation']\n",
"print(f\"OutputLocation: {output_location}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from botocore.exceptions import ClientError\n",
"\n",
"def get_output(output_location):\n",
" output_url = urllib.parse.urlparse(output_location)\n",
" bucket = output_url.netloc\n",
" key = output_url.path[1:]\n",
" while True:\n",
" try:\n",
" return sm_session.read_s3_file(bucket=output_url.netloc, key_prefix=output_url.path[1:])\n",
" except ClientError as e:\n",
" if e.response['Error']['Code'] == 'NoSuchKey':\n",
" print(\"waiting for output...\")\n",
" time.sleep(2)\n",
" continue\n",
" raise "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Printing the summarized output\n",
"\n",
"output = get_output(output_location)\n",
"print(f\"Sumarrized text is: {((output))}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Trigger 10 asynchronous requests on a single instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inferences = []\n",
"for i in range(1,10):\n",
" start = time.time()\n",
" response = sm_runtime.invoke_endpoint_async(\n",
" EndpointName=endpoint_name, \n",
" InputLocation=input_s3_location,\n",
" ContentType=\"application/json\" )\n",
" \n",
" output_location = response[\"OutputLocation\"]\n",
" inferences += [(input_s3_location, output_location)]\n",
"\n",
"print(\"\\Async invocations for Pytorch serving default: \\n\")\n",
"\n",
"for input_file, output_location in inferences:\n",
" output = get_output(output_location)\n",
" print(f\"Input File: {input_file}, Output location: {output_location}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable autoscaling "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"client = boto3.client('application-autoscaling') # Common class representing Application Auto Scaling for SageMaker amongst other services\n",
"\n",
"resource_id='endpoint/' + endpoint_name + '/variant/' + 'variant1' # This is the format in which application autoscaling references the endpoint\n",
"\n",
"response = client.register_scalable_target(\n",
" ServiceNamespace='sagemaker', \n",
" ResourceId=resource_id,\n",
" ScalableDimension='sagemaker:variant:DesiredInstanceCount',\n",
" MinCapacity=0, \n",
" MaxCapacity=5\n",
")\n",
"\n",
"response = client.put_scaling_policy(\n",
" PolicyName='Invocations-ScalingPolicy',\n",
" ServiceNamespace='sagemaker', # The namespace of the AWS service that provides the resource. \n",
" ResourceId=resource_id, # Endpoint name \n",
" ScalableDimension='sagemaker:variant:DesiredInstanceCount', # SageMaker supports only Instance Count\n",
" PolicyType='TargetTrackingScaling', # 'StepScaling'|'TargetTrackingScaling'\n",
" TargetTrackingScalingPolicyConfiguration={\n",
" 'TargetValue': 5.0, # The target value for the metric. \n",
" 'CustomizedMetricSpecification': {\n",
" 'MetricName': 'ApproximateBacklogSizePerInstance',\n",
" 'Namespace': 'AWS/SageMaker',\n",
" 'Dimensions': [\n",
" {'Name': 'EndpointName', 'Value': endpoint_name }\n",
" ],\n",
" 'Statistic': 'Average',\n",
" },\n",
" 'ScaleInCooldown': 120, # The cooldown period helps you prevent your Auto Scaling group from launching or terminating \n",
" # additional instances before the effects of previous activities are visible. \n",
" # You can configure the length of time based on your instance startup time or other application needs.\n",
" # ScaleInCooldown - The amount of time, in seconds, after a scale in activity completes before another scale in activity can start. \n",
" 'ScaleOutCooldown': 120 # ScaleOutCooldown - The amount of time, in seconds, after a scale out activity completes before another scale out activity can start.\n",
" \n",
" #'DisableScaleIn': True # Indicates whether scale in by the target tracking policy is disabled. \n",
" # If the value is true , scale in is disabled and the target tracking policy won't remove capacity from the scalable resource.\n",
" }\n",
")"
]
},
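{
"cell_type": "markdown",
"metadata": {},
"source": [
"To confirm that the scalable target and scaling policy were registered, you can describe them with Application Auto Scaling. The cell below is a quick verification sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: verify the scalable target and scaling policy registered above.\n",
"targets = client.describe_scalable_targets(\n",
"    ServiceNamespace='sagemaker',\n",
"    ResourceIds=[resource_id],\n",
"    ScalableDimension='sagemaker:variant:DesiredInstanceCount')\n",
"policies = client.describe_scaling_policies(\n",
"    ServiceNamespace='sagemaker',\n",
"    ResourceId=resource_id)\n",
"print(json.dumps(targets['ScalableTargets'], indent=2, default=str))\n",
"print([p['PolicyName'] for p in policies['ScalingPolicies']])"
]
},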
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Trigger 1000 asynchronous invocations with autoscaling from 1 to 5 and then scale down to 0 on completion\n",
"\n",
"Optionally unsubscribe or [delete the SNS topic](https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/sns.html#SNS.Client.delete_topic) to avoid flooding of notifications on 1000 invocations below"
]
},
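{
"cell_type": "markdown",
"metadata": {},
"source": [
"The commented cell below is a minimal sketch of that optional cleanup. Note that deleting a topic also removes its subscriptions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional (sketch): delete the SNS topics to avoid a flood of email notifications\n",
"# from the invocations below. Deleting a topic also removes its subscriptions.\n",
"\n",
"# sns_client.delete_topic(TopicArn=success_topic)\n",
"# sns_client.delete_topic(TopicArn=error_topic)"
]
},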
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inferences = []\n",
"for i in range(1,1000):\n",
" start = time.time()\n",
" response = sm_runtime.invoke_endpoint_async(\n",
" EndpointName=endpoint_name, \n",
" InputLocation=input_s3_location,\n",
" ContentType=\"application/json\" )\n",
" \n",
" output_location = response[\"OutputLocation\"]\n",
" inferences += [(input_s3_location, output_location)]\n",
"\n",
"print(\"\\Async invocations for Pytorch serving default: \\n\")\n",
"\n",
"for input_file, output_location in inferences:\n",
" output = get_output(output_location)\n",
" print(f\"Input File: {input_file}, Output location: {output_location}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Plot graphs from CloudWatch Metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import datetime\n",
"from datetime import datetime,timedelta\n",
"cw = boto3.Session().client(\"cloudwatch\")\n",
"\n",
"def get_sagemaker_metrics(endpoint_name,\n",
" endpoint_config_name,\n",
" variant_name,\n",
" metric_name,\n",
" statistic,\n",
" start_time,\n",
" end_time):\n",
" dimensions = [\n",
" {\n",
" \"Name\": \"EndpointName\",\n",
" \"Value\": endpoint_name\n",
" },\n",
" {\n",
" \"Name\": \"VariantName\",\n",
" \"Value\": variant_name\n",
" }\n",
" ]\n",
" if endpoint_config_name is not None:\n",
" dimensions.append({\n",
" \"Name\": \"EndpointConfigName\",\n",
" \"Value\": endpoint_config_name\n",
" })\n",
" metrics = cw.get_metric_statistics(\n",
" Namespace=\"AWS/SageMaker\",\n",
" MetricName=metric_name,\n",
" StartTime=start_time,\n",
" EndTime=end_time,\n",
" Period=120,\n",
" Statistics=[statistic],\n",
" Dimensions=dimensions\n",
" )\n",
" rename = endpoint_config_name if endpoint_config_name is not None else 'ALL'\n",
" return pd.DataFrame(metrics[\"Datapoints\"])\\\n",
" .sort_values(\"Timestamp\")\\\n",
" .set_index(\"Timestamp\")\\\n",
" .drop([\"Unit\"], axis=1)\\\n",
" .rename(columns={statistic: rename})\n",
"\n",
"def plot_endpoint_model_latency_metrics(endpoint_name, endpoint_config_name, variant_name, start_time=None):\n",
" start_time = start_time or datetime.now() - timedelta(minutes=120)\n",
" end_time = datetime.now()\n",
" metric_name = \"ModelLatency\"\n",
" statistic = \"Average\"\n",
" metrics_variants = get_sagemaker_metrics(\n",
" endpoint_name,\n",
" endpoint_config_name,\n",
" variant_name,\n",
" metric_name, \n",
" statistic,\n",
" start_time,\n",
" end_time)\n",
" metrics_variants.plot(title=f\"{metric_name}-{statistic}\")\n",
" return metrics_variants"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_latency_metrics = plot_endpoint_model_latency_metrics(endpoint_name, None, \"variant1\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, we plot other Cloud Watch Metrics associated with the Endpoint as shown below - "
]
},
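{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the sketch below queries the `ApproximateBacklogSizePerInstance` metric (the same metric the scaling policy above tracks) directly from CloudWatch. Backlog metrics for asynchronous endpoints are reported with only the `EndpointName` dimension, so we call `get_metric_statistics` directly rather than reuse the variant-level helper:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: plot the ApproximateBacklogSizePerInstance metric for the async endpoint.\n",
"# This metric is reported per endpoint, so only the EndpointName dimension is used.\n",
"end_time = datetime.now()\n",
"start_time = end_time - timedelta(minutes=120)\n",
"backlog = cw.get_metric_statistics(\n",
"    Namespace='AWS/SageMaker',\n",
"    MetricName='ApproximateBacklogSizePerInstance',\n",
"    StartTime=start_time,\n",
"    EndTime=end_time,\n",
"    Period=60,\n",
"    Statistics=['Average'],\n",
"    Dimensions=[{'Name': 'EndpointName', 'Value': endpoint_name}]\n",
")\n",
"if backlog['Datapoints']:\n",
"    df = pd.DataFrame(backlog['Datapoints']).sort_values('Timestamp').set_index('Timestamp')\n",
"    df['Average'].plot(title='ApproximateBacklogSizePerInstance-Average')\n",
"else:\n",
"    print('No datapoints yet - wait a few minutes and re-run this cell.')"
]
},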
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"The instances autoscale bumps up to 5(Maxiumum configured) when the "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The instances autoscale down to 0 once the queue size goes down to 0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Summary & Clean up "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To Summarize, In this notebook we learned how to use the SageMaker Asynchronous inference capability with pre-trained Hugging Face models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you enabled auto-scaling for your endpoint, ensure you deregister the endpoint as a scalable target before deleting the endpoint. To do this, run the following:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# response = sm_client.deregister_scalable_target(\n",
"# ServiceNamespace='sagemaker',\n",
"# ResourceId='resource_id',\n",
"# ScalableDimension='sagemaker:variant:DesiredInstanceCount'\n",
"# )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Remember to delete your endpoint after use as you will be charged for the instances used in this Demo. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sm_client.delete_endpoint(EndpointName=endpoint_name)"
]
},
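{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, also remove the endpoint configuration and the model created earlier. A small cleanup sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional cleanup (sketch): remove the endpoint configuration and model created above.\n",
"sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)\n",
"sm_client.delete_model(ModelName=model_name)"
]
},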
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may also want to delete any other resources you might have created such as SNS topics, S3 objects, etc."
]
}
],
"metadata": {
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (Data Science)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/datascience-1.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}