{ "cells": [ { "cell_type": "markdown", "id": "ca966d6d", "metadata": {}, "source": [ "# Right-sizing your Amazon SageMaker Endpoints" ] }, { "cell_type": "markdown", "id": "8fffec52", "metadata": {}, "source": [ "__Disclaimer__:\n", "* To run this notebook, it is recommended that you use an ml.m5.4xlarge or larger instance type to avoid running into CPU limit errors during load testing. " ] }, { "cell_type": "markdown", "id": "4b73de31", "metadata": {}, "source": [ "## Introduction" ] }, { "cell_type": "markdown", "id": "15ae2d2a", "metadata": {}, "source": [ "This notebook is intended to guide you through the process of choosing the correct instance type for model serving depending on the following criteria:\n", "1. Number of requests per second\n", "2. Endpoint costs\n", "\n", "By running part of this notebook, you will run load tests and identify the optimal instance type. These load tests are executed with the [Locust](https://locust.io/) load testing framework. Locust allows you to run load tests in parallel, until the instance fails, for each instance type that you want to compare, thus providing a comprehensive view of performance vs. cost for each instance type. \n", "\n", "The load testing to identify the best-fit instance can be carried out via two different approaches. The first approach runs a set of automatic tests to build a performance map of all candidate instance types you specify. The second approach lets you take a more hands-on approach and iterate manually.\n", "\n", "This notebook demonstrates the endpoint instance type optimization for the Wide ResNet-50 Image Classification model from the AWS Marketplace. Note that you may, however, tweak this notebook and use it to test other ML models too.\n", "\n", "**Note** - The cost of running this notebook depends on the model you use and its corresponding acceptable instance types. Although the ML model configured in this notebook does not have any software costs, other ML models from AWS Marketplace may incur additional software costs. This notebook will cost approximately 30-40 USD to deploy and test the endpoints.\n", "\n", "\n", "### Table of Contents\n", "\n", "0. [Prerequisites](#prereq)\n", "\n", "1. [Step 1: Setting up the model and endpoint](#setup-model-endpoint)\n", "\n", "    1. [Step 1.1: Set up environment](#env-setup)\n", "    2. [Step 1.2: Identify and prepare data for load testing](#data-setup)\n", "    3. [Step 1.3: Subscribe to the PyTorch ResNet50 ML Model from AWS Marketplace](#ml-setup)\n", "    4. [Step 1.4: Set up Lambda function and API Gateway](#infra-setup)\n", "    \n", "2. [Step 2: Load Testing](#load-testing)\n", "\n", "    1. [Step 2.1: Comprehensive testing](#comprehensive-testing)\n", "        1. [Step 2.1.1: Deploy endpoints](#deploy-ep)\n", "        2. [Step 2.1.2: Test endpoints with a sample payload](#check-eps)\n", "        3. [Step 2.1.3: Execute load tests](#run-load-tests)\n", "        4. [Step 2.1.4: Performance vs price plot](#plot)\n", "        5. [Step 2.1.5: Finalize Configuration](#recommendations)\n", "    \n", "    2. [Step 2.2: Semi-automatic testing](#semi-aut-testing)\n", "        1. [Step 2.2.1: First iteration](#first-iteration)\n", "        2. [Step 2.2.2: Second iteration](#second-iteration)\n", "    \n", "3. 
[Step 3: Clean up Resources](#clean-up)" ] }, { "cell_type": "markdown", "id": "384c67ca", "metadata": {}, "source": [ "### Prerequisites\n", "" ] }, { "cell_type": "code", "execution_count": null, "id": "05f6956a", "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Install Locust library for load testing\n", "import sys\n", "!{sys.executable} -m pip install locust==1.2.3" ] }, { "cell_type": "markdown", "id": "67870e0d", "metadata": {}, "source": [ "1. If you'd like to perform load tests for your custom algorithm and model, you should already have a model registered on Amazon SageMaker, and have the model ARN ready.\n", "2. You will need **sufficient account limits** to create all endpoints you'd like to test. This notebook tests GPU instances, including ml.p3.2xlarge and ml.g4dn.xlarge instances. If you do not have the sufficient instance count limits, you can request account quotas updates [here](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-limit/), or edit the instance types accordingly in the first cell under the [\"Comprehensive testing\"](#comprehensive-testing) section. To execute this notebook as-is, you will need the following limits - \n", "\n", " - ml.c5.xlarge - 1\n", " - ml.c5.2xlarge - 1\n", " - ml.m5.large - 1\n", " - ml.m5.xlarge - 1\n", " - ml.p2.xlarge - 1\n", " - ml.p3.2xlarge - 1\n", " - ml.g4dn.xlarge -3" ] }, { "cell_type": "code", "execution_count": null, "id": "c8498f31", "metadata": {}, "outputs": [], "source": [ "from sagemaker import get_execution_role\n", "\n", "print(get_execution_role())" ] }, { "cell_type": "markdown", "id": "a1da8e45", "metadata": {}, "source": [ "3. **Amazon SageMaker execution role with _Administrator_ permissions on the account**, or the following AWS managed policies:\n", " - AmazonSageMakerFullAccess (this policy is attached by default to notebook execution roles). \n", " - IAMFullAccess\n", " - AmazonAPIGatewayAdministrator\n", " - AWSPriceListServiceFullAccess\n", " - AWSLambda_FullAccess\n", " \n", "If you are unsure, go to the IAM console [here](https://console.aws.amazon.com/iam/home?region=us-east-1#/roles), search for the notebook execution role (the role ARN is printed in the cell above), and attach the above policies using the 'Attach policies' button. \n", "\n", "" ] }, { "cell_type": "markdown", "id": "4eaa39ad", "metadata": {}, "source": [ "## Step 1: Setting up the model and endpoint\n", "\n", " " ] }, { "cell_type": "markdown", "id": "c6ae0b44", "metadata": {}, "source": [ "In this section, you will load the model from the AWS Marketplace and host it to real-time endpoints on Amazon SageMaker based on suggested instance types. You will also set up an Amazon API Gateway and AWS Lambda function to trigger the endpoints for load testing. If you are testing on a custom model, skip to the [Infrastructure Setup](#infra-setup) section to set up the API Gateway." ] }, { "cell_type": "markdown", "id": "b3bdd2b2", "metadata": {}, "source": [ "### Step 1.1: Set up environment\n", "" ] }, { "cell_type": "markdown", "id": "2bc30f91", "metadata": {}, "source": [ "In this section, you will import the necessary libraries and declare variables." 
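, "\n", "\n", "Optionally, related to the account limits listed in the prerequisites, you can check your current endpoint-usage quotas programmatically before creating any endpoints. The sketch below uses the Service Quotas API; the substring used to filter quota names is an assumption, so adjust it as needed:\n", "\n", "```python\n", "import boto3\n", "\n", "# List applied SageMaker quotas and keep the endpoint-usage ones\n", "sq = boto3.client('service-quotas')\n", "for page in sq.get_paginator('list_service_quotas').paginate(ServiceCode='sagemaker'):\n", "    for quota in page['Quotas']:\n", "        if 'endpoint usage' in quota['QuotaName']:\n", "            print(quota['QuotaName'], quota['Value'])\n", "```"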
] }, { "cell_type": "code", "execution_count": null, "id": "f01bb408", "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "# Import all necessary libraries\n", "import os\n", "import re\n", "import json\n", "import time\n", "import boto3\n", "import random\n", "import requests\n", "import base64\n", "import sagemaker\n", "import pandas as pd\n", "\n", "from pprint import pprint\n", "from IPython.display import Image \n", "from sagemaker import ModelPackage\n", "from sagemaker import get_execution_role\n", "\n", "# Import from helper functions\n", "from api_helper import create_infra, delete_infra\n", "from sagemaker_helper import deploy_endpoints, clean_up_endpoints\n", "from load_test_helper import run_load_tests, generate_plots, get_min_max_instances, generate_latency_plot\n", "\n", "# Define the boto3 clients/resources that will be used later on\n", "sm_client = boto3.client('sagemaker')\n", "\n", "# Define session variables\n", "sagemaker_session = sagemaker.Session()\n", "region = sagemaker_session.boto_region_name\n", "account_id = sagemaker_session.account_id()\n", "role = get_execution_role()" ] }, { "cell_type": "markdown", "id": "fb82dd18", "metadata": {}, "source": [ "### Step 1.2: Identify and prepare data for load testing\n", "" ] }, { "cell_type": "markdown", "id": "249f0057", "metadata": {}, "source": [ "#### Dataset\n", "\n", "The **PyTorch ResNet50** ML model used by this notebook has been trained on the [ImageNet](http://www.image-net.org/about) dataset, which consists of around 100,000 image classes, with 1000 images each. To run the load testing on the ML model, you will use a sample image included with this repository. In the next cell, you will load and view the image." ] }, { "cell_type": "code", "execution_count": null, "id": "90f53cc6", "metadata": {}, "outputs": [], "source": [ "# Specify the sample image used as the inference payload\n", "input_file = \"plants.jpg\"\n", "# Load and display image\n", "pil_img = Image(filename=input_file)\n", "display(pil_img)" ] }, { "cell_type": "markdown", "id": "32d3955e", "metadata": {}, "source": [ "### Step 1.3: Subscribe to the PyTorch ResNet50 ML Model from AWS Marketplace\n", "" ] }, { "cell_type": "markdown", "id": "aface749", "metadata": {}, "source": [ "Next, you will subscribe to the model package from the AWS Marketplace and create a dynamic model which you will configure for the load testing.\n", "\n", "In this notebook, you will use the **PyTorch ResNet50** models available in the AWS Marketplace [here](https://aws.amazon.com/marketplace/ai/configuration?productId=f2590bef-2833-45ce-951b-55a28349e14f&ref=sa_campaign_ds_vj). Since this is a demonstration notebook, it covers both a CPU and a GPU ML model.\n", "\n", "\n", "1. Open the following ML models in separate tabs. Note that both ML models have application/x-image as the MIME type.\n", "    * [Wide ResNet 50 - CPU](https://aws.amazon.com/marketplace/pp/prodview-dnc7grtzdiihs) \n", "    * [Wide ResNet 50 - GPU](https://aws.amazon.com/marketplace/pp/prodview-v2r2tm2tepa3o) \n", "2. For both ML models, follow this process:\n", "    2. Read the **Highlights** section and then the **product overview** section of the listing.\n", "    3. View **usage information** and then **additional resources**.\n", "    4. Note the supported instance types.\n", "    5. Next, click on **Continue to subscribe**.\n", "    6. Review the **End user license agreement**, **support terms**, as well as **pricing information**.\n", "    7. 
**\"Accept Offer\"** button needs to be clicked if your organization agrees with EULA, pricing information as well as support terms.\n", " 8. Choose **Continue to Configuration**.\n", " 9. Copy the **Product Arn** and specify the same in the following cell" ] }, { "cell_type": "code", "execution_count": null, "id": "09d8619e", "metadata": {}, "outputs": [], "source": [ "# Note down the model ARN from AWS Marketplace\n", "# The sample notebook is developed in the us-east-1 region. If you are working in a different region, make sure to get the right product Arn.\n", "model_arn_cpu = \"arn:aws:sagemaker:us-east-1:865070037744:model-package/pytorch-ic-wide-resnet50-2-cpu-6a1d8d24bbc97d8de3e39de7e74b3293\"\n", "model_arn_gpu = \"arn:aws:sagemaker:us-east-1:865070037744:model-package/pytorch-ic-wide-resnet50-2-gpu-445fe358cb7a3a0d92861174cf00c113\"" ] }, { "cell_type": "code", "execution_count": null, "id": "2e98bcdb", "metadata": {}, "outputs": [], "source": [ "# Get content type for invoking endpoint\n", "content_type = 'application/x-image'" ] }, { "cell_type": "markdown", "id": "03235b4b", "metadata": {}, "source": [ "Next, you will create dynamic ML models from the model packages specified. " ] }, { "cell_type": "code", "execution_count": null, "id": "ce5430e9", "metadata": {}, "outputs": [], "source": [ "# Create CPU and GPU models on Amazon SageMaker from the given model ARNs\n", "cpu_model = ModelPackage(\n", " role=role,\n", " model_package_arn=model_arn_cpu,\n", " sagemaker_session=sagemaker_session\n", ")\n", "\n", "gpu_model = ModelPackage(\n", " role=role,\n", " model_package_arn=model_arn_gpu,\n", " sagemaker_session=sagemaker_session\n", ")" ] }, { "cell_type": "markdown", "id": "e2e94ec3", "metadata": {}, "source": [ "At this point, you have created two Amazon SageMaker models, for the CPU and GPU versions. The Amazon SageMaker endpoints will not be created yet, since deploying the endpoints is part of the process that will be automated based on your choice of instance types. The next step for you is to configure the threshold limits." ] }, { "cell_type": "code", "execution_count": null, "id": "a677062b", "metadata": {}, "outputs": [], "source": [ "# Specify the minimum and maximum number of requests for your model here\n", "min_requests_per_second = 30\n", "max_requests_per_second = 70" ] }, { "cell_type": "markdown", "id": "d4a3d1ed", "metadata": {}, "source": [ "### Step 1.4: Set up Lambda Function and an API Gateway\n", "" ] }, { "cell_type": "markdown", "id": "34797e2d", "metadata": {}, "source": [ "**Prerequisite** - If you are using your own model, ensure that it is loaded on Amazon SageMaker to be hosted on an endpoint before proceeding with this step. \n", "\n", "To test the functioning of the model endpoint in a production environment, you will implement an API Gateway and an AWS Lambda function to direct the requests to the model endpoint. The infrastructure approach followed is the same as described in the blog [Call an Amazon SageMaker model endpoint using Amazon API Gateway and AWS Lambda](https://aws.amazon.com/blogs/machine-learning/call-an-amazon-sagemaker-model-endpoint-using-amazon-api-gateway-and-aws-lambda/).\n", "\n", "The blog suggests the following architecture:\n", "1. An Amazon API Gateway that receives the prediction requests\n", "2. 
An AWS Lambda function that sits behind the API Gateway and calls the endpoint for requests\n", "\n", "To successfully set up the suggested infrastructure, you will also need IAM roles and policies that allow these resources to call the required AWS services. To make it easy for you to set up the infrastructure suggested for the load test, you can use the `create_infra()` helper function. The function outputs the API Gateway URL that will be used later on to request the predictions via POST requests.\n", "\n", "__Note:__\n", "If you are using your custom model, edit the `lambda_index.py` file below to use the right payload (input data and content type)." ] }, { "cell_type": "code", "execution_count": null, "id": "a3e5b02b", "metadata": {}, "outputs": [], "source": [ "%%writefile lambda_index.py\n", "\n", "import os\n", "import boto3\n", "import json\n", "import base64\n", "from botocore.config import Config\n", "from botocore.exceptions import ClientError\n", "\n", "\n", "# SageMaker runtime is used to invoke the endpoint\n", "runtime = boto3.client(\n", "    'runtime.sagemaker',\n", "    config = Config(\n", "        connect_timeout = 30,\n", "        read_timeout = 60,\n", "        retries={'max_attempts': 20}\n", "    )\n", ")\n", "\n", "\n", "def lambda_handler(event, context):\n", "    \"\"\"\n", "    Invokes a SageMaker endpoint and returns the response\n", "    \n", "    This function invokes a given SageMaker endpoint and returns\n", "    the endpoint response. SageMaker error codes are sent back for\n", "    mapping by API gateway if the invocation results in an error.\n", "    \n", "    Inputs:\n", "        data - imageb64 object to be classified\n", "        endpoint - name of the endpoint to invoke\n", "    \n", "    Output:\n", "        predicted response/error code.\n", "    \"\"\"\n", "    \n", "    # Load data sent through API gateway\n", "    data = json.loads(json.dumps(event))\n", "    payload = data['data']\n", "    endpoint_name = data['endpoint']\n", "    image = base64.b64decode(payload)\n", "    try:\n", "        # Invoke endpoint\n", "        response = runtime.invoke_endpoint(\n", "            EndpointName=endpoint_name,\n", "            ContentType=\"application/x-image\",\n", "            Accept=\"application/json\",\n", "            Body=image)\n", "        \n", "        # Read success/failure response\n", "        response_code = response['ResponseMetadata']['HTTPStatusCode']\n", "        \n", "        return response['Body'].read().decode('utf-8')\n", "    \n", "    except ClientError as e:\n", "        \n", "        # Return failure code\n", "        if e.response['Error']['Code'] == 'ModelError':\n", "            response = json.loads(e.response['OriginalMessage'])\n", "            return {'statusCode': response[\"code\"], 'body': response[\"type\"]}" ] }, { "cell_type": "code", "execution_count": null, "id": "fed8633e", "metadata": {}, "outputs": [], "source": [ "%%time\n", "# This function takes a few seconds to deploy the full architecture\n", "# including a lambda function and an API gateway\n", "# For easy identification of resources, specify a prefix for the application\n", "project_name = \"right-size-endpoints\" \n", "api_url = create_infra(project_name, account_id, region)\n", "\n", "# note the REST API id for clean up\n", "rest_api_id = api_url.replace('https://', '').split('.')[0]" ] }, { "cell_type": "markdown", "id": "8b0636fc", "metadata": {}, "source": [ "## Step 2: Load testing\n", "" ] }, { "cell_type": "markdown", "id": "62c0fbcb", "metadata": {}, "source": [ "In the second section, deploy the model in the 
form of multiple Amazon SageMaker Endpoints to perform load testing. \n", "\n", "There are two ways of executing this notebook - \n", "- **Comprehensive testing**: In this section, you will deploy an endpoint for each of the model's supported instance types. You will then run the load tests against each instance type until its endpoint fails. In this notebook, we \"fail\" an endpoint when the instance cannot respond to 1% of the incoming requests. Finally, you will plot performance in maximum requests per second against the price (assuming single-instance endpoints), so that you can make an informed decision on choosing the right instance. \n", "\n", "- **Semi-automatic testing**: If you are familiar with Amazon SageMaker instances and have a shortlisted set of instance types and number of instances that you would like to test, proceed to the [Semi-Automatic Testing](#semi-aut-testing) section, skipping the comprehensive testing.\n", "\n", "*Note: If you would like to increase or decrease the error rate, you can edit it on line 15 of `locust_file.py`*" ] }, { "cell_type": "markdown", "id": "d1c7b539", "metadata": {}, "source": [ "### Step 2.1: Comprehensive testing\n", "" ] }, { "cell_type": "markdown", "id": "7d339591", "metadata": {}, "source": [ "This section creates one endpoint for each of the supported instance types. From the model configuration page [here](https://aws.amazon.com/marketplace/pp/prodview-dnc7grtzdiihs) under the 'Pricing' tab, you can obtain the supported instance types. They are listed below:\n", "\n", "__CPU__\n", "- ml.c5.xlarge\n", "- ml.c5.2xlarge\n", "- ml.m5.large\n", "- ml.m5.xlarge\n", "\n", "__GPU__\n", "- ml.p2.xlarge\n", "- ml.p3.2xlarge\n", "- ml.g4dn.xlarge\n", "\n", "Next, you will create multiple endpoints, one for each instance type, and keep the instance count at 1 (as specified in the following configuration). " ] }, { "cell_type": "markdown", "id": "b9438b5e", "metadata": {}, "source": [ "### Step 2.1.1: Deploy endpoints\n", "" ] }, { "cell_type": "code", "execution_count": null, "id": "806750c5", "metadata": {}, "outputs": [], "source": [ "# Update with the supported instances if running for a different model\n", "\n", "endpoints_dict = [ \n", "    {\"instance_type\": \"ml.c5.xlarge\", \"instance_count\": 1},\n", "    {\"instance_type\": \"ml.c5.2xlarge\", \"instance_count\": 1},\n", "    {\"instance_type\": \"ml.m5.large\", \"instance_count\": 1},\n", "    {\"instance_type\": \"ml.m5.xlarge\", \"instance_count\": 1},\n", "    {\"instance_type\": \"ml.p2.xlarge\", \"instance_count\": 1},\n", "    {\"instance_type\": \"ml.p3.2xlarge\", \"instance_count\": 1},\n", "    {\"instance_type\": \"ml.g4dn.xlarge\", \"instance_count\": 1},\n", "]" ] }, { "cell_type": "code", "execution_count": null, "id": "24334563", "metadata": {}, "outputs": [], "source": [ "%%time\n", "# The endpoints are created synchronously, so this code will take\n", "# a few minutes to execute\n", "endpoints = deploy_endpoints(endpoints_dict, cpu_model, gpu_model)" ] }, { "cell_type": "markdown", "id": "489eb36f", "metadata": {}, "source": [ "### Step 2.1.2: Test endpoints with a sample payload\n", "" ] }, { "cell_type": "markdown", "id": "d0adabcc", "metadata": {}, "source": [ "Now that there are multiple endpoints, let's test all of them to make sure they are running without any errors. An HTTP 200 response for the requests ensures that the model is working as expected. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "a81aa9a5", "metadata": {}, "outputs": [], "source": [ "# Create an Image object by reading the image\n", "with open(input_file, 'rb') as f:\n", "    image = f.read()" ] }, { "cell_type": "code", "execution_count": null, "id": "bb48507b", "metadata": {}, "outputs": [], "source": [ "# Run the lambda function once for each endpoint\n", "# and check for HTTP 200 response\n", "for ep in endpoints:\n", "    # Create a payload for Lambda - input variables are the image and endpoint name\n", "    # The image is base64-encoded so that the Lambda function can b64decode it\n", "    payload = {\n", "        'data': base64.b64encode(image).decode('utf-8'),\n", "        'endpoint': ep}\n", "\n", "    response = requests.post(api_url, json=payload)\n", "    print(f\"Endpoint {ep}: {response}\")" ] }, { "cell_type": "markdown", "id": "20051cff", "metadata": {}, "source": [ "### Step 2.1.3: Execute load tests\n", "" ] }, { "cell_type": "markdown", "id": "74f4a33c", "metadata": {}, "source": [ "Now, to run the load tests, you will execute the `run_load_tests` function that calls a short bash script that runs Locust.\n", "\n", "To run the tests, you will just need to pass the list of endpoints to test to the function `run_load_tests()`. The load test generates multiple comma-separated files (.csv) with the test results and stores them in a folder with the name `results-YYYY-MM-DD-HH-MM-SS`. Inside this folder you can find the individual test results for each endpoint instance." ] }, { "cell_type": "code", "execution_count": null, "id": "7087611e", "metadata": {}, "outputs": [], "source": [ "# View the run_locust file\n", "!cat run_locust.sh" ] }, { "cell_type": "markdown", "id": "8944f101", "metadata": {}, "source": [ "__Run_locust.sh breakdown__\n", "\n", "The `run_locust.sh` file takes the endpoint name and the created API Gateway URL as inputs and invokes the load test. \n", "\n", "The `env` prefix specifies environment variables that are sent to the locust file. Since we're testing multiple endpoints, send the endpoint name as a variable.\n", "\n", "`locust -f locust_file.py` runs the load tests. Refer to the file for details on how the load testing is conducted.\n", "\n", "`--headless` runs the test without a web UI. Refer [here](https://docs.locust.io/en/stable/running-locust-without-web-ui.html) for documentation.\n", "\n", "`--csv` specifies a prefix to store the test results. Here, specify the endpoint name for easy identification.\n", "\n", "`-u` specifies the number of users to spawn.\n", "\n", "`-r` specifies the spawn rate.\n", "\n", "`--host` specifies the endpoint URL to test.\n", "\n", "\n", "For all available configuration options, refer to the [Locust documentation](https://docs.locust.io/en/stable/configuration.html)." ] }, { "cell_type": "code", "execution_count": null, "id": "52ea8aa4", "metadata": {}, "outputs": [], "source": [ "%%time\n", "# This cell runs the load tests on all endpoints. Grab a cup of coffee and wait for it to complete!\n", "\n", "# NOTE: If you are using a smaller instance than a ml.m5.4xlarge, this cell might take from\n", "# minutes to hours to run. On the ml.m5.xlarge instance for the given endpoints, this took \n", "# about 30 minutes.\n", "\n", "# You can also view the invocation metrics on the CloudWatch console\n", "results_folder = run_load_tests(api_url, endpoints)" ] }, { "cell_type": "markdown", "id": "4e5bff3d", "metadata": {}, "source": [ "**Background on Locust load tests**\n", "\n", "The `run_load_tests` function performs load tests and organizes the resulting files into a single folder. 
Locust saves up to four files for each load test with the following suffixes, similar to the directory structure below.\n", "\n", "```\n", "├── results-\n", "│   ├── \n", "│   │   ├── _failures.csv\n", "│   │   ├── _stats.csv\n", "│   │   └── _stats_history.csv\n", "│   │   └── _exceptions.csv (optional)\n", "│   ├── ...\n", "```\n", "The first two files contain the failures and stats for the whole test run, with a row for every stats entry and an aggregated row. The stats history will get new rows with the current (10-second sliding window) stats appended during the whole test run. You can find more information on the files and how to increase/decrease the interval of writing stats in the [documentation](https://docs.locust.io/en/stable/retrieving-stats.html).\n", "\n", "In the next section, plot the maximum number of requests per second handled by each endpoint. You are also encouraged to explore the generated files to check for exceptions, failures, the number of users/requests generated, etc." ] }, { "cell_type": "markdown", "id": "3c1d6db7", "metadata": {}, "source": [ "### Step 2.1.4: Performance vs price plot\n", "" ] }, { "cell_type": "markdown", "id": "6adbd517", "metadata": {}, "source": [ "AWS constantly updates instance prices to provide the best value to customers. The Price List Service API (AKA the Query API) and the AWS Price List API (AKA the Bulk API) enable you to query for the prices of AWS services using either JSON (with the Price List Service API) or HTML (with the AWS Price List API). To query the instance prices in real time, your Amazon SageMaker execution role must have permissions to access the service (refer to the [Prerequisites](#prereq) section)." ] }, { "cell_type": "code", "execution_count": null, "id": "ed9f561d", "metadata": {}, "outputs": [], "source": [ "# Plot your results in a single chart to compare instance types\n", "# Set sep_cpu_gpu=True if you want the results plotted in two separate\n", "# graphs for the CPU and GPU instances\n", "results = generate_plots(endpoints, endpoints_dict, results_folder, sep_cpu_gpu=False)" ] }, { "cell_type": "markdown", "id": "ab6173da", "metadata": {}, "source": [ "**Load test analysis**\n", "1. The GPU instances perform significantly better for image classification problems, with the slowest GPU handling ~19 requests per second at USD 1.125 per hour (the ml.p2.xlarge instance), versus the most expensive CPU instance providing around 4 requests per second at USD 0.40 per hour.\n", "\n", "2. Within the GPU instances, the `ml.g4dn.xlarge` instance is the industry's most cost-effective GPU instance for deploying machine learning models that are graphics-intensive, such as image classification and object detection. At USD 0.70 an hour, it performs almost twice as well as the next-cheapest option, the `ml.p2.xlarge`. \n", "\n", "3. Depending on your requests per second (10 rps or lower versus 30 rps or more), you can choose the CPU or the GPU option. If you go the GPU route, the G4 instance clearly emerges as the winner. In the next section, we will programmatically obtain the recommended instance type and autoscaling configuration." ] }, { "cell_type": "markdown", "id": "087ed475", "metadata": {}, "source": [ "__Optional: View latency metrics__" ] }, { "cell_type": "markdown", "id": "77ebb437", "metadata": {}, "source": [ "Optionally, if latency is an important consideration for your application, you can also view the average, minimum, and maximum response times for each endpoint using the `generate_latency_plot` function from the helper file. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "096cc0cf", "metadata": {}, "outputs": [], "source": [ "generate_latency_plot(endpoints, results_folder)" ] }, { "cell_type": "markdown", "id": "b54256ab", "metadata": {}, "source": [ "**Latency metrics analysis**\n", "\n", "The minimum response times stay below one second for all endpoints regardless of the instance type. However, the average response time is lowest for the three GPU instances, with the `ml.g4dn.xlarge` and `ml.p3.2xlarge` having the lowest average response times. \n", "\n", "**Note** - The response time includes model processing time and the API Gateway/Lambda processing time. If a faster response is required for your application, you can use the Locust result file to examine the response times and choose accordingly. " ] }, { "cell_type": "markdown", "id": "19aacdc5", "metadata": {}, "source": [ "### Step 2.1.5: Finalize configuration\n", "" ] }, { "cell_type": "markdown", "id": "6a8a33d3", "metadata": {}, "source": [ "Now that you are done with the \"grid search\" to run all possible endpoints, you can decide on the final endpoint configuration for your use case based on the average and maximum requests per second expected for your application.\n", "\n", "Using the minimum and maximum requests per second specified earlier, run the remaining section to print the recommended instances for autoscaling. If your application or endpoint receives spiky loads, it is a good idea to configure auto-scaling. \n", "\n", "For other best practices on ML deployment, see [Deployment Best Practices](https://docs.aws.amazon.com/sagemaker/latest/dg/best-practices.html)." ] }, { "cell_type": "code", "execution_count": null, "id": "1a085da4", "metadata": {}, "outputs": [], "source": [ "# Get suggestions for a specific payload\n", "\n", "# This function calculates the number of instances based on linear scaling\n", "# and returns the suggested instance type and counts\n", "phase2_endpoints_dict = get_min_max_instances(results, min_requests_per_second, max_requests_per_second)\n", "\n", "# Print recommended number of instances based on linear scaling\n", "print(\"Recommended endpoint configuration for the specified number of requests: \")\n", "print(\"________________________________________________________________\")\n", "print(f\"Instance Type: {phase2_endpoints_dict[0]['instance_type']}\")\n", "print(f\"Minimum Instance Count: {phase2_endpoints_dict[0]['instance_count']}\")\n", "print(f\"Maximum Instance Count: {phase2_endpoints_dict[1]['instance_count']}\")" ] }, { "cell_type": "markdown", "id": "0e1521b4", "metadata": {}, "source": [ "The function assumes linear scaling while calculating the minimum and maximum number of instances needed to serve your expected requests per second. The code calculates the price for each instance type and instance count, and returns the least expensive option. \n", "\n", "Test the endpoints with the given instance count configurations to ensure that they can serve your requirements."
] }, { "cell_type": "code", "execution_count": null, "id": "b9a46a02", "metadata": {}, "outputs": [], "source": [ "# Optional: test the instances\n", "phase2_endpoints = deploy_endpoints(phase2_endpoints_dict, cpu_model, gpu_model)\n", "phase2_results_folder = run_load_tests(api_url, phase2_endpoints)\n", "generate_plots(phase2_endpoints, phase2_endpoints_dict, phase2_results_folder, sep_cpu_gpu=False)" ] }, { "cell_type": "markdown", "id": "50371ca4", "metadata": {}, "source": [ "Amazon SageMaker provides you with the ability to automatically scale the endpoints for your hosted models (autoscaling), i.e., it dynamically adjusts the number of instances in response to changes in your workload. When workload increases, autoscaling provisions more instances, and when workload decreases, it removes unnecessary instances so you don't pay for instances that you aren't using.\n", "\n", "The main components of an autoscaling policy are - \n", "1. A target metric - an Amazon CloudWatch metric that is monitored to determine if and when to scale\n", "2. Minimum and maximum capacity - minimum and maximum number of instances (provided in the cell above)\n", "3. Cooldown period - the time, in seconds, after a scale-in or scale-out activity completes before another scaling activity can start\n", "4. IAM policy and role to allow Amazon SageMaker to configure autoscaling.\n", "\n", "Next, you can update the **min_capacity** and the **max_capacity** you identified in the preceding cell, and adjust the target value and the scale-in and scale-out cooldown values in the following configuration:\n", "```\n", "resource_id = 'endpoint/' + endpoint_name + '/variant/' + 'AllTraffic'\n", "\n", "client = boto3.client('application-autoscaling')\n", "\n", "response = client.register_scalable_target(\n", "    ServiceNamespace='sagemaker',\n", "    ResourceId=resource_id,\n", "    ScalableDimension='sagemaker:variant:DesiredInstanceCount',\n", "    MinCapacity=5,  # obtained from load testing\n", "    MaxCapacity=7\n", ")\n", "\n", "response = client.put_scaling_policy(\n", "    PolicyName='Invocations-ScalingPolicy',\n", "    ServiceNamespace='sagemaker',\n", "    ResourceId=resource_id,  # Endpoint name\n", "    ScalableDimension='sagemaker:variant:DesiredInstanceCount',\n", "    PolicyType='TargetTrackingScaling',\n", "    TargetTrackingScalingPolicyConfiguration={\n", "        # update target value and cooldown values as per your requirements\n", "        'TargetValue': 1000.0,  # target invocations per instance per minute\n", "        'PredefinedMetricSpecification': {\n", "            'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance'\n", "        },\n", "        'ScaleInCooldown': 600,  # wait 600 seconds before removing instances\n", "        'ScaleOutCooldown': 300  # wait 300 seconds before adding new instances\n", "    }\n", ")\n", "```\n", "\n", "For more information, see [Automatically Scale Amazon SageMaker Models](https://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-auto-scaling.html)." ] }, { "cell_type": "markdown", "id": "6f4d56bf", "metadata": {}, "source": [ "### Step 2.2: Semi-automatic testing\n", "" ] }, { "cell_type": "markdown", "id": "1ec257a6", "metadata": {}, "source": [ "As mentioned previously, the semi-automatic testing is intended for a user more experienced with Amazon SageMaker, who is perhaps interested in running the iteration process of finding the best instance manually.\n", "\n", "To be able to run this, you need to know, or have an estimate of, the maximum and average requests per second your endpoint will be serving. 
This is a key starting point to be able to use the functionality of this notebook.\n", "\n", "As a rule of thumb, test the following two different instance types first, with a single instance of each:\n", "\n", "1. The \"most expensive\" CPU that can host your model (in the sense of cost per hour).\n", "2. The \"cheapest\" GPU that can host your model.\n", "\n", "Then, use the minimum and maximum requests per second (rps) goal that you will need to handle throughout the decision process below.\n", "\n", "\n", "**NOTE**: The idea behind this approach is to identify first what type of instance might suit you better. \n", "\n", "The possible outcomes of this first iteration are as follows:\n", "\n", "1. If your target rps is above the max rps achieved by a single instance of the cheapest available GPU, it is possible that you will have to use either a more expensive GPU or multiple instances of the cheapest one. --> In this scenario, it is very likely that the CPU instances might not be the best option (in some cases you will have to launch too many CPUs to match the performance of a single GPU, for a higher price).\n", "\n", "\n", "\n", "2. If your target rps is below the max rps achieved by a single instance of the most expensive CPU, it is possible that you will be able to use a cheaper CPU instance, or just stay with the one you tested (depending on the target rps vs the max rps achieved). --> In this scenario, it is very likely that the GPU instances might not be the best option.\n", "\n", "\n", "\n", "3. If your target rps is between the max rps achieved by the cheapest GPU and the most expensive CPU, then you can evaluate from a cost perspective what might be best: either using multiple CPU instances (assuming linear scaling) or using a single GPU.\n", "\n", "\n" ] }, { "cell_type": "markdown", "id": "50a06f90", "metadata": {}, "source": [ "### Step 2.2.1: First iteration\n", "" ] }, { "cell_type": "markdown", "id": "b9eb3f91", "metadata": {}, "source": [ "For the model that we will be using (**PyTorch ResNet50**), the most expensive CPU-based instance is the `ml.c5.2xlarge` and the cheapest GPU-based instance is `ml.g4dn.xlarge`. Deploy the model to these two instances."
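, "\n", "\n", "If you prefer to confirm the supported instance types programmatically instead of reading them from the listing page, you can inspect the marketplace model package. The sketch below is optional and assumes the `model_arn_cpu` variable and `sm_client` defined earlier; the GPU package can be inspected the same way:\n", "\n", "```python\n", "# List the real-time inference instance types supported by the model package\n", "package_info = sm_client.describe_model_package(ModelPackageName=model_arn_cpu)\n", "print(package_info['InferenceSpecification']['SupportedRealtimeInferenceInstanceTypes'])\n", "```"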
] }, { "cell_type": "code", "execution_count": null, "id": "1124176d", "metadata": {}, "outputs": [], "source": [ "sat_endpoints_dict = [\n", " {\"instance_type\": \"ml.c5.2xlarge\", \"instance_count\": 1},\n", " {\"instance_type\": 'ml.g4dn.xlarge', \"instance_count\": 1}\n", "]" ] }, { "cell_type": "code", "execution_count": null, "id": "2696015d", "metadata": {}, "outputs": [], "source": [ "# Deploy the endpoints and plot max requests per second for each endpoint type \n", "\n", "sat_endpoints = deploy_endpoints(sat_endpoints_dict, cpu_model, gpu_model)\n", "sat_results_folder = run_load_tests(api_url, sat_endpoints)\n", "results = generate_plots(sat_endpoints, sat_endpoints_dict, sat_results_folder, sep_cpu_gpu=False)" ] }, { "cell_type": "code", "execution_count": null, "id": "261ff4c1", "metadata": {}, "outputs": [], "source": [ "# Programmatically get suggested instance type and counts\n", "sa_endpoints_dict = get_min_max_instances(results, min_requests_per_second, max_requests_per_second)" ] }, { "cell_type": "code", "execution_count": null, "id": "492ea067", "metadata": {}, "outputs": [], "source": [ "# Print recommended number of instances based on linear scaling\n", "print(\"Recommended endpoint configuration for the specified number of requests: \")\n", "print(\"________________________________________________________________\")\n", "print(f\"Instance Type: {sa_endpoints_dict[0]['instance_type']}\")\n", "print(f\"Minimum Instance Count: {sa_endpoints_dict[0]['instance_count']}\")\n", "print(f\"Maximum Instance Count: {sa_endpoints_dict[1]['instance_count']}\")" ] }, { "cell_type": "markdown", "id": "7b1bc984", "metadata": {}, "source": [ "In the results above, the goal of average rps exceeds both the capacity of the most expensive CPU and the cheapest GPU. In addition, the recommended number/type of instances (assuming linear scaling) favours the GPU rather than the CPU.\n", "\n", "In this sense, our exploratory work implies that we need to proceed with a second iteration to see which instances can serve our model properly.\n", "\n", "_Note: If you updated the maximum and minimum requests per second, your results may vary._" ] }, { "cell_type": "markdown", "id": "da768526", "metadata": {}, "source": [ "### Step 2.2.2: Second iteration\n", "\n", "\n", "In this second iteration, since you already found that the CPU instances perform worse, run a test comparing only GPUs.\n", "\n", "In the previous test, the single `ml.g4dn.xlarge` instance serves a bit more than 30rps. Assuming a linear scalability you can test if using 2 instances in an endpoint will allow it to serve 70rps (to verify linear scaling).\n", "\n", "In addition, test the performance of the most expensive GPU (available for this model - in this case the `ml.p3.2xlarge`) to see how it would perform in comparison with the `ml.g4dn.xlarge x 2`. Depending on the results, you can decide if another iteration is required." 
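, "\n", "\n", "Before running the test, it can also help to check the linear-scaling estimate and the hourly cost of the two candidate configurations, as sketched below. The measured requests-per-second value and the hourly prices are illustrative placeholders; query the Price List API (as the plotting helper does) or check the pricing page for current values:\n", "\n", "```python\n", "import math\n", "\n", "# Placeholder: approximate max rps a single ml.g4dn.xlarge sustained in the first iteration\n", "g4dn_rps_single = 35.0\n", "target_rps = 70\n", "\n", "# Linear-scaling estimate of the ml.g4dn.xlarge count needed for the target\n", "g4dn_count = math.ceil(target_rps / g4dn_rps_single)\n", "\n", "# Illustrative us-east-1 hourly hosting prices in USD; replace with current values\n", "g4dn_price, p3_price = 0.736, 3.825\n", "print(g4dn_count, 'x ml.g4dn.xlarge ~', round(g4dn_price * g4dn_count, 2), 'USD/hour')\n", "print('1 x ml.p3.2xlarge ~', round(p3_price, 2), 'USD/hour')\n", "```"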
] }, { "cell_type": "code", "execution_count": null, "id": "c7886213", "metadata": {}, "outputs": [], "source": [ "sat_p2_endpoints_dict = [\n", "    {\"instance_type\": \"ml.p3.2xlarge\", \"instance_count\": 1},\n", "    {\"instance_type\": 'ml.g4dn.xlarge', \"instance_count\": 2}\n", "]" ] }, { "cell_type": "code", "execution_count": null, "id": "e3e7b7be", "metadata": {}, "outputs": [], "source": [ "# Deploy the endpoints\n", "sat_p2_endpoints = deploy_endpoints(sat_p2_endpoints_dict, cpu_model, gpu_model)\n", "\n", "# Run the load tests and organize the data\n", "sat_p2_results_folder = run_load_tests(api_url, sat_p2_endpoints)\n", "\n", "# Generate the plots showing the results\n", "results = generate_plots(sat_p2_endpoints, sat_p2_endpoints_dict, sat_p2_results_folder, sep_cpu_gpu=False)" ] }, { "cell_type": "markdown", "id": "13932a44", "metadata": {}, "source": [ "In this case, the `ml.g4dn.xlarge x 2` endpoint managed to handle ~70 rps (actually, a bit more), whereas the `ml.p3.2xlarge` started to fail at 30 rps (very similar performance to the single `ml.g4dn.xlarge`).\n", "\n", "Now, with regard to price, the `ml.g4dn.xlarge x 2` configuration looks like the final choice: even though it uses 2 instances, its price is still lower than that of a single `ml.p3.2xlarge` instance." ] }, { "cell_type": "markdown", "id": "3691b78f", "metadata": {}, "source": [ "## Step 3: Clean up Resources\n", "" ] }, { "cell_type": "markdown", "id": "ec237e26", "metadata": {}, "source": [ "Remember to delete the endpoints and unsubscribe from AWS Marketplace once your tests are complete to avoid incurring additional daily/monthly costs." ] }, { "cell_type": "code", "execution_count": null, "id": "7e961aeb", "metadata": { "scrolled": true }, "outputs": [], "source": [ "# delete endpoints from comprehensive testing\n", "clean_up_endpoints(endpoints + phase2_endpoints)\n", "\n", "# delete endpoints from semi-automatic testing\n", "clean_up_endpoints(sat_endpoints + sat_p2_endpoints)" ] }, { "cell_type": "markdown", "id": "183decb0", "metadata": {}, "source": [ "If you would like to unsubscribe from the model, follow these steps. Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home?region=us-east-1#/models) created from the model package or using the algorithm. Note - You can find this information by looking at the container name associated with the model.\n", "\n", "__Steps to unsubscribe from the product on AWS Marketplace:__\n", "1. Navigate to the __Machine Learning__ tab on [Your Software subscriptions page](https://aws.amazon.com/marketplace/ai/library?productType=ml)\n", "2. Locate the listing that you want to cancel the subscription for, and then choose __Cancel Subscription__.\n", "\n", "Note: If you do not delete the endpoint, you will be charged for the model as long as it is in use."
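, "\n", "\n", "Before unsubscribing, you can also verify that no endpoints from this notebook are still in service. A minimal check using the `sm_client` created earlier (it lists all endpoints in the account and region, so review the output for the ones created here):\n", "\n", "```python\n", "# List endpoints that still exist in this account and region\n", "for ep in sm_client.list_endpoints(MaxResults=100)['Endpoints']:\n", "    print(ep['EndpointName'], ep['EndpointStatus'])\n", "```"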
] }, { "cell_type": "code", "execution_count": null, "id": "c157c39e", "metadata": {}, "outputs": [], "source": [ "# delete infrastructure created to load test the endpoints\n", "models_list = [cpu_model, gpu_model]\n", "delete_infra(project_name, account_id, rest_api_id, models_list)" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "404.567px" }, "toc_section_display": true, "toc_window_display": true }, "toc-autonumbering": true, "toc-showcode": false, "toc-showmarkdowntxt": false, "toc-showtags": false }, "nbformat": 4, "nbformat_minor": 5 }