{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "# Optimize SageMaker Endpoint Auto scaling using Inference recommender" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 1. Introduction\n", "\n", "Amazon SageMaker real-time endpoints allow you to host ML applications at scale. SageMaker Hosting offers various scaling options for real-time endpoint and they are detailed in this [blog](https://aws.amazon.com/blogs/machine-learning/configuring-autoscaling-inference-endpoints-in-amazon-sagemaker/). In summary, SageMaker endpoint supports three scaling options. The first and the recommended option is TragetTracking. In this option, a target value that represents the average utilization or/and throughput of single host is set as a scaling threshold. Secondly, you can define StepScaling, which is an advanced method for scaling based on the size of alarm breach. The final one is scheduled scaling, which lets you specify an one-time schedule or a recurring schedule. It is always recommended to combine the scaling options to better resilience. \n", "\n", "In this notebook, we provide a design pattern for arriving right deployment configuration for your application. In addition, we provide a list of steps to follow, so even if your application has an unique behavior, such as a different system characteristics or traffic pattern, these systemic approach can be applied to arrive the scaling policies. The procedure is further simplified with the use of Inference recommender, a right sizing and benchmarking tool, built inside SageMaker. " ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 2. Setup \n", "\n", "Note that we are using the `conda_python3` kernel in SageMaker Notebook Instances. This is running Python 3.6. If you'd like to use the same setup, in the AWS Management Console, go to the Amazon SageMaker console. Choose Notebook Instances, and click create a new notebook instance. Upload the current notebook and set the kernel. You can also run this in SageMaker Studio Notebooks with the `Python 3 (Data Science)` kernel.\n", "\n", "In the next steps, you import standard methods and libraries as well as set variables that will be used in this notebook. The `get_execution_role` function retrieves the AWS Identity and Access Management (IAM) role you created at the time of creating your notebook instance." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Install latest botocore\n", "!pip install --upgrade pip awscli botocore boto3 --quiet" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from sagemaker import get_execution_role, Session, image_uris\n", "import boto3\n", "import time\n", "\n", "region = boto3.Session().region_name\n", "role = get_execution_role()\n", "sm_client = boto3.client(\"sagemaker\", region_name=region)\n", "sagemaker_session = Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.1 Permission update\n", "\n", "In addition to the Sagemaker permission, you need `application auto-scaling` and `cloudwatch` permission to execute this notebook. Please add the below policy to your notebook execution role. \n", "\n", "```\n", "{\n", " \"Version\": \"2012-10-17\",\n", " \"Statement\": [\n", " {\n", " \"Sid\": \"VisualEditor0\",\n", " \"Effect\": \"Allow\",\n", " \"Action\": [\n", " \t\"application-autoscaling:RegisterScalableTarget\",\n", " \"application-autoscaling:DescribeScalableTargets\",\n", " \"application-autoscaling:DescribeScalingPolicies\",\n", " \"application-autoscaling:PutScalingPolicy\",\n", " \"application-autoscaling:DeleteScalingPolicy\",\n", " \"application-autoscaling:DeregisterScalableTarget\",\n", " \"cloudwatch:GetMetricStatistics\",\n", " \"cloudwatch:GetMetricWidgetImage\"\n", " ],\n", " \"Resource\": \"*\"\n", " }\n", " ]\n", "}\n", "```" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "### Optional: Train an XGBoost model\n", "\n", "Let's quickly train an XGBoost model. If you already have a model, you can skip this step and proceed to the next section.\n", "\n", "For the purposes of this notebook, we are training an XGBoost model on random data. As a first step, install scikit-learn and XGBoost" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Install sklearn and XGBoost\n", "!pip3 install -U scikit-learn xgboost==1.5.0 --quiet" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "### Generate Model and payload\n", "\n", "As mentioned, you should skip this step and proceed to the next section if you already have the model. In addition to the model, you should also have a valid payload which can be used for testing your model. \n", "\n", "In this case, we are training the XGBoost model with random data and also generating a sample payload for the exercise. However this procedure is independent of the deployment model or configuration. So you can adopt the same approach for your application and for your deployment choice." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "import numpy as np\n", "from numpy import loadtxt\n", "from xgboost import XGBClassifier\n", "from sklearn.model_selection import train_test_split\n", "\n", "# Generate dummy data to perform binary classification\n", "seed = 7\n", "features = 50 # number of features\n", "samples = 10000 # number of samples\n", "X = np.random.rand(samples, features).astype(\"float32\")\n", "Y = np.random.randint(2, size=samples)\n", "\n", "test_size = 0.1\n", "X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)\n", "\n", "model = XGBClassifier()\n", "# Train the classifier model with random data\n", "model.fit(X_train, y_train)\n", "\n", "model_archive_name = \"model.tar.gz\"\n", "payload_archive_name = \"payload.tar.gz\"\n", "model_fname = \"xgboost.model\"\n", "# Save the model\n", "model.save_model(model_fname)\n", "\n", "batch_size = 100\n", "# Generate a sample payload which can then be used for benchmarking\n", "np.savetxt(\"sample.csv\", X_test[0:batch_size, :], delimiter=\",\")" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 3. Create a tarball\n", "\n", "If you are bringing your own XGBoost model, SageMaker requires that it is in .tar.gz format containing a model file and, optionally, inference code.\n", "\n", "Similarly for the payload, SageMaker Inference recommender service expects a single archive file in .tar.gz format, which contains a list of valid samples. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "!tar -cvpzf {model_archive_name} 'xgboost.model'\n", "!tar -cvzf {payload_archive_name} sample.csv" ] }, { "cell_type": "markdown", "metadata": { "id": "z9kutpTP-uxd", "pycharm": { "name": "#%% md\n" } }, "source": [ "## 4. Upload to S3\n", "\n", "We now have a model and the payload archive ready. We need to upload it to S3 before we can use with Inference Recommender. Furthermore, we use the SageMaker Python SDK to handle the upload." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TocwZSw4-uxd", "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# model package tarball (model artifact + inference code)\n", "model_url = sagemaker_session.upload_data(path=model_archive_name, key_prefix=\"model\")\n", "sample_payload_url = sagemaker_session.upload_data(path=payload_archive_name, key_prefix=\"payload\")\n", "print(\"model uploaded to: {} and the sample payload to {}\".format(model_url, sample_payload_url))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 5. Container image URL\n", "\n", "If you don’t have an inference container image, you can use one of the open source AWS [Deep Learning Containers (DLCs)](https://github.com/aws/deep-learning-containers) provided by AWS to serve your ML model. The code below retrieves a SageMaker managed container for XGBoost." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "framework_version = \"1.5-1\"\n", "framework = \"xgboost\"\n", "container_url = image_uris.retrieve(framework, region, framework_version)\n", "container_url" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 6a. 
Identify Application characteristics\n", "\n", "To dertermine to the correct scaling property, the first step in the plan is to find application behavior on the chosen hardware. This can be achieved by running the application on a single host and increasing the request load to the endpoint gradually until it saturates.\n", "\n", "For benchmarking, we use the Inference recommender `Default` job. In Default recommendation job, we can limit the search to a single instance by passing the single instance type in the supported_instance_types. The service then provisioning the endpoint. It then gradually increases the request and stops when the benchmark reaches saturation or when 1% of invocations call (invoke_endpoint) fails. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from inference_recommender import trigger_inference_recommender_job\n", "\n", "# For this usecase we have using ml.c5.large instance type\n", "instance_type = \"ml.c5.large\"\n", "\n", "# We are starting an inference recommender job. The job uses model_url, container_url and instance type\n", "# for provisioning SageMaker endpoint and then uses sample_payload for benchmarking the endpoint.\n", "# As part of the benchmarking, the service increasing the request load to the endpoint gradually\n", "# until it saturates. In many cases, the endpoint can no longer handle any more requests and performance\n", "# begins to deteriorate after saturation. Inference recommender will then stop the benchmark and return results.\n", "job_name, model_package_arn = trigger_inference_recommender_job(\n", " model_url=model_url,\n", " payload_url=sample_payload_url,\n", " container_url=container_url,\n", " instance_type=instance_type,\n", " execution_role=role,\n", " framework=framework,\n", " framework_version=framework_version,\n", ")\n", "print(\"Inference recommender job_name={} model_package_arn={}\".format(job_name, model_package_arn))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 6b. Analyzing the result:\n", "\n", "We then analyze the results using hosting endpoint metrics. Below is a visualization of the invocation metrics, and from that, it follows that the hardware utilization. By layering invocations and utilization graphs, we are able to easily set limits for invocations per instance.\n", "\n", "In this step we run various scaling percentage to find the right scaling limit. As a general scaling rule, the utilization percentage should be around 40% if you are optimizing for availability, around 70% if you are optimizing for cost, and around 50% if you want to balance availability and cost. The above guidance gives an overview of the two dimensions, availability and cost. The lower the threshold, the better the availability, and the higher the threshold, the better the cost. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from result_analysis import analysis_inference_recommender_result\n", "\n", "# The following function allows you to change the percentage and see how the invocations,\n", "# latency and utilization metrics limit. 
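{ "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "The `trigger_inference_recommender_job` helper above comes from this example's local `inference_recommender` module; it registers the model as a model package (returning `model_package_arn`) and starts the benchmark. As a rough, illustrative sketch of the boto3 call it presumably wraps (the job name below is hypothetical, and the helper may set additional fields):\n", "\n", "```python\n", "import time\n", "\n", "# Assumes role, model_package_arn, and sm_client are defined in earlier cells.\n", "# A Default job ramps up load on the given instance type until the endpoint\n", "# saturates or the invocation error rate exceeds 1%.\n", "sm_client.create_inference_recommendations_job(\n", "    JobName=\"benchmark-xgboost-{}\".format(round(time.time())),  # hypothetical name\n", "    JobType=\"Default\",\n", "    RoleArn=role,\n", "    InputConfig={\n", "        \"ModelPackageVersionArn\": model_package_arn,\n", "        \"EndpointConfigurations\": [{\"InstanceType\": \"ml.c5.large\"}],\n", "    },\n", ")\n", "```" ] },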
{ "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 6b. Analyzing the result\n", "\n", "We then analyze the results using the hosting endpoint metrics. Below is a visualization of the invocation metrics layered with the corresponding hardware utilization metrics. By layering the invocations and utilization graphs, we can easily set limits for invocations per instance.\n", "\n", "In this step, we try various utilization percentages to find the right scaling limit. As a general scaling rule, the utilization threshold should be around 40% if you are optimizing for availability, around 70% if you are optimizing for cost, and around 50% if you want to balance availability and cost. These are the two dimensions to trade off: the lower the threshold, the better the availability; the higher the threshold, the better the cost efficiency. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from result_analysis import analysis_inference_recommender_result\n", "\n", "# The following function lets you change the percentage thresholds and see where the\n", "# invocation, latency, and utilization metrics land. We highly recommend that you try\n", "# different percentage thresholds and find the best fit based on your metrics.\n", "# For this use case, we have decided to proceed with a threshold between 45 - 55%\n", "max_value, upper_limit, lower_limit = analysis_inference_recommender_result(\n", "    job_name=job_name, index=0, upper_threshold=55.0, lower_threshold=45.0\n", ")" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 7. Set scaling expectations\n", "\n", "The next step in the plan is to set the scaling expectations and develop scaling policies based on them. This step involves defining the maximum and minimum requests to be served.\n", "\n", "For our application, the expectations are a maximum of 500 requests per second (max) and a minimum of 70 requests per second (min). " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "import math\n", "\n", "# Scaling expectation\n", "max_tps = 500\n", "min_tps = 70\n", "\n", "# Based on the expectations set above, we define MinCapacity and MaxCapacity using the formulas below.\n", "# As InvocationsPerInstance is measured per minute, we normalize it to seconds for the calculation below.\n", "tps_single_instance = upper_limit / 60\n", "\n", "# The growth factor is the additional capacity that you are willing to add when your load exceeds max_tps\n", "growth_factor = 1.2\n", "max_capacity = math.ceil((max_tps / tps_single_instance) * growth_factor)\n", "min_capacity = math.ceil((min_tps / tps_single_instance))\n", "invocations_per_instance = upper_limit\n", "print(\n", "    \"Scaling configuration: MaxCapacity = {}, MinCapacity = {}, InvocationsPerInstance = {}\".format(\n", "        max_capacity, min_capacity, invocations_per_instance\n", "    )\n", ")" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 8. Create endpoint and set scaling policy\n", "\n", "The final step in the plan is to define a scaling policy and evaluate its impact. The evaluation step is essential to validate the results of the calculations made so far. In addition, it helps us adjust the scaling settings if they do not meet the need. \n", "\n", "The first step in this procedure is to provision an endpoint." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from sagemaker.model import Model\n", "\n", "# create an evaluation endpoint\n", "endpoint_name = \"evaluation-endpoint-\" + str(round(time.time()))\n", "model = Model(image_uri=container_url, model_data=model_url, role=role)\n", "\n", "predictor = model.deploy(\n", "    initial_instance_count=min_capacity, instance_type=instance_type, endpoint_name=endpoint_name\n", ")\n", "\n", "endpoint = sm_client.describe_endpoint(EndpointName=endpoint_name)\n", "variant_name = endpoint[\"ProductionVariants\"][0][\"VariantName\"]\n", "print(\"Endpoint details: EndpointName = {}, VariantName = {}\".format(endpoint_name, variant_name))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 8a. Apply scaling policies to SageMaker endpoint\n", "\n", "In order to define the scaling policies, you first register the endpoint variant as a scalable target and then add policies to it."
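, "\n", "\n", "As a rough illustration of what such a registration and a target tracking policy look like with boto3 (an assumption about the local `endpoint_scaling` helpers used below, not their verbatim code):\n", "\n", "```python\n", "import boto3\n", "\n", "aas = boto3.client(\"application-autoscaling\")\n", "resource_id = \"endpoint/{}/variant/{}\".format(endpoint_name, variant_name)\n", "\n", "# Register the variant's instance count as a scalable target.\n", "aas.register_scalable_target(\n", "    ServiceNamespace=\"sagemaker\",\n", "    ResourceId=resource_id,\n", "    ScalableDimension=\"sagemaker:variant:DesiredInstanceCount\",\n", "    MinCapacity=min_capacity,\n", "    MaxCapacity=max_capacity,\n", ")\n", "\n", "# Keep the average InvocationsPerInstance near the chosen target value.\n", "aas.put_scaling_policy(\n", "    PolicyName=\"invocations-target-tracking\",  # illustrative name\n", "    ServiceNamespace=\"sagemaker\",\n", "    ResourceId=resource_id,\n", "    ScalableDimension=\"sagemaker:variant:DesiredInstanceCount\",\n", "    PolicyType=\"TargetTrackingScaling\",\n", "    TargetTrackingScalingPolicyConfiguration={\n", "        \"TargetValue\": float(invocations_per_instance),\n", "        \"PredefinedMetricSpecification\": {\n", "            \"PredefinedMetricType\": \"SageMakerVariantInvocationsPerInstance\"\n", "        },\n", "    },\n", ")\n", "```"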
, "\n", "The following cell uses helper functions from the local `endpoint_scaling` module to register scaling and set target tracking policies on CPU utilization and InvocationsPerInstance, applying the scaling configuration to the provisioned SageMaker endpoint." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from endpoint_scaling import register_scaling\n", "from endpoint_scaling import set_target_scaling_on_invocation\n", "from endpoint_scaling import set_target_scaling_on_cpu_utilization\n", "\n", "register_scaling_response = register_scaling(\n", "    endpoint_name=endpoint_name,\n", "    variant_name=variant_name,\n", "    max_capacity=max_capacity,\n", "    min_capacity=min_capacity,\n", ")\n", "\n", "invocation_scaling = set_target_scaling_on_invocation(\n", "    endpoint_name=endpoint_name, variant_name=variant_name, target_value=invocations_per_instance\n", ")\n", "\n", "# CPUUtilization is reported summed across cores (0-200% on the 2-vCPU ml.c5.large),\n", "# so a target of 100 corresponds to roughly 50% utilization of the instance.\n", "cpu_scaling = set_target_scaling_on_cpu_utilization(\n", "    endpoint_name=endpoint_name, variant_name=variant_name, target_value=100\n", ")" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 8b. Evaluate the scaling policy\n", "\n", "The evaluation is done using the Inference Recommender `Advanced` job, where we specify the traffic pattern, the MaxInvocations stopping condition, and the endpoint to benchmark against. Having provisioned the endpoint and set the scaling policies, we now run the Inference Recommender `Advanced` job to validate the policy." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from inference_recommender import trigger_inference_recommender_evaluation_job\n", "from result_analysis import analysis_evaluation_result\n", "\n", "eval_job = trigger_inference_recommender_evaluation_job(\n", "    model_package_arn=model_package_arn,\n", "    execution_role=role,\n", "    endpoint_name=endpoint_name,\n", "    instance_type=instance_type,\n", "    max_invocations=max_tps * 60,\n", "    max_model_latency=10000,\n", "    spawn_rate=1,\n", ")\n", "\n", "print(\"Evaluation job = {}, EndpointName = {}\".format(eval_job, endpoint_name))\n", "\n", "# In the next step, we visualize the CloudWatch metrics and verify whether we reach 30000 invocations.\n", "max_value = analysis_evaluation_result(endpoint_name, variant_name, job_name=eval_job)\n", "\n", "print(\"Max invocation realized = {}, and the expectation is {}\".format(max_value, 30000))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "With this scaling policy, we could achieve 500 TPS at a request rate that adds one user per minute. But does the same policy hold when we ramp up the request rate three times as fast? Let's rerun the same evaluation with a higher spawn rate. \n", "\n", "To execute the step again, clear the auto scaling policies we set before and reset to the initial instance count."
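, "\n", "\n", "For reference, a minimal sketch of what a reset like `clear_auto_scaling_and_reset_to_initialCount` typically involves (an assumption about the helper, not its verbatim code): deregistering the scalable target, which also deletes its scaling policies, and then resetting the variant's instance count.\n", "\n", "```python\n", "import boto3\n", "\n", "aas = boto3.client(\"application-autoscaling\")\n", "sm = boto3.client(\"sagemaker\")\n", "resource_id = \"endpoint/{}/variant/{}\".format(endpoint_name, variant_name)\n", "\n", "# Deregistering the scalable target also deletes the scaling policies attached to it.\n", "aas.deregister_scalable_target(\n", "    ServiceNamespace=\"sagemaker\",\n", "    ResourceId=resource_id,\n", "    ScalableDimension=\"sagemaker:variant:DesiredInstanceCount\",\n", ")\n", "\n", "# Reset the variant back to its initial instance count.\n", "sm.update_endpoint_weights_and_capacities(\n", "    EndpointName=endpoint_name,\n", "    DesiredWeightsAndCapacities=[\n", "        {\"VariantName\": variant_name, \"DesiredInstanceCount\": min_capacity}\n", "    ],\n", ")\n", "```"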
] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from endpoint_scaling import clear_auto_scaling_and_reset_to_initialCount\n", "\n", "# clear the scaling policies and reset the endpoint to the initial instance count\n", "clear_auto_scaling_and_reset_to_initialCount(\n", "    endpoint_name=endpoint_name, variant_name=variant_name, intial_count=min_capacity\n", ")" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "We use the same helper functions to apply the scaling configuration to the provisioned SageMaker endpoint again, this time with a slightly higher invocation target." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from endpoint_scaling import register_scaling\n", "from endpoint_scaling import set_target_scaling_on_invocation\n", "from endpoint_scaling import set_target_scaling_on_cpu_utilization\n", "\n", "register_scaling_response = register_scaling(\n", "    endpoint_name=endpoint_name,\n", "    variant_name=variant_name,\n", "    max_capacity=max_capacity,\n", "    min_capacity=min_capacity,\n", ")\n", "\n", "# Raise the invocation target by 1.3x for this experiment\n", "invocation_scaling = set_target_scaling_on_invocation(\n", "    endpoint_name=endpoint_name,\n", "    variant_name=variant_name,\n", "    target_value=invocations_per_instance * 1.3,\n", ")\n", "\n", "cpu_scaling = set_target_scaling_on_cpu_utilization(\n", "    endpoint_name=endpoint_name, variant_name=variant_name, target_value=100\n", ")" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 8c. Repeat the process if anything changes\n", "\n", "We then increase the request rate (aka spawn rate) to 3 users per minute, meaning we ramp up the request load three times as fast as in the original test. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from endpoint_scaling import wait_for_endpoint_to_finish_updating_or_creating\n", "\n", "# wait for the endpoint to reach a stable state\n", "wait_for_endpoint_to_finish_updating_or_creating(endpoint_name=endpoint_name)\n", "\n", "eval_aggressive_job = trigger_inference_recommender_evaluation_job(\n", "    model_package_arn=model_package_arn,\n", "    execution_role=role,\n", "    endpoint_name=endpoint_name,\n", "    instance_type=instance_type,\n", "    max_invocations=max_tps * 60,\n", "    max_model_latency=10000,\n", "    spawn_rate=3,\n", ")\n", "\n", "print(\"Evaluation job = {}, EndpointName = {}\".format(eval_aggressive_job, endpoint_name))\n", "\n", "# In the next step, we visualize the CloudWatch metrics and verify whether we reach 30000 invocations.\n", "max_value = analysis_evaluation_result(endpoint_name, variant_name, job_name=eval_aggressive_job)\n", "\n", "print(\"Max invocation realized = {}, and the expectation is {}\".format(max_value, 30000))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Note that the evaluation fails: the maximum invocations reached fall short of the 30000 invocations per minute (500 TPS) expectation. To improve availability, we now need to alter our scaling policy. \n", "\n", "With a step scaling configuration, we set scaling adjustments that provision *n* instances once a threshold is breached. To try step scaling, we now reset to the initial instance count and attempt setting the revised scaling policies."
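, "\n", "\n", "Note that, unlike target tracking (where Application Auto Scaling manages the CloudWatch alarms for you), a step scaling policy is only invoked when a CloudWatch alarm that you create fires. A minimal sketch of wiring an alarm on InvocationsPerInstance to the policy returned by the `set_step_scaling` function defined below (the alarm name, threshold, and evaluation settings are illustrative assumptions):\n", "\n", "```python\n", "cw_client = boto3.client(\"cloudwatch\", region_name=region)\n", "\n", "# set_step_scaling (defined below) returns the put_scaling_policy response,\n", "# whose PolicyARN the alarm invokes on breach.\n", "policy_name, response = set_step_scaling(endpoint_name, variant_name)\n", "\n", "cw_client.put_metric_alarm(\n", "    AlarmName=\"step-scaling-invocations-alarm\",  # illustrative name\n", "    Namespace=\"AWS/SageMaker\",\n", "    MetricName=\"InvocationsPerInstance\",\n", "    Dimensions=[\n", "        {\"Name\": \"EndpointName\", \"Value\": endpoint_name},\n", "        {\"Name\": \"VariantName\", \"Value\": variant_name},\n", "    ],\n", "    Statistic=\"Average\",\n", "    Period=60,\n", "    EvaluationPeriods=1,\n", "    Threshold=float(invocations_per_instance),  # step intervals are relative to this\n", "    ComparisonOperator=\"GreaterThanThreshold\",\n", "    AlarmActions=[response[\"PolicyARN\"]],\n", ")\n", "```"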
] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from endpoint_scaling import clear_auto_scaling_and_reset_to_initialCount\n", "\n", "clear_auto_scaling_and_reset_to_initialCount(\n", "    endpoint_name=endpoint_name, variant_name=variant_name, intial_count=min_capacity\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "aas_client = boto3.client(\"application-autoscaling\", region_name=region)\n", "\n", "\n", "def set_step_scaling(endpoint_name, variant_name):\n", "    policy_name = \"step-scaling-{}\".format(str(round(time.time())))\n", "    resource_id = \"endpoint/{}/variant/{}\".format(endpoint_name, variant_name)\n", "    response = aas_client.put_scaling_policy(\n", "        PolicyName=policy_name,\n", "        ServiceNamespace=\"sagemaker\",\n", "        ResourceId=resource_id,\n", "        ScalableDimension=\"sagemaker:variant:DesiredInstanceCount\",\n", "        PolicyType=\"StepScaling\",\n", "        StepScalingPolicyConfiguration={\n", "            \"AdjustmentType\": \"ChangeInCapacity\",\n", "            # Each interval is relative to the alarm threshold: a breach of up to 5\n", "            # adds 1 instance, a breach of 5-80 adds 3, and anything beyond 80 adds 4.\n", "            \"StepAdjustments\": [\n", "                {\n", "                    \"MetricIntervalLowerBound\": 0.0,\n", "                    \"MetricIntervalUpperBound\": 5.0,\n", "                    \"ScalingAdjustment\": 1,\n", "                },\n", "                {\n", "                    \"MetricIntervalLowerBound\": 5.0,\n", "                    \"MetricIntervalUpperBound\": 80.0,\n", "                    \"ScalingAdjustment\": 3,\n", "                },\n", "                {\"MetricIntervalLowerBound\": 80.0, \"ScalingAdjustment\": 4},\n", "            ],\n", "            \"MetricAggregationType\": \"Average\",\n", "        },\n", "    )\n", "    return policy_name, response" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from endpoint_scaling import register_scaling\n", "from endpoint_scaling import set_target_scaling_on_invocation\n", "from endpoint_scaling import set_target_scaling_on_cpu_utilization\n", "\n", "register_scaling_response = register_scaling(\n", "    endpoint_name=endpoint_name,\n", "    variant_name=variant_name,\n", "    max_capacity=max_capacity,\n", "    min_capacity=min_capacity,\n", ")\n", "\n", "\n", "cpu_scaling = set_target_scaling_on_cpu_utilization(\n", "    endpoint_name=endpoint_name, variant_name=variant_name, target_value=100\n", ")\n", "\n", "set_step_scaling(endpoint_name=endpoint_name, variant_name=variant_name)" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Repeat the evaluation with the Inference Recommender `Advanced` job."
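, "\n", "\n", "As with the Default job earlier, `trigger_inference_recommender_evaluation_job` wraps this example's local helper module. A rough sketch of the `Advanced` job it presumably issues (the job name, duration, and phase shape below are illustrative assumptions):\n", "\n", "```python\n", "import time\n", "\n", "sm_client.create_inference_recommendations_job(\n", "    JobName=\"evaluate-scaling-{}\".format(round(time.time())),  # hypothetical name\n", "    JobType=\"Advanced\",\n", "    RoleArn=role,\n", "    InputConfig={\n", "        \"ModelPackageVersionArn\": model_package_arn,\n", "        \"Endpoints\": [{\"EndpointName\": endpoint_name}],  # benchmark the live endpoint\n", "        \"JobDurationInSeconds\": 7200,  # illustrative\n", "        \"TrafficPattern\": {\n", "            \"TrafficType\": \"PHASES\",\n", "            # Start with one user and add spawn_rate new users every minute.\n", "            \"Phases\": [{\"InitialNumberOfUsers\": 1, \"SpawnRate\": 3, \"DurationInSeconds\": 3600}],\n", "        },\n", "    },\n", "    StoppingConditions={\n", "        \"MaxInvocations\": max_tps * 60,\n", "        \"ModelLatencyThresholds\": [{\"Percentile\": \"P95\", \"ValueInMilliseconds\": 10000}],\n", "    },\n", ")\n", "```"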
] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from endpoint_scaling import wait_for_endpoint_to_finish_updating_or_creating\n", "\n", "# wait for the endpoint to reach a stable state\n", "wait_for_endpoint_to_finish_updating_or_creating(endpoint_name=endpoint_name)\n", "\n", "eval_aggressive_job = trigger_inference_recommender_evaluation_job(\n", "    model_package_arn=model_package_arn,\n", "    execution_role=role,\n", "    endpoint_name=endpoint_name,\n", "    instance_type=instance_type,\n", "    max_invocations=max_tps * 60,\n", "    max_model_latency=10000,\n", "    spawn_rate=3,\n", ")\n", "\n", "print(\"Evaluation job = {}, EndpointName = {}\".format(eval_aggressive_job, endpoint_name))\n", "\n", "# In the next step, we visualize the CloudWatch metrics and verify whether we reach 30000 invocations.\n", "max_value = analysis_evaluation_result(endpoint_name, variant_name, job_name=eval_aggressive_job)\n", "\n", "print(\"Max invocation realized = {}, and the expectation is {}\".format(max_value, 30000))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Step scaling now allows us to reach the expected TPS at three times the ramp-up rate of the original test. \n", "\n", "Defining scaling policies and evaluating the results with Inference Recommender is therefore an essential part of validating your deployment configuration. " ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 9. Clean up the resources\n", "\n", "If you don't intend to run further inference or do anything else with the endpoint, you should delete it along with the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Delete the evaluation endpoint (and its endpoint config), then the model\n", "predictor.delete_endpoint()\n", "model.delete_model()" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 10. Conclusion\n", "\n", "The process of defining the scaling policy for your application can be challenging. You must understand the characteristics of the application, determine your scaling needs, and iterate on scaling policies to meet those needs. This notebook has walked through each of these steps and explained the approach to take at each one. It has also shown how the process is simplified with SageMaker Inference Recommender: you can find your application characteristics and evaluate scaling policies by using its benchmarking system. The proposed design pattern can help you arrive at a scalable application that takes into account the availability and cost of your application in hours rather than days." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This us-west-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/sagemaker-inference-recommender|auto-scaling|optimize_endpoint_scaling.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 
128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", 
"vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": 
"Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 4 }