{ "cells": [ { "cell_type": "markdown", "id": "eef21726", "metadata": {}, "source": [ "# Pre-processing and XGBoost model inference pipeline with NVIDIA Triton Inference Server on Amazon SageMaker using Multi-model endpoint(MME)" ] }, { "cell_type": "markdown", "id": "541db3cc", "metadata": {}, "source": [ "With the 22.05 version release of [NVIDIA Triton](https://github.com/triton-inference-server/server/) container image on SageMaker you can now use Triton's Forest Inference Library (FIL) backend to easily serve tree based ML models like XGBoost for high-performance CPU and GPU inference in SageMaker. Using Triton's FIL backend allows you to benefit from performance optimizations like dynamic batching and concurrent execution which help maximize the utilization of GPU and CPU, further lowering the cost of inference. The multi-framework support provided by NVIDIA Triton allows you to seamlessly deploy tree-based ML models alongside deep learning models for fast, unified inference pipelines.\n", "\n", "Machine Learning applications are complex and can often require data pre-processing. In this notebook, we will not only deep dive into how to deploy a tree-based ML model like XGBoost using the FIL Backend in Triton on SageMaker endpoint but also cover how to implement python-based data pre-processing inference pipeline for your model using the ensemble feature in Triton. This will allow us to send in the raw data from client side and have both data pre-processing and model inference happen in Triton SageMaker endpoint for the optimal inference performance.\n", "\n", "## To Run This Notebook Please Select `Python 3 (Data Science)` Kernel from the Kernel Dropdown menu\n", "\n", "**Note:** This notebook was tested with the `Python 3 (Data Science)` kernel on an Amazon SageMaker Studio instance of type `ml.c5.xlarge`.\n", "\n", "The alternate Studio instance types - `ml.c5.large`, `ml.c5.2xlarge`" ] }, { "cell_type": "markdown", "id": "924738a1", "metadata": {}, "source": [ "## Forest Inference Library (FIL)" ] }, { "cell_type": "markdown", "id": "4c1cf64b", "metadata": {}, "source": [ "RAPIDS Forest Inference Library (FIL) is a library to provide high-performance inference for tree-based models. Here are some important FIL features:\n", "\n", "* Supports XGBoost, LightGBM, cuML RandomForest, and Scikit Learn Random Forest\n", "* No conversion needed for XGBoost and LightGBM. SKLearn or cuML pickle models need to be converted to Treelite's binary checkpoint format \n", "* SKLearn Random Forest is supported for single-output regression and multi-class classification\n", "* Both CPU and GPU are supported\n", "\n", "Below we show benchmark highlighting FIL's throughput performance against CPU XGBoost.\n", "\n", "\"fil-benchmark\"" ] }, { "cell_type": "markdown", "id": "f29e6793", "metadata": {}, "source": [ "## Triton FIL Backend\n", "FIL is available as a backend in Triton with features to allow for serving XGBoost, LightGBM and RandomForest models both on CPU and GPU with high performance. 
"Here are some important features of the FIL Backend:\n", "\n", "* **Shapley Value Support (GPU)**: GPU Shapley Values are supported for Model Explainability\n", "* **Categorical Feature Support**: Models trained on categorical features are fully supported.\n", "* **CPU Optimizations**: Optimized CPU mode offers faster execution than native XGBoost.\n", "\n", "To learn more about the FIL Backend's features, please see the [FAQ Notebook](https://github.com/triton-inference-server/fil_backend/blob/fea-faq_nb/notebooks/faq/FAQs.ipynb) and the [Triton FIL Backend GitHub](https://github.com/triton-inference-server/fil_backend/tree/main)." ] }, { "cell_type": "markdown", "id": "0a32ed9b", "metadata": {}, "source": [ "## Triton Model Ensemble Feature\n", "Triton Inference Server greatly simplifies the deployment of AI models at scale in production. Triton Server comes with a convenient solution that simplifies building pre-processing and post-processing pipelines. The Triton Server platform provides the ensemble scheduler, which is responsible for pipelining the models participating in the inference process while ensuring efficiency and optimizing throughput. Using ensemble models avoids the overhead of transferring intermediate tensors and minimizes the number of requests that must be sent to Triton.\n", "\n", "\"triton-ensemble\"" ] }, { "cell_type": "markdown", "id": "d035038d", "metadata": {}, "source": [ "In this notebook we will show how to use the ensemble feature to build a pipeline of data preprocessing followed by XGBoost model inference, and you can extrapolate from it to add custom postprocessing to the pipeline." ] }, { "cell_type": "markdown", "id": "7156842c", "metadata": {}, "source": [ "## Set up Environment" ] }, { "cell_type": "markdown", "id": "b0b87fe3", "metadata": {}, "source": [ "We begin by setting up the required environment. We install the dependencies required to package our model pipeline and run inferences using Triton server. We also define the IAM role that will give SageMaker access to the model artifacts and the NVIDIA Triton ECR image." ] }, { "cell_type": "code", "execution_count": null, "id": "7309c247", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install nvidia-pyindex\n", "!pip install tritonclient[http]" ] }, { "cell_type": "code", "execution_count": null, "id": "b67f58f9", "metadata": { "tags": [] }, "outputs": [], "source": [ "import boto3\n", "import json\n", "import sagemaker\n", "import time\n", "import os\n", "from sagemaker import get_execution_role\n", "import pandas as pd\n", "import numpy as np\n", "import subprocess\n", "\n", "sess = boto3.Session()\n", "sm = sess.client(\"sagemaker\")\n", "\n", "default_bucket = \"\"  # Enter just the bucket name, i.e. do not include the s3:// prefix\n", "assert default_bucket != \"\", \"Please enter the bucket you wish to use for this lab. Enter just the bucket name, without the s3:// prefix.\"\n",
"sagemaker_session = sagemaker.Session(default_bucket=default_bucket)\n", "role = get_execution_role()\n", "client = boto3.client(\"sagemaker-runtime\")\n", "s3_bucket = sagemaker_session.default_bucket()\n", "print(f\"Will use S3 bucket '{s3_bucket}' for storing all resources related to this notebook\")\n", "print(f\"Using Role '{role}'\")\n", "\n", "## NOTE: Make sure the above IAM Role has the SageMakerFullAccess permission\n", "\n", "proc = subprocess.Popen('cat /opt/ml/metadata/resource-metadata.json', shell=True, stdout=subprocess.PIPE)\n", "studio_user_profile_output = json.loads(proc.communicate()[0].decode('utf-8'))['UserProfileName']  # retrieve current Studio User Profile name\n", "studio_user_profile_output" ] }, { "cell_type": "code", "execution_count": null, "id": "fdf302aa", "metadata": { "tags": [] }, "outputs": [], "source": [ "account_id_map = {\n", "    \"us-east-1\": \"785573368785\",\n", "    \"us-east-2\": \"007439368137\",\n", "    \"us-west-1\": \"710691900526\",\n", "    \"us-west-2\": \"301217895009\",\n", "    \"eu-west-1\": \"802834080501\",\n", "    \"eu-west-2\": \"205493899709\",\n", "    \"eu-west-3\": \"254080097072\",\n", "    \"eu-north-1\": \"601324751636\",\n", "    \"eu-south-1\": \"966458181534\",\n", "    \"eu-central-1\": \"746233611703\",\n", "    \"ap-east-1\": \"110948597952\",\n", "    \"ap-south-1\": \"763008648453\",\n", "    \"ap-northeast-1\": \"941853720454\",\n", "    \"ap-northeast-2\": \"151534178276\",\n", "    \"ap-southeast-1\": \"324986816169\",\n", "    \"ap-southeast-2\": \"355873309152\",\n", "    \"cn-northwest-1\": \"474822919863\",\n", "    \"cn-north-1\": \"472730292857\",\n", "    \"sa-east-1\": \"756306329178\",\n", "    \"ca-central-1\": \"464438896020\",\n", "    \"me-south-1\": \"836785723513\",\n", "    \"af-south-1\": \"774647643957\",\n", "}\n", "\n", "region = boto3.Session().region_name\n", "if region not in account_id_map.keys():\n", "    raise ValueError(\"UNSUPPORTED REGION\")\n", "\n", "base = \"amazonaws.com.cn\" if region.startswith(\"cn-\") else \"amazonaws.com\"\n", "\n", "triton_image_uri = (\n", "    \"{account_id}.dkr.ecr.{region}.{base}/sagemaker-tritonserver:22.10-py3\".format(\n", "        account_id=account_id_map[region], region=region, base=base\n", "    )\n", ")\n", "triton_image_uri" ] }, { "cell_type": "markdown", "id": "f1a51913", "metadata": {}, "source": [ "## Set up pre-processing with Triton Python Backend" ] }, { "cell_type": "markdown", "id": "e7603be6", "metadata": {}, "source": [ "We will be using Triton's [Python Backend](https://github.com/triton-inference-server/python_backend) to perform tabular data preprocessing (categorical encoding) at inference time for raw data requests coming into the server. To see the preprocessing that was done during training, feel free to take a look at the training notebook [here](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-triton/fil_ensemble/1_prep_rapids_train_xgb.ipynb).\n", "\n", "The Python backend enables pre-processing, post-processing, and any other custom logic to be implemented in Python and served with Triton." ] }, { "cell_type": "markdown", "id": "75524a53", "metadata": {}, "source": [ "Using Triton on SageMaker requires us to first set up a model repository folder containing the models we want to serve.\n",
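"\n",
"For reference, a Triton Python backend model is a `model.py` file that implements a `TritonPythonModel` class with `initialize`, `execute`, and optionally `finalize` methods. Below is a minimal, hedged skeleton of such a file; the tensor names (`RAW_INPUT`, `OUTPUT`) and the pass-through logic are illustrative assumptions only, while the real preprocessing logic for this lab lives in [model.py](model_repository/preprocessing/1/model.py):\n",
"\n",
"```python\n",
"import triton_python_backend_utils as pb_utils\n",
"\n",
"\n",
"class TritonPythonModel:\n",
"    def initialize(self, args):\n",
"        # One-time setup, e.g. loading serialized label encoders from the model directory\n",
"        pass\n",
"\n",
"    def execute(self, requests):\n",
"        responses = []\n",
"        for request in requests:\n",
"            # Read an input tensor by the name declared in config.pbtxt (illustrative name)\n",
"            raw = pb_utils.get_input_tensor_by_name(request, \"RAW_INPUT\").as_numpy()\n",
"            # ... the real model.py applies categorical encoding here; this sketch passes data through ...\n",
"            out = pb_utils.Tensor(\"OUTPUT\", raw)\n",
"            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))\n",
"        return responses\n",
"\n",
"    def finalize(self):\n",
"        # Optional cleanup when the model is unloaded\n",
"        pass\n",
"```\n",
"\n",
"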
We have already set up model for python data preprocessing called `preprocessing` in the `model_repository`.\n", "\n", "\"preprocessing-model\"" ] }, { "cell_type": "markdown", "id": "02a57108", "metadata": {}, "source": [ "Now Triton has specific requirements for model repository layout. Within the top-level model repository directory each model has its own sub-directory containing the information for the corresponding model. Each model directory in Triton must have at least one numeric sub-directory representing a version of the model. Here that is `1` representing version 1 of our python preprocessing model. Each model is executed by a specific backend so within each version sub-directory there must be the model artifact required by that backend. Here, we are using the Python backend and it requires the python file you are serving to be called `model.py` and the file needs to implement [certain functions](https://github.com/triton-inference-server/python_backend#usage). If we were using a PyTorch backend a `model.pt` file would be required and so on. For more details on naming conventions for model files please see the [model files doc](https://github.com/triton-inference-server/server/blob/185253ce225a0b012e73cade5c9a948ef9e75abd/docs/model_repository.md#model-files).\n", "\n", "\n", "[Our model.py](model_repository/preprocessing/1/model.py) python file we are using here implements all the tabular data preprocessing logic to convert raw data into features that can be fed into our XGBoost model.\n", "\n", "Every Triton model must also provide a `config.pbtxt` file describing the model configuration. To learn more about the config settings please see [model configuration](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md) doc. Our `config.pbtxt` specifies the backend as `python` and specifies all the input columns for raw data along with preprocessed output that consists of 15 features. We also specify we want to run this python preprocessing model on the CPU." ] }, { "cell_type": "markdown", "id": "ebc20c3a", "metadata": { "tags": [] }, "source": [ "### Create Conda Env for Preprocessing Dependencies" ] }, { "cell_type": "markdown", "id": "344af7ad", "metadata": {}, "source": [ "The Python backend in Triton requires us to use conda environment for any additional dependencies. In this case we are using the Python backend to do preprocessing of the raw data before feeding it into the XGBoost model being run in FIL Backend. Even though we originally used RAPIDS cuDF and cuML to do the data preprocessing here we use Pandas and Scikit-learn as preprocessing dependencies for inference time. We do this for three reasons. \n", "* Firstly, to show how to create conda environment for your dependencies and how to package it in [format expected](https://github.com/triton-inference-server/python_backend#2-packaging-the-conda-environment) by Triton's Python backend. \n", "* Secondly, by showing the preprocessing model running in Python backend on the CPU while the XGBoost runs on the GPU in FIL Backend we illustrate how each model in Triton's ensemble pipeline can run on different framework backend as well as different hardware configurations\n", "* Thirdly, it highlights how the RAPIDS libraries (cuDF, cuML) are compatible with their CPU counterparts (Pandas, Scikit-learn). 
For example this way we get to show how LabelEncoders created in cuML can be used in Scikit-learn and vice-versa" ] }, { "cell_type": "markdown", "id": "2f4cb3bf", "metadata": {}, "source": [ "We follow the instructions from the [Triton documentation](https://github.com/triton-inference-server/python_backend#2-packaging-the-conda-environment) for packaging preprocessing dependencies (scikit-learn and pandas) to be used in the python backend as conda env tar file. The bash script [create_prep_env.sh](./create_prep_env.sh) creates the conda environment tar file and then we move it into the preprocessing model directory." ] }, { "cell_type": "code", "execution_count": null, "id": "9eec3f14", "metadata": { "tags": [] }, "outputs": [], "source": [ "!bash create_prep_env.sh\n", "time.sleep(5)\n", "!cp preprocessing_env.tar.gz model_cpu_repository/preprocessing/" ] }, { "cell_type": "code", "execution_count": null, "id": "26ca0fd1", "metadata": { "tags": [] }, "outputs": [], "source": [ "time.sleep(5)\n", "!cp preprocessing_env.tar.gz model_gpu_repository/preprocessinggpu/" ] }, { "cell_type": "markdown", "id": "2f8ac698", "metadata": {}, "source": [ "After creating the tar file from the conda environment and placing it in model folder, you need to tell Python backend to use that environment for your model. We do this by including the lines below in the model `config.pbtxt` file:" ] }, { "cell_type": "markdown", "id": "a5c07c3b", "metadata": {}, "source": [ "```\n", "parameters: {\n", " key: \"EXECUTION_ENV_PATH\",\n", " value: {string_value: \"$$TRITON_MODEL_DIRECTORY/preprocessing_env.tar.gz\"}\n", "}\n", "```" ] }, { "cell_type": "markdown", "id": "5d254654", "metadata": {}, "source": [ "Here, `$$TRITON_MODEL_DIRECTORY` helps provide environment path relative to the model folder in model repository and is resolved to `$pwd/model_repository/preprocessing`. Finally `preprocessing_env.tar.gz` is the name we gave to our conda env file. " ] }, { "cell_type": "markdown", "id": "66f0f891", "metadata": {}, "source": [ "### Set up Label Encoders" ] }, { "cell_type": "markdown", "id": "94a9f875", "metadata": {}, "source": [ "We also move the label encoders we had serialized earlier into `preprocessing` model folder so that we can use them to encode raw data categorical features at inference time." ] }, { "cell_type": "code", "execution_count": null, "id": "82804c83", "metadata": { "tags": [] }, "outputs": [], "source": [ "!cp label_encoders.pkl model_cpu_repository/preprocessing/1/\n", "!cp label_encoders.pkl model_gpu_repository/preprocessinggpu/1/" ] }, { "cell_type": "markdown", "id": "e990d5a1", "metadata": {}, "source": [ "## Set up Tree-based ML Model for FIL Backend" ] }, { "cell_type": "markdown", "id": "08c5e323", "metadata": {}, "source": [ "Next, we set up the model directory for tree-based ML model like XGBoost which will be using FIL Backend.\n", "\n", "The expected layout for model directory is similar to the one we showed above:\n", "\n", "\"fil-model\"" ] }, { "cell_type": "markdown", "id": "26f4447a", "metadata": {}, "source": [ "Here, `fil` is the name of the model. We can give it a different name like xgboost if we want to. `1` is the version sub-directory which contains the model artifact, in this case it's the `xgboost.json` model that we saved at the end of [first notebook](1_prep_rapids_train_xgb.ipynb). Let's create this expected layout." 
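] }, { "cell_type": "markdown", "id": "prep-filgpu-note", "metadata": {}, "source": [ "Before copying the model file in the next cell, we also make sure the GPU repository's version directory `model_gpu_repository/filgpu/1` exists. This is a defensive step based on the assumption that the lab materials may not ship that empty directory; `mkdir -p` is a no-op if it is already there." ] }, { "cell_type": "code", "execution_count": null, "id": "prep-filgpu-mkdir", "metadata": { "tags": [] }, "outputs": [], "source": [ "# create model version directory for the GPU FIL model (harmless if it already exists)\n", "!mkdir -p model_gpu_repository/filgpu/1"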
] }, { "cell_type": "code", "execution_count": null, "id": "528a563c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# move saved xgboost model into fil model directory\n", "!mkdir -p model_cpu_repository/fil/1\n", "!cp xgboost.json model_cpu_repository/fil/1/\n", "!cp xgboost.json model_gpu_repository/filgpu/1/" ] }, { "cell_type": "markdown", "id": "7d33eb0b", "metadata": {}, "source": [ "And then finally we need to have configuration file `config.pbtxt` describing the model configuration for tree-based ML model so that FIL Backend in Triton can understand how to serve it." ] }, { "cell_type": "markdown", "id": "f72c3539", "metadata": {}, "source": [ "### Create Config File for FIL Backend Model" ] }, { "cell_type": "markdown", "id": "50f7a5db", "metadata": {}, "source": [ "You can read about all generic Triton configuration options [here](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md) and about configuration options specific to the FIL backend [here](https://github.com/triton-inference-server/fil_backend#configuration), but we will focus on just a few of the most common and relevant options in this example. Below are general descriptions of these options:\n", "\n", "* **max_batch_size:** The maximum batch size that can be passed to this model. In general, the only limit on the size of batches passed to a FIL backend is the memory available with which to process them. \n", "* **input:** Options in this section tell Triton the number of features to expect for each input sample.\n", "* **output:** Options in this section tell Triton how many output values there will be for each sample. If the \"predict_proba\" option (described further on) is set to true, then a probability value will be returned for each class. Otherwise, a single value will be returned indicating the class predicted for the given sample.\n", "* **instance_group:** This determines how many instances of this model will be created and whether they will use the GPU or CPU.\n", "* **model_type:** A string indicating what format the model is in (\"xgboost_json\" in this example, but \"xgboost\", \"lightgbm\", and \"tl_checkpoint\" are valid formats as well).\n", "* **predict_proba:** If set to true, probability values will be returned for each class rather than just a class prediction.\n", "* **output_class:** True for classification models, false for regression models.\n", "* **threshold:** A score threshold for determining classification. When output_class is set to true, this must be provided, although it will not be used if predict_proba is also set to true.\n", "* **storage_type:** In general, using \"AUTO\" for this setting should meet most usecases. If \"AUTO\" storage is selected, FIL will load the model using either a sparse or dense representation based on the approximate size of the model. In some cases, you may want to explicitly set this to \"SPARSE\" in order to reduce the memory footprint of large models.\n", "\n", "Here we have 15 input features and 2 classes (FRAUD, NOT FRAUD) that we are doing classification for in our XGBoost Model. Based on this information, let's set up FIL Backend configuration file for our tree-based model for serving on GPU." ] }, { "cell_type": "code", "execution_count": null, "id": "22ada6c0", "metadata": { "tags": [] }, "outputs": [], "source": [ "USE_GPU = False\n", "FIL_MODEL_DIR = \"./model_cpu_repository/fil\"\n", "\n", "# Maximum size in bytes for input and output arrays. 
If you are\n", "# using Triton 21.11 or higher, all memory allocations will make\n", "# use of Triton's memory pool, which has a default size of\n", "# 67_108_864 bytes\n", "MAX_MEMORY_BYTES = 60_000_000\n", "NUM_FEATURES = 15\n", "NUM_CLASSES = 2\n", "bytes_per_sample = (NUM_FEATURES + NUM_CLASSES) * 4\n", "max_batch_size = MAX_MEMORY_BYTES // bytes_per_sample\n", "\n", "IS_CLASSIFIER = True\n", "model_format = \"xgboost_json\"\n", "\n", "# Select deployment hardware (GPU or CPU)\n", "if USE_GPU:\n", " instance_kind = \"KIND_GPU\"\n", "else:\n", " instance_kind = \"KIND_CPU\"\n", "\n", "# whether the model is doing classification or regression\n", "if IS_CLASSIFIER:\n", " classifier_string = \"true\"\n", "else:\n", " classifier_string = \"false\"\n", "\n", "# whether to predict probabilites or not\n", "predict_proba = False\n", "\n", "if predict_proba:\n", " predict_proba_string = \"true\"\n", "else:\n", " predict_proba_string = \"false\"\n", "\n", "config_text = f\"\"\"backend: \"fil\"\n", "max_batch_size: {max_batch_size}\n", "input [ \n", " {{ \n", " name: \"input__0\"\n", " data_type: TYPE_FP32\n", " dims: [ {NUM_FEATURES} ] \n", " }} \n", "]\n", "output [\n", " {{\n", " name: \"output__0\"\n", " data_type: TYPE_FP32\n", " dims: [ 1 ]\n", " }}\n", "]\n", "instance_group [{{ kind: {instance_kind} }}]\n", "parameters [\n", " {{\n", " key: \"model_type\"\n", " value: {{ string_value: \"{model_format}\" }}\n", " }},\n", " {{\n", " key: \"predict_proba\"\n", " value: {{ string_value: \"{predict_proba_string}\" }}\n", " }},\n", " {{\n", " key: \"output_class\"\n", " value: {{ string_value: \"{classifier_string}\" }}\n", " }},\n", " {{\n", " key: \"threshold\"\n", " value: {{ string_value: \"0.5\" }}\n", " }},\n", " {{\n", " key: \"storage_type\"\n", " value: {{ string_value: \"AUTO\" }}\n", " }},\n", " {{\n", " key: \"use_experimental_optimizations\"\n", " value: {{ string_value: \"true\" }}\n", " }}\n", "]\n", "\n", "dynamic_batching {{}}\"\"\"\n", "\n", "config_path = os.path.join(FIL_MODEL_DIR, \"config.pbtxt\")\n", "with open(config_path, \"w\") as file_:\n", " file_.write(config_text)" ] }, { "cell_type": "code", "execution_count": null, "id": "9e6ac2e6", "metadata": { "tags": [] }, "outputs": [], "source": [ "USE_GPU = True\n", "FIL_MODEL_DIR = \"./model_gpu_repository/filgpu\"\n", "\n", "# Maximum size in bytes for input and output arrays. 
If you are\n", "# using Triton 21.11 or higher, all memory allocations will make\n", "# use of Triton's memory pool, which has a default size of\n", "# 67_108_864 bytes\n", "MAX_MEMORY_BYTES = 60_000_000\n", "NUM_FEATURES = 15\n", "NUM_CLASSES = 2\n", "bytes_per_sample = (NUM_FEATURES + NUM_CLASSES) * 4\n", "max_batch_size = MAX_MEMORY_BYTES // bytes_per_sample\n", "\n", "IS_CLASSIFIER = True\n", "model_format = \"xgboost_json\"\n", "\n", "# Select deployment hardware (GPU or CPU)\n", "if USE_GPU:\n", " instance_kind = \"KIND_GPU\"\n", "else:\n", " instance_kind = \"KIND_CPU\"\n", "\n", "# whether the model is doing classification or regression\n", "if IS_CLASSIFIER:\n", " classifier_string = \"true\"\n", "else:\n", " classifier_string = \"false\"\n", "\n", "# whether to predict probabilites or not\n", "predict_proba = False\n", "\n", "if predict_proba:\n", " predict_proba_string = \"true\"\n", "else:\n", " predict_proba_string = \"false\"\n", "\n", "config_text = f\"\"\"backend: \"fil\"\n", "max_batch_size: {max_batch_size}\n", "input [ \n", " {{ \n", " name: \"input__0\"\n", " data_type: TYPE_FP32\n", " dims: [ {NUM_FEATURES} ] \n", " }} \n", "]\n", "output [\n", " {{\n", " name: \"output__0\"\n", " data_type: TYPE_FP32\n", " dims: [ 1 ]\n", " }}\n", "]\n", "instance_group [{{ kind: {instance_kind} }}]\n", "parameters [\n", " {{\n", " key: \"model_type\"\n", " value: {{ string_value: \"{model_format}\" }}\n", " }},\n", " {{\n", " key: \"predict_proba\"\n", " value: {{ string_value: \"{predict_proba_string}\" }}\n", " }},\n", " {{\n", " key: \"output_class\"\n", " value: {{ string_value: \"{classifier_string}\" }}\n", " }},\n", " {{\n", " key: \"threshold\"\n", " value: {{ string_value: \"0.5\" }}\n", " }},\n", " {{\n", " key: \"storage_type\"\n", " value: {{ string_value: \"AUTO\" }}\n", " }}\n", "]\n", "\n", "dynamic_batching {{}}\"\"\"\n", "\n", "config_path = os.path.join(FIL_MODEL_DIR, \"config.pbtxt\")\n", "with open(config_path, \"w\") as file_:\n", " file_.write(config_text)" ] }, { "cell_type": "markdown", "id": "424f615c", "metadata": { "tags": [] }, "source": [ "## Set up Inference Pipeline of Data Preprocessing Python Backend and FIL Backend using Ensemble" ] }, { "cell_type": "markdown", "id": "71d6fe22", "metadata": {}, "source": [ "Now we are ready to set up the inference pipeline for data preprocessing and tree-based model inference using an [ensemble model](https://github.com/triton-inference-server/server/blob/main/docs/architecture.md#ensemble-models). An ensemble model represents a pipeline of one or more models and the connection of input and output tensors between those models. Here we use the ensemble model to build a pipeline of Data Preprocessing in Python backend followed by XGBoost in FIL Backend. 
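To make the dataflow concrete, below is a minimal, hedged sketch of what the ensemble model's `config.pbtxt` can look like. The raw-input names shown (`Time`, `Amount`) and the intermediate tensor name `preprocessed_data` are illustrative, and the preprocessing model's output tensor name (`OUTPUT` here) is an assumption; `input__0`/`output__0` come from the FIL model configuration above, and `predictions` is the ensemble output name used later in this notebook. The actual file packaged for this lab declares every raw-data column as an input.\n",
"\n",
"```\n",
"name: \"ensemble\"\n",
"platform: \"ensemble\"\n",
"max_batch_size: 882352  # illustrative; matches the FIL model limit computed above\n",
"input [\n",
"  { name: \"Time\",   data_type: TYPE_STRING, dims: [ 1 ] },\n",
"  { name: \"Amount\", data_type: TYPE_STRING, dims: [ 1 ] }\n",
"  # ... one entry per raw-data column ...\n",
"]\n",
"output [\n",
"  { name: \"predictions\", data_type: TYPE_FP32, dims: [ 1 ] }\n",
"]\n",
"ensemble_scheduling {\n",
"  step [\n",
"    {\n",
"      model_name: \"preprocessing\"\n",
"      model_version: -1\n",
"      input_map  { key: \"Time\",   value: \"Time\" }\n",
"      input_map  { key: \"Amount\", value: \"Amount\" }\n",
"      # ... map every raw input ...\n",
"      output_map { key: \"OUTPUT\", value: \"preprocessed_data\" }\n",
"    },\n",
"    {\n",
"      model_name: \"fil\"\n",
"      model_version: -1\n",
"      input_map  { key: \"input__0\",  value: \"preprocessed_data\" }\n",
"      output_map { key: \"output__0\", value: \"predictions\" }\n",
"    }\n",
"  ]\n",
"}\n",
"```\n"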
" ] }, { "cell_type": "markdown", "id": "68786e5f", "metadata": {}, "source": [ "The expected layout for `ensemble` model directory is similar to the ones we showed above:\n", "\n", "\"ensemble-model\"" ] }, { "cell_type": "code", "execution_count": null, "id": "c8aa29ec", "metadata": { "tags": [] }, "outputs": [], "source": [ "# create model version directory for ensemble CPU model\n", "!mkdir -p model_cpu_repository/ensemble/1\n", "# create model version directory for ensemble GPU model\n", "!mkdir -p model_gpu_repository/ensemble/1" ] }, { "cell_type": "markdown", "id": "7bddf209", "metadata": {}, "source": [ "We created the ensemble model's [config.pbtxt](model_repository/ensemble/config.pbtxt) following the guidance on [ensemble doc](https://github.com/triton-inference-server/server/blob/main/docs/architecture.md#ensemble-models). Importantly, we need to set up the ensemble scheduler in config.pbtxt which specifies the dataflow between models within the ensemble. The ensemble scheduler collects the output tensors in each step, provides them as input tensors for other steps according to the specification." ] }, { "cell_type": "markdown", "id": "b3a61cf2", "metadata": {}, "source": [ "## Package model repository and upload to S3" ] }, { "cell_type": "markdown", "id": "499c659b", "metadata": {}, "source": [ "Finally, we end up with the following model repository directory structure, containing a Python preprocessing model and its dependencies along with XGBoost FIL model, and the model ensemble.\n", "\n", "\"model-repo\"" ] }, { "cell_type": "markdown", "id": "0fe14dbf", "metadata": {}, "source": [ "We will package this up as `model.tar.gz` for uploading it to S3." ] }, { "cell_type": "markdown", "id": "f83f8e06", "metadata": {}, "source": [ "### Create and Upload the model package for CPU-based instance (optimized for CPU)" ] }, { "cell_type": "code", "execution_count": null, "id": "c8ec0619", "metadata": { "tags": [] }, "outputs": [], "source": [ "!tar --exclude='.ipynb_checkpoints' -czvf model-cpu.tar.gz -C model_cpu_repository ." ] }, { "cell_type": "markdown", "id": "640f49c3", "metadata": {}, "source": [ "\n", "If you do not have access to the default bucket. You can upload the model tar ball to the bucket and prefix of your choice using the following code:\n", "\n", "```\n", "model_uri=\"s3:////model.tar.gz\"\n", "\n", "!aws s3 cp model.tar.gz \"$model_uri\"\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "f91fc711", "metadata": { "tags": [] }, "outputs": [], "source": [ "# This method will upload the model tar ball to the SageMaker default bucket for the account in a prefix named as the User Profile for this Studio User. \n", "\n", "model_uri_cpu = sagemaker_session.upload_data(path=\"model-cpu.tar.gz\", key_prefix=f\"{studio_user_profile_output}/lab2\")\n", "print(model_uri_cpu)\n" ] }, { "cell_type": "markdown", "id": "0b41b707", "metadata": {}, "source": [ "### Create and Upload the model package for GPU-based instance (optimized for GPU)" ] }, { "cell_type": "code", "execution_count": null, "id": "b4101998", "metadata": { "tags": [] }, "outputs": [], "source": [ "!tar --exclude='.ipynb_checkpoints' -czvf model-gpu.tar.gz -C model_gpu_repository ." 
] }, { "cell_type": "code", "execution_count": null, "id": "d3054fec", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_uri_gpu = sagemaker_session.upload_data(path=\"model-gpu.tar.gz\", key_prefix=f\"{studio_user_profile_output}/lab2\") \n", "print(model_uri_gpu)" ] }, { "cell_type": "code", "execution_count": null, "id": "03080e0a", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Set the primary path for where all the models are stored on S3 bucket\n", "\n", "model_location = f\"s3://{s3_bucket}/{studio_user_profile_output}/lab2/\"\n", "model_location" ] }, { "cell_type": "markdown", "id": "38f302f7", "metadata": {}, "source": [ "## Create SageMaker Endpoint" ] }, { "cell_type": "markdown", "id": "7fc51955", "metadata": {}, "source": [ "We start off by creating a SageMaker model from the model repository we uploaded to S3 in the previous step.\n", "\n", "In this step we also provide an additional Environment Variable `SAGEMAKER_TRITON_DEFAULT_MODEL_NAME` which specifies the name of the model to be loaded by Triton. **The value of this key should match the folder name in the model package uploaded to S3.** This variable is optional in case of a single model. In case of ensemble models, this **key has to be specified** for Triton to startup in SageMaker.\n", "\n", "Additionally, customers can set `SAGEMAKER_TRITON_BUFFER_MANAGER_THREAD_COUNT` and `SAGEMAKER_TRITON_THREAD_COUNT` for optimizing the thread counts." ] }, { "cell_type": "code", "execution_count": null, "id": "9d5c7309", "metadata": { "tags": [] }, "outputs": [], "source": [ "sm_model_name = f\"{studio_user_profile_output}-lab2-\" + time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "\n", "container = {\n", " \"Image\": triton_image_uri,\n", " \"ModelDataUrl\": model_location,\n", " \"Mode\": \"MultiModel\",\n", " \"Environment\": {\n", " # \"SAGEMAKER_TRITON_DEFAULT_MODEL_NAME\": model_uri.rsplit('/')[-2], #m_name,\n", " #\"SAGEMAKER_TRITON_LOG_VERBOSE\": \"true\", #\"200\",\n", " #\"SAGEMAKER_TRITON_SHM_DEFAULT_BYTE_SIZE\" : \"20000000\", #\"1677721600\", #\"16777216000\", \"16777216\"\n", " #\"SAGEMAKER_TRITON_SHM_GROWTH_BYTE_SIZE\": \"1048576\"\n", "\n", " }\n", "}\n", "\n", "create_model_response = sm.create_model(\n", " ModelName=sm_model_name, ExecutionRoleArn=role, PrimaryContainer=container\n", ")\n", "\n", "print(\"Model Arn: \" + create_model_response[\"ModelArn\"])" ] }, { "cell_type": "markdown", "id": "e865dfcf", "metadata": {}, "source": [ "Using the model above, we create an endpoint configuration where we can specify the type and number of instances we want in the endpoint." 
] }, { "cell_type": "code", "execution_count": null, "id": "9da29f14", "metadata": { "tags": [] }, "outputs": [], "source": [ "endpoint_config_name = f\"{studio_user_profile_output}-lab2-\" + time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "\n", "create_endpoint_config_response = sm.create_endpoint_config(\n", " EndpointConfigName=endpoint_config_name,\n", " ProductionVariants=[\n", " {\n", " \"InstanceType\": \"ml.g4dn.2xlarge\",\n", " #\"InstanceType\": \"ml.g4dn.xlarge\",\n", " #\"InstanceType\": \"ml.g4dn.4xlarge\",\n", " #\"InstanceType\": \"ml.g5.xlarge\",\n", " \"InitialVariantWeight\": 1,\n", " \"InitialInstanceCount\": 1,\n", " \"ModelName\": sm_model_name,\n", " \"VariantName\": \"AllTraffic\",\n", " }\n", " ],\n", ")\n", "\n", "print(\"Endpoint Config Arn: \" + create_endpoint_config_response[\"EndpointConfigArn\"])" ] }, { "cell_type": "markdown", "id": "37c3884c", "metadata": {}, "source": [ "Using the above endpoint configuration we create a new SageMaker endpoint and wait for the deployment to finish. The status will change to InService once the deployment is successful." ] }, { "cell_type": "code", "execution_count": null, "id": "ef3db648", "metadata": { "tags": [] }, "outputs": [], "source": [ "endpoint_name = f\"{studio_user_profile_output}-lab2-\" + time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "\n", "create_endpoint_response = sm.create_endpoint(\n", " EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name\n", ")\n", "\n", "print(\"Endpoint Arn: \" + create_endpoint_response[\"EndpointArn\"])" ] }, { "cell_type": "code", "execution_count": null, "id": "76c9cbd0", "metadata": { "tags": [] }, "outputs": [], "source": [ "waiter = sm.get_waiter(\"endpoint_in_service\")\n", "print(\"Waiting for endpoint to create...\")\n", "waiter.wait(EndpointName=endpoint_name)\n", "resp = sm.describe_endpoint(EndpointName=endpoint_name)\n", "print(f\"Endpoint Status: {resp['EndpointStatus']}\")\n", "\n", "print(\"Arn: \" + resp[\"EndpointArn\"])" ] }, { "cell_type": "markdown", "id": "e675e445", "metadata": {}, "source": [ "## Run Inference" ] }, { "cell_type": "markdown", "id": "c685372b", "metadata": {}, "source": [ "Once we have the endpoint running we can use some sample raw data to do an inference using json as the payload format. 
For the inference request format, Triton uses the KFServing community standard [inference protocols](https://github.com/triton-inference-server/server/blob/main/docs/protocol/README.md)." ] }, { "cell_type": "code", "execution_count": null, "id": "5b786ace", "metadata": { "tags": [] }, "outputs": [], "source": [ "data_infer = pd.read_csv(\"data_infer.csv\")\n", "data_infer" ] }, { "cell_type": "code", "execution_count": null, "id": "fc340714", "metadata": { "tags": [] }, "outputs": [], "source": [ "STR_COLUMNS = [\n", "    \"Time\",\n", "    \"Amount\",\n", "    \"Zip\",\n", "    \"MCC\",\n", "    \"Merchant Name\",\n", "    \"Use Chip\",\n", "    \"Merchant City\",\n", "    \"Merchant State\",\n", "    \"Errors?\",\n", "]\n", "\n", "batch_size = len(data_infer)\n", "\n", "# Build the JSON payload: one named input per dataframe column\n", "payload = {}\n", "payload[\"inputs\"] = []\n", "data_dict = {}\n", "for col_name in data_infer.columns:\n", "    data_dict[col_name] = {}\n", "    data_dict[col_name][\"name\"] = col_name\n", "    if col_name in STR_COLUMNS:\n", "        data_dict[col_name][\"data\"] = data_infer[col_name].astype(str).tolist()\n", "        data_dict[col_name][\"datatype\"] = \"BYTES\"\n", "    else:\n", "        data_dict[col_name][\"data\"] = data_infer[col_name].astype(\"float32\").tolist()\n", "        data_dict[col_name][\"datatype\"] = \"FP32\"\n", "    data_dict[col_name][\"shape\"] = [batch_size, 1]\n", "    payload[\"inputs\"].append(data_dict[col_name])" ] }, { "cell_type": "markdown", "id": "d9c00ca7", "metadata": {}, "source": [ "### Call Model A (optimized for CPU)" ] }, { "cell_type": "code", "execution_count": null, "id": "b608711b", "metadata": { "tags": [] }, "outputs": [], "source": [ "import time\n", "start = time.time()\n", "response = client.invoke_endpoint(\n", "    EndpointName=endpoint_name, ContentType=\"application/octet-stream\", Body=json.dumps(payload), TargetModel=\"model-cpu.tar.gz\"\n", ")\n", "end = time.time()\n", "print(end - start)\n", "\n", "response_body = json.loads(response[\"Body\"].read().decode(\"utf8\"))\n", "predictions = response_body[\"outputs\"][0][\"data\"]\n", "\n", "CLASS_LABELS = [\"NOT FRAUD\", \"FRAUD\"]\n", "predictions = [CLASS_LABELS[int(idx)] for idx in predictions]\n", "print(predictions)" ] }, { "cell_type": "markdown", "id": "a6bf559f", "metadata": {}, "source": [ "### Call Model B (optimized for GPU)" ] }, { "cell_type": "code", "execution_count": null, "id": "68cda79e", "metadata": { "tags": [] }, "outputs": [], "source": [ "import time\n", "start = time.time()\n", "response = client.invoke_endpoint(\n", "    EndpointName=endpoint_name, ContentType=\"application/octet-stream\", Body=json.dumps(payload), TargetModel=\"model-gpu.tar.gz\"\n", ")\n", "end = time.time()\n", "print(end - start)\n", "\n", "response_body = json.loads(response[\"Body\"].read().decode(\"utf8\"))\n", "predictions = response_body[\"outputs\"][0][\"data\"]\n", "\n", "CLASS_LABELS = [\"NOT FRAUD\", \"FRAUD\"]\n", "predictions = [CLASS_LABELS[int(idx)] for idx in predictions]\n", "print(predictions)" ] }, { "cell_type": "markdown", "id": "f75bd3cd", "metadata": {}, "source": [ "### Binary + JSON Payload" ] }, { "cell_type": "markdown", "id": "40ef910e", "metadata": {}, "source": [ "We can also use binary+json as the payload format to get better performance for the inference call.\n",
"The specification of this format is provided [here](https://github.com/triton-inference-server/server/blob/main/docs/protocol/extension_binary_data.md).\n", "\n", "**Note:** With the `binary+json` format, we have to specify the length of the request metadata in the header to allow Triton to correctly parse the binary payload. This is done using a custom Content-Type header `application/vnd.sagemaker-triton.binary+json;json-header-size={}`.\n", "\n", "Please note, this is different from using the `Inference-Header-Content-Length` header on a stand-alone Triton server, since custom headers are not allowed in SageMaker.\n", "\n", "The [tritonclient](https://github.com/triton-inference-server/client) package provides utility methods to generate the payload without having to know the details of the specification. We'll use the following methods to convert our inference request into a binary format, which provides lower latency for inference." ] }, { "cell_type": "code", "execution_count": null, "id": "2e12bcff", "metadata": { "tags": [] }, "outputs": [], "source": [ "import tritonclient.http as httpclient\n", "\n", "\n", "def get_sample_data_binary(data, output_name):\n", "    inputs = []\n", "    outputs = []\n", "    batch_size = len(data)\n", "    for col_name in data.columns:\n", "        if col_name in STR_COLUMNS:\n", "            np_data = np.expand_dims(data[col_name], axis=1).astype(\"object\")\n", "            infer_input = httpclient.InferInput(col_name, [batch_size, 1], \"BYTES\")\n", "        else:\n", "            np_data = np.expand_dims(data[col_name], axis=1).astype(\"float32\")\n", "            infer_input = httpclient.InferInput(col_name, [batch_size, 1], \"FP32\")\n", "        infer_input.set_data_from_numpy(np_data, binary_data=True)\n", "        inputs.append(infer_input)\n", "    outputs.append(httpclient.InferRequestedOutput(output_name, binary_data=True))\n", "    request_body, header_length = httpclient.InferenceServerClient.generate_request_body(\n", "        inputs, outputs=outputs\n", "    )\n", "    return request_body, header_length" ] }, { "cell_type": "markdown", "id": "17a064ba", "metadata": {}, "source": [ "### Call Model A (optimized for CPU)" ] }, { "cell_type": "code", "execution_count": null, "id": "f4f20dfd", "metadata": { "tags": [] }, "outputs": [], "source": [ "import time\n", "\n", "output_name = \"predictions\"\n", "request_body, header_length = get_sample_data_binary(data_infer, output_name)\n", "start = time.time()\n", "response = client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    ContentType=\"application/vnd.sagemaker-triton.binary+json;json-header-size={}\".format(\n", "        header_length\n", "    ),\n", "    Body=request_body,\n", "    TargetModel=\"model-cpu.tar.gz\"\n", ")\n", "end = time.time()\n", "print(end - start)\n", "\n", "# Parse json header size length from the response\n", "header_length_prefix = \"application/vnd.sagemaker-triton.binary+json;json-header-size=\"\n", "header_length_str = response[\"ContentType\"][len(header_length_prefix) :]\n", "\n", "# Read response body\n", "result = httpclient.InferenceServerClient.parse_response_body(\n", "    response[\"Body\"].read(), header_length=int(header_length_str)\n", ")\n", "predictions = result.as_numpy(output_name)\n", "CLASS_LABELS = [\"NOT FRAUD\", \"FRAUD\"]\n", "predictions = [CLASS_LABELS[int(idx)] for idx in predictions]\n", "print(predictions)" ] }, { "cell_type": "markdown", "id": "4c383a89", "metadata": {}, "source": [ "### Call Model B (optimized for GPU)" ] }, { "cell_type": "code", "execution_count": null, "id": "acc15939", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Identical to the Model A request above, except TargetModel points at the GPU-optimized package (model-gpu.tar.gz)\n", "import time\n", "\n", "output_name = \"predictions\"\n", "request_body, header_length = get_sample_data_binary(data_infer, output_name)\n", "start = time.time()\n", "response = client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    ContentType=\"application/vnd.sagemaker-triton.binary+json;json-header-size={}\".format(\n", "        header_length\n", "    ),\n", "    Body=request_body,\n", "    TargetModel=\"model-gpu.tar.gz\"\n", ")\n", "end = time.time()\n", "print(end - start)\n", "\n", "# Parse json header size length from the response\n", "header_length_prefix = \"application/vnd.sagemaker-triton.binary+json;json-header-size=\"\n", "header_length_str = response[\"ContentType\"][len(header_length_prefix) :]\n", "\n", "# Read response body\n", "result = httpclient.InferenceServerClient.parse_response_body(\n", "    response[\"Body\"].read(), header_length=int(header_length_str)\n", ")\n", "predictions = result.as_numpy(output_name)\n", "CLASS_LABELS = [\"NOT FRAUD\", \"FRAUD\"]\n", "predictions = [CLASS_LABELS[int(idx)] for idx in predictions]\n", "print(predictions)" ] }, { "cell_type": "markdown", "id": "1c49e417", "metadata": {}, "source": [ "## Terminate endpoint and clean up artifacts" ] }, { "cell_type": "code", "execution_count": null, "id": "a7da671c", "metadata": {}, "outputs": [], "source": [ "sm.delete_endpoint(EndpointName=endpoint_name)\n", "sm.delete_endpoint_config(EndpointConfigName=endpoint_config_name)\n", "sm.delete_model(ModelName=sm_model_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "755295ca", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "b0bc68f5", "metadata": {}, "source": [ "## Conclusion\n", "\n", "In this lab, we leveraged Triton Inference Server to create an ensemble that performs Python preprocessing followed by XGBoost inference, showing how fraud can be detected using Triton and its Python and FIL backends. This example can further be used as a guide to create your own ensembles, leveraging the other backends that Triton provides, to solve a wide variety of use cases that require scale and performance while using hardware acceleration." ] } ], "metadata": { "instance_type": "ml.c5.large", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "vscode": { "interpreter": { "hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e" } } }, "nbformat": 4, "nbformat_minor": 5 }