{ "cells": [ { "cell_type": "markdown", "id": "3be57d1e-17cc-4372-a0ac-0b52c984ff5b", "metadata": { "tags": [] }, "source": [ "# Register pretrained 🤗 models using SageMaker Model Registry - Deploy 🤗 Transformer models for inference\n", "***\n", "This notebooks is designed to run on `Python 3 Data Science 2.0` kernel in Amazon SageMaker Studio\n", "***\n", "\n", "In this notebook, we will use [Hugging Face Inference DLCs and Pytorch DLCs](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) to deploy two pretrained transformer models for real-time inference. You will firstly register the models to Amazon SageMaker model registry and then deploy each model to a SageMaker real-time endpoint and invoke the endpoint with the test payload. \n", "This example will use [SageMaker boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html) (low level api). You can also use [SageMaker python sdk](https://github.com/aws/sagemaker-python-sdk) to achieve the same results.\n", "\n", "We will perform following steps:\n", "1. [Introduction](#Introduction) \n", "2. [Setup](#Setup)\n", "3. [Download and register HuggingFace Transformer models](#Download-and-register-HuggingFace-Transformer-models)\n", "4. [Deploy registered models for real-time inference](#Deploy-registered-models-for-real\\-time-inference)\n" ] }, { "cell_type": "markdown", "id": "f62cd912-bb8c-4603-a88c-c850e9710ffa", "metadata": {}, "source": [ "## Introduction\n", "\n", "For inference, you can use your trained Hugging Face model or one of the pretrained Hugging Face models to deploy an inference job with SageMaker. You can also run inference jobs without having to write any custom inference code. With custom inference code, you can customize the inference logic by providing your own Python script.\n", "\n", "### How to deploy an inference job using the Hugging Face Deep Learning Containers\n", "You have two options for running inference with SageMaker. You can run inference using a model that you trained, or deploy a pre-trained Hugging Face model.\n", "\n", "* Run inference with your trained model: You have two options for running inference with your own trained model. You can run inference with a model that you trained using an existing Hugging Face model with the SageMaker Hugging Face Deep Learning Containers, or you can bring your own existing Hugging Face model and deploy it using SageMaker. When you run inference with a model that you trained with the SageMaker Hugging Face Estimator, you can deploy the model immediately after training completes or you can upload the trained model to an Amazon S3 bucket and ingest it when running inference later. If you bring your own existing Hugging Face model, you must upload the trained model to an Amazon S3 bucket and ingest that bucket when running inference.\n", "\n", "* Run inference with a pre-trained HuggingFace model: You can use one of the thousands of pre-trained Hugging Face models to run your inference jobs with no additional training needed. We will see this in our lab today." 
] }, { "cell_type": "markdown", "id": "1c4686d4-8bac-4363-8401-e144fbb394f6", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "id": "dce1fe04-755a-4b9a-bbf7-8153d04c8cfa", "metadata": { "tags": [] }, "outputs": [], "source": [ "%pip install -U transformers ipywidgets sagemaker torch -q" ] }, { "cell_type": "code", "execution_count": null, "id": "37d04ec9-7dc6-4dc9-89fd-2d0a5a76b0b1", "metadata": {}, "outputs": [], "source": [ "import datetime\n", "import json\n", "import os\n", "import shutil\n", "import sys\n", "import tarfile\n", "import time\n", "from pathlib import Path\n", "from uuid import uuid4\n", "\n", "import boto3\n", "import numpy as np\n", "import pandas as pd\n", "import sagemaker\n", "import torch\n", "from sagemaker import get_execution_role, image_uris\n", "from sagemaker.huggingface import HuggingFaceModel\n", "from sagemaker.s3 import S3Uploader, s3_path_join\n", "from transformers import AutoModel, AutoModelForSequenceClassification, AutoTokenizer, pipeline\n", "\n", "p = os.path.abspath(\"..\")\n", "if p not in sys.path:\n", " sys.path.append(p)\n", "import utils" ] }, { "cell_type": "markdown", "id": "e1b9bb79-90a3-42e3-a985-8a96ed6bd593", "metadata": {}, "source": [ "### Useful objects and variables\n", "Common objects to interact with SageMaker API" ] }, { "cell_type": "code", "execution_count": null, "id": "24d1208d-3cb4-44f5-b33f-5dcfeb20bd80", "metadata": {}, "outputs": [], "source": [ "sm_session = sagemaker.Session()\n", "role = get_execution_role()\n", "bucket = sm_session.default_bucket()\n", "region = sm_session.boto_region_name\n", "sm_client = sm_session.sagemaker_client\n", "sm_runtime = boto3.client(\"sagemaker-runtime\")\n", "prefix = \"sagemaker/huggingface-pytorch-sentiment-analysis\"\n", "deploy_instance_type = \"ml.m5.xlarge\"\n", "%store deploy_instance_type\n", "\n", "# The name of the Model Package Group in Amazon SageMaker Model Registry\n", "model_package_group_name = \"HuggingFaceModels\"\n", "%store model_package_group_name\n", "\n", "print(region)\n", "print(role)\n", "print(bucket)" ] }, { "cell_type": "markdown", "id": "f26ce817-5b87-416c-827e-59083f6bf65c", "metadata": {}, "source": [ "## Download and register HuggingFace Transformer models" ] }, { "cell_type": "code", "execution_count": null, "id": "ba4aeb09-660b-4bc3-bf86-2e47f6140dc8", "metadata": {}, "outputs": [], "source": [ "HF_TASK = \"sentiment-analysis\"\n", "%store HF_TASK" ] }, { "cell_type": "code", "execution_count": null, "id": "ee3c814c-ad2c-4e6d-925c-d8170a7d95e3", "metadata": {}, "outputs": [], "source": [ "HF_MODEL_ROBERTA = \"cardiffnlp/twitter-roberta-base-sentiment\"\n", "HF_MODEL_DISTILBERT = \"distilbert-base-uncased-finetuned-sst-2-english\"\n", "%store HF_MODEL_ROBERTA\n", "%store HF_MODEL_DISTILBERT" ] }, { "cell_type": "markdown", "id": "bfa9eddf-e8c2-43cb-91df-7cff36abe430", "metadata": {}, "source": [ "### Download Hugging Face models\n", "#### twitter-roberta-base-sentiment Pretrained Model\n", "\n", "In this example we are downloading a pre-trained HuggingFace model - `twitter-roberta-base-sentiment` from the HuggingFace library. We will use this model for classifying the text as `Labels: 0 -> Negative; 1 -> Neutral; 2 -> Positive`." 
] }, { "cell_type": "code", "execution_count": null, "id": "da904172-415d-43d0-8ec0-2aa5a80d13ac", "metadata": {}, "outputs": [], "source": [ "MODEL = \"cardiffnlp/twitter-roberta-base-sentiment\"\n", "model = AutoModelForSequenceClassification.from_pretrained(HF_MODEL_ROBERTA)\n", "tokenizer = AutoTokenizer.from_pretrained(HF_MODEL_ROBERTA)\n", "model.save_pretrained(\"model_token_roberta\")\n", "tokenizer.save_pretrained(\"model_token_roberta\")" ] }, { "cell_type": "markdown", "id": "0f17723a-c086-497f-9de2-3ea2d7988c7d", "metadata": {}, "source": [ "### Package the saved model to tar.gz format\n", "Once the model is downloaded, we need to package (tokenizer and model weights) it to `.tar.gz` format as expected by Amazon SageMaker." ] }, { "cell_type": "code", "execution_count": null, "id": "0005e688-115d-4396-a1a5-21c4e5812490", "metadata": {}, "outputs": [], "source": [ "tar_file_roberta = \"model_roberta.tar.gz\"\n", "tar_size = utils.create_tar(tar_file_roberta, Path(\"model_token_roberta\"))\n", "print(f\"Created {tar_file_roberta}, size {tar_size:.2f} MB\")" ] }, { "cell_type": "markdown", "id": "a9c77181-15c2-4405-8856-d2eb6d558e13", "metadata": {}, "source": [ "#### Download distilbert-base-uncased-finetuned-sst-2-english by initiating a `Huggingface pipeline`\n", "\n", "The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the [task summary](https://huggingface.co/transformers/task_summary.html) for examples of use." ] }, { "cell_type": "code", "execution_count": null, "id": "aea2d7db-fc43-45b6-a35f-ae332f6a31b4", "metadata": {}, "outputs": [], "source": [ "local_artifact_path = Path(\"model_token_distilbert\")\n", "local_artifact_path.mkdir(exist_ok=True, parents=True)\n", "tar_file_distilbert = \"model_distilbert.tar.gz\"" ] }, { "cell_type": "code", "execution_count": null, "id": "554e21c7-4ca4-4add-b0f6-ebb8f0d478c0", "metadata": {}, "outputs": [], "source": [ "sentiment_analysis = pipeline(HF_TASK, model=HF_MODEL_DISTILBERT)\n", "sentiment_analysis.save_pretrained(local_artifact_path)" ] }, { "cell_type": "markdown", "id": "05508b7f-175b-4e20-b63c-21a3f37d1897", "metadata": {}, "source": [ "#### Write the Inference Script\n", "\n", "To deploy a pretrained `PyTorch` model, you'll need to use the `PyTorch` estimator object to create a `PyTorchModel` object and set a different `entry_point`.\n", "\n", "You'll use the `PyTorchModel` object to deploy a `PyTorchPredictor`. This creates a `SageMaker` Endpoint -- a hosted prediction service that we can use to perform inference.\n", "\n", "An implementation of `model_fn` is required for inference script. 
Any handlers it does not override fall back to the default implementations of `input_fn`, `predict_fn`, and `output_fn` defined in [sagemaker-pytorch-containers](https://github.com/aws/sagemaker-pytorch-containers).\n", "\n", "Here's an example of the inference script:" ] }, { "cell_type": "code", "execution_count": null, "id": "bf892b86-c441-46d0-9c5f-a9c5d1a4a1c4", "metadata": {}, "outputs": [], "source": [ "#!cat ../code/inference.py # uncomment this line of code to see the details in the py file" ] }, { "cell_type": "code", "execution_count": null, "id": "1bd78e9c-6008-4730-9e6b-d9f6a20ba4e0", "metadata": {}, "outputs": [], "source": [ "# !cat ../code/requirements.txt # uncomment this line to show the packages defined in the requirements.txt" ] }, { "cell_type": "markdown", "id": "5a24a55d-d647-46a4-910a-71b171569a3b", "metadata": {}, "source": [ "#### Create the directory structure for your model files\n", "\n", "The directory structure where you saved your PyTorch model should look something like the following:\n", "\n", "```\n", "| model\n", "| |--pytorch_model.bin\n", "| |--config.json\n", "| |--vocab.txt\n", "| |--tokenizer.json\n", "| |--tokenizer_config.json\n", "| |--special_tokens_map.json\n", "|\n", "| code\n", "| |--inference.py\n", "| |--requirements.txt\n", "```\n", "\n", "Where `requirements.txt` is an optional file that specifies dependencies on third-party libraries." ] }, { "cell_type": "markdown", "id": "cb3cdb41-2325-4898-8fc8-1fc472c01903", "metadata": {}, "source": [ "#### Copy code to the model directory and tar the model and code" ] }, { "cell_type": "code", "execution_count": null, "id": "1aaf3816-f047-4c22-9997-635eb6b6d609", "metadata": {}, "outputs": [], "source": [ "shutil.copytree(\"../code\", \"model_token_distilbert/code\", dirs_exist_ok=True)\n", "tar_size = utils.create_tar(tar_file_distilbert, local_artifact_path)\n", "print(f\"Created {tar_file_distilbert}, size {tar_size:.2f} MB\")" ] }, { "cell_type": "markdown", "id": "de519492-6383-435d-9df9-2063813f8577", "metadata": {}, "source": [ "#### Upload the model to S3\n", "\n", "We now have the model archives ready. We need to upload them to S3 before we can use them for hosting." ] }, { "cell_type": "code", "execution_count": null, "id": "30ddfc7e-f738-4a6e-8c63-ed2d5116e525", "metadata": {}, "outputs": [], "source": [ "model_data_path = s3_path_join(\"s3://\", bucket, prefix + \"/models\")\n", "print(f\"Uploading Models to {model_data_path}\")\n", "model_roberta_uri = S3Uploader.upload(\"model_roberta.tar.gz\", model_data_path)\n", "print(f\"Uploaded roberta model to {model_roberta_uri}\")\n", "model_distilbert_uri = S3Uploader.upload(\"model_distilbert.tar.gz\", model_data_path)\n", "print(f\"Uploaded distilbert model to {model_distilbert_uri}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "fb0655ff-03f0-4f83-a04b-362f2543c6e5", "metadata": {}, "outputs": [], "source": [ "%store model_data_path\n", "%store model_roberta_uri\n", "%store model_distilbert_uri" ] }, { "cell_type": "markdown", "id": "e937c1c2-44ce-4103-bd5f-1fa1945e032c", "metadata": {}, "source": [ "### Register the models in the SageMaker Model Registry\n", "To manage the models going forward, it is recommended to register them in the Model Registry. 
We use `boto3` to register each model together with parameters that are required for future use:\n", "- Domain\n", "- Task\n", "- Framework\n", "- FrameworkVersion" ] }, { "cell_type": "code", "execution_count": null, "id": "b563fe34-86b5-4018-b237-62e434da9983", "metadata": {}, "outputs": [], "source": [ "# # uncomment the cell to list the domain, framework, task,\n", "# # and model name of standard machine learning models found in common model zoos.\n", "# df = utils.list_model_metadata_df()\n", "\n", "# display(df.sort_values(by=[\"Domain\", \"Task\", \"Framework\", \"FrameworkVersion\"]))" ] }, { "cell_type": "markdown", "id": "e97d6d2d-cbfb-4467-8bb0-1dd373d59261", "metadata": {}, "source": [ "In this example, as we are performing sentiment analysis with Hugging Face BERT-family models, we select `NATURAL_LANGUAGE_PROCESSING` as the Domain, `OTHER` as the Task, `PYTORCH` as the Framework, and `bert-base-uncased` as the nearest model." ] }, { "cell_type": "code", "execution_count": null, "id": "e87202b1-6bb8-409a-8665-b57c7593659a", "metadata": {}, "outputs": [], "source": [ "ml_domain = \"NATURAL_LANGUAGE_PROCESSING\"\n", "ml_task = \"OTHER\"\n", "ml_framework = \"PYTORCH\"\n", "framework_version = \"1.10.2\"\n", "nearest_model = \"bert-base-uncased\"" ] }, { "cell_type": "markdown", "id": "0be890f1-c53c-4587-a1ff-d81392f2f1a6", "metadata": {}, "source": [ "#### Prebuilt HuggingFace DLC\n", "You can choose to use a prebuilt HuggingFace DLC as the inference image, which includes the [SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) for serving 🤗 Transformers models on Amazon SageMaker. The inference toolkit leverages the `pipeline` feature of the Transformers library to allow zero-code deployment of models, without requiring any code for pre- or post-processing (see the default [handler service](https://github.com/aws/sagemaker-huggingface-inference-toolkit/blob/main/src/sagemaker_huggingface_inference_toolkit/handler_service.py) provided by the inference toolkit for more information).\n", "\n", "In addition to zero-code deployment, the Inference Toolkit supports a \"bring your own code\" mode, in which you can override the default handlers. You can learn more about \"bring your own code\" in the documentation [here](https://github.com/aws/sagemaker-huggingface-inference-toolkit#-user-defined-codemodules). In the second lab section, we will use the bring your own code method to deploy models."
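, "\n", "With zero-code deployment, the toolkit builds a Transformers `pipeline` from the `HF_TASK` environment variable and expects JSON payloads in the pipeline's input format. A sketch of the typical request/response shape (illustrative values, not lab output):\n", "\n", "```python\n", "request = {\"inputs\": [\"I love this!\", \"This is terrible.\"]}\n", "# typical sentiment-analysis response:\n", "# [{\"label\": \"POSITIVE\", \"score\": 0.999}, {\"label\": \"NEGATIVE\", \"score\": 0.998}]\n", "```\n"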
] }, { "cell_type": "code", "execution_count": null, "id": "62b99233-b9ef-48a5-bfcb-d013367c9f5c", "metadata": {}, "outputs": [], "source": [ "framework = \"huggingface\"\n", "transformer_version = \"4.17.0\"\n", "py_version = \"py38\"\n", "instance_type = \"ml.g\"\n", "image_scope = \"inference\"\n", "\n", "inference_image_roberta = image_uris.retrieve(\n", " framework=framework,\n", " base_framework_version=ml_framework.lower() + framework_version,\n", " region=region,\n", " version=transformer_version,\n", " py_version=py_version,\n", " instance_type=instance_type,\n", " image_scope=image_scope,\n", ")\n", "\n", "print(inference_image_roberta)" ] }, { "cell_type": "code", "execution_count": null, "id": "5a4d9533-ab7c-413d-94fb-1d290b537a1a", "metadata": {}, "outputs": [], "source": [ "inference_image_hf_mme = image_uris.retrieve(\n", " framework=framework,\n", " base_framework_version=ml_framework.lower() + framework_version,\n", " region=region,\n", " version=transformer_version,\n", " py_version=py_version,\n", " instance_type=\"ml.c\",\n", " image_scope=image_scope,\n", ")\n", "\n", "print(inference_image_hf_mme)\n", "%store inference_image_hf_mme" ] }, { "cell_type": "markdown", "id": "0c39d157-3ab3-4e7a-8ee0-a2228d94563f", "metadata": {}, "source": [ "#### Prebuilt Pytorch DLC\n", "You can also use a SageMaker prebuilt [Pytorch DLC](https://github.com/aws/deep-learning-containers/tree/master/pytorch) to deploy the huggingface model. In this case, as the prebuilt Pytorch container doesn't have the transformer package, we have provided a `requirements.txt` file with the additional packages that are required to be installed to the container in the model package. See section [Create the directory structure for your model files](#Create-the-directory-structure-for-your-model-files). We also included the `inference.py` file to define the necessary functions for model loading and model serving." 
] }, { "cell_type": "code", "execution_count": null, "id": "1be3bb16-d720-478e-9f3d-659f23d1014d", "metadata": {}, "outputs": [], "source": [ "inference_image_distilbert = image_uris.retrieve(\n", " framework=ml_framework.lower(),\n", " region=region,\n", " version=framework_version,\n", " py_version=py_version,\n", " instance_type=instance_type,\n", " image_scope=image_scope,\n", ")\n", "\n", "print(inference_image_distilbert)" ] }, { "cell_type": "markdown", "id": "80d6bc78-cd3b-42e2-aff5-850259b1d346", "metadata": {}, "source": [ "#### Create model package group and model packages" ] }, { "cell_type": "code", "execution_count": null, "id": "59f74cb9-537a-4ead-9651-da0836ad067d", "metadata": {}, "outputs": [], "source": [ "try:\n", " sm_client.describe_model_package_group(ModelPackageGroupName=model_package_group_name)\n", "except:\n", " model_pacakge_group_response = sm_client.create_model_package_group(\n", " ModelPackageGroupName=model_package_group_name,\n", " ModelPackageGroupDescription=\"My sample HuggingFace PyTorch model package group\",\n", " )\n", " print(model_pacakge_group_response)" ] }, { "cell_type": "code", "execution_count": null, "id": "6fe08748-c9da-4765-92d5-882d50153630", "metadata": {}, "outputs": [], "source": [ "roberta_model_package_response = sm_client.create_model_package(\n", " ModelPackageGroupName=str(model_package_group_name),\n", " ModelPackageDescription=f\"Hugging Face Roberta Model - sentiment analysis\",\n", " Domain=ml_domain,\n", " Task=ml_task,\n", " InferenceSpecification={\n", " \"Containers\": [\n", " {\n", " \"ContainerHostname\": \"huggingface-pytorch-roberta\",\n", " \"Image\": inference_image_roberta,\n", " \"ModelDataUrl\": model_roberta_uri,\n", " \"Framework\": ml_framework,\n", " \"NearestModelName\": nearest_model,\n", " \"Environment\": {\n", " \"SAGEMAKER_CONTAINER_LOG_LEVEL\": \"20\",\n", " \"SAGEMAKER_REGION\": region,\n", " \"SAGEMAKER_SUBMIT_DIRECTORY\": model_roberta_uri,\n", " \"HF_TASK\": HF_TASK,\n", " },\n", " },\n", " ],\n", " # \"SupportedRealtimeInferenceInstanceTypes\": [\n", " # \"ml.c5.large\",\n", " # \"ml.c5.xlarge\",\n", " # \"ml.c5.2xlarge\",\n", " # \"ml.m5.xlarge\",\n", " # \"ml.m5.2xlarge\",\n", " # ],\n", " \"SupportedContentTypes\": [\"application/json\"],\n", " \"SupportedResponseMIMETypes\": [\"application/json\"],\n", " },\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "c04cbe3f-4706-49e7-a9aa-0efa2fbd60cd", "metadata": {}, "outputs": [], "source": [ "distilbert_model_package_response = sm_client.create_model_package(\n", " ModelPackageGroupName=str(model_package_group_name),\n", " ModelPackageDescription=f\"Hugging Face Distilbert Model - sentiment analysis\",\n", " Domain=ml_domain,\n", " Task=ml_task,\n", " InferenceSpecification={\n", " \"Containers\": [\n", " {\n", " \"ContainerHostname\": \"huggingface-pytorch-distilbert\",\n", " \"Image\": inference_image_distilbert,\n", " \"ModelDataUrl\": model_distilbert_uri,\n", " \"Framework\": ml_framework,\n", " \"NearestModelName\": nearest_model,\n", " \"Environment\": {\n", " \"SAGEMAKER_CONTAINER_LOG_LEVEL\": \"20\",\n", " \"SAGEMAKER_PROGRAM\": \"inference.py\",\n", " \"SAGEMAKER_REGION\": region,\n", " \"SAGEMAKER_SUBMIT_DIRECTORY\": model_distilbert_uri,\n", " \"HF_TASK\": HF_TASK,\n", " },\n", " },\n", " ],\n", " # \"SupportedRealtimeInferenceInstanceTypes\": [\n", " # \"ml.c5.large\",\n", " # \"ml.c5.xlarge\",\n", " # \"ml.c5.2xlarge\",\n", " # \"ml.m5.xlarge\",\n", " # \"ml.m5.2xlarge\",\n", " # ],\n", " \"SupportedContentTypes\": 
[\"application/json\"],\n", " \"SupportedResponseMIMETypes\": [\"application/json\"],\n", " },\n", ")" ] }, { "cell_type": "markdown", "id": "48683d64-cfee-43c0-87bd-1be7ccb4fa4c", "metadata": {}, "source": [ "## Deploy registered models for real-time inference\n", "\n", "Next we will create a SageMaker real-time endpoint for each of the registered model version." ] }, { "cell_type": "code", "execution_count": null, "id": "e322a0c1-10f2-425a-9497-29de75d61dc5", "metadata": {}, "outputs": [], "source": [ "roberta_model_package_arn = roberta_model_package_response[\"ModelPackageArn\"]\n", "print(f\"ModelPackage Version ARN : {roberta_model_package_arn}\")\n", "%store roberta_model_package_arn" ] }, { "cell_type": "code", "execution_count": null, "id": "36a6fc4e-33d7-48c7-8665-cdb2e3994dce", "metadata": {}, "outputs": [], "source": [ "distilbert_model_package_arn = distilbert_model_package_response[\"ModelPackageArn\"]\n", "print(f\"ModelPackage Version ARN : {distilbert_model_package_arn}\")\n", "%store distilbert_model_package_arn" ] }, { "cell_type": "markdown", "id": "b2ec8d35-dc0c-4010-b917-1af4d750a405", "metadata": {}, "source": [ "### View Model Groups and Versions\n", "\n", "You can view details of a specific model version by using either the AWS SDK for Python (Boto3) or by using Amazon SageMaker Studio.\n", "To view the details of a model version by using Boto3, Call the `list_model_packages` method to view the model versions in a model group" ] }, { "cell_type": "code", "execution_count": null, "id": "bf0a8e69-73f2-4db6-9827-edb27f966571", "metadata": {}, "outputs": [], "source": [ "list_model_packages_response = sm_client.list_model_packages(\n", " ModelPackageGroupName=model_package_group_name\n", ")\n", "list_model_packages_response" ] }, { "cell_type": "code", "execution_count": null, "id": "3578df45-c68c-47e2-83b1-984fc1d86083", "metadata": {}, "outputs": [], "source": [ "roberta_model_version_arn = list_model_packages_response[\"ModelPackageSummaryList\"][1][\n", " \"ModelPackageArn\"\n", "]\n", "print(\"roberta model: {}\".format(roberta_model_version_arn))\n", "distilbert_model_version_arn = list_model_packages_response[\"ModelPackageSummaryList\"][0][\n", " \"ModelPackageArn\"\n", "]\n", "print(\"distilbert model: {}\".format(distilbert_model_version_arn))" ] }, { "cell_type": "markdown", "id": "4e317dd6-3426-45dc-beee-2d09e6ce1031", "metadata": {}, "source": [ "### View Model Version Details\n", "\n", "Call `describe_model_package` to see the details of the model version. You pass in the ARN of a model version that you got in the output of the call to list_model_packages." ] }, { "cell_type": "code", "execution_count": null, "id": "0b5d7590-ff56-40d5-b5ba-18702efc4492", "metadata": {}, "outputs": [], "source": [ "sm_client.describe_model_package(ModelPackageName=roberta_model_version_arn)" ] }, { "cell_type": "markdown", "id": "e6a73b89-f25e-4093-b3e2-14664b916ea3", "metadata": {}, "source": [ "### Update Model Approval Status\n", "\n", "After you create a model version, you typically want to evaluate its performance before you deploy it to a production endpoint. If it performs to your requirements, you can update the approval status of the model version to `Approved`. Setting the status to `Approved` can initiate CI/CD deployment for the model. If the model version does not perform to your requirements, you can update the approval status to `Rejected`." 
] }, { "cell_type": "code", "execution_count": null, "id": "d2b3cb1b-cd3e-4d99-a4f0-3f8a8552ac3d", "metadata": {}, "outputs": [], "source": [ "model_package_update_input_dict = {\n", " \"ModelPackageArn\": roberta_model_package_arn,\n", " \"ModelApprovalStatus\": \"Approved\",\n", "}\n", "model_package_update_response1 = sm_client.update_model_package(**model_package_update_input_dict)\n", "model_package_update_response1" ] }, { "cell_type": "code", "execution_count": null, "id": "b3edbee9-c1ef-4ad7-9904-e89d38986dfb", "metadata": {}, "outputs": [], "source": [ "model_package_update_input_dict = {\n", " \"ModelPackageArn\": distilbert_model_package_arn,\n", " \"ModelApprovalStatus\": \"Approved\",\n", "}\n", "model_package_update_response2 = sm_client.update_model_package(**model_package_update_input_dict)\n", "model_package_update_response2" ] }, { "cell_type": "markdown", "id": "3c0310bb-f857-469d-9cfe-f93510a23681", "metadata": {}, "source": [ "### Deploy the Roberta Model from the Model Registry\n", "\n", "After you register a model version and approve it for deployment, deploy it to a SageMaker endpoint for real-time inference.\n", "\n", "When you create a `MLOps` project and choose a `MLOps` project template that includes model deployment, approved model versions in the model registry are automatically deployed to production. For information about using SageMaker `MLOps` projects, see [Automate `MLOps` with SageMaker Projects](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects.html).\n", "\n", "To deploy a model version using the AWS SDK for Python (Boto3) we'll create a model object from the model version by calling the [create_model](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model) method. Pass the Amazon Resource Name (ARN) of the model version as part of the Containers for the model object." ] }, { "cell_type": "code", "execution_count": null, "id": "d8fa5867-43a2-4c24-82b0-6e3fc6127080", "metadata": {}, "outputs": [], "source": [ "# provide the consistent time stamp for model, endpoint config and endpoint\n", "now_roberta = f\"{datetime.datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "now_roberta" ] }, { "cell_type": "code", "execution_count": null, "id": "db6b9c57-0c01-4814-b975-3e3fccd15ee1", "metadata": {}, "outputs": [], "source": [ "roberta_model_name = f\"hf-pytorch-model-roberta-{now_roberta}\"\n", "print(\"Model name : {}\".format(roberta_model_name))\n", "%store roberta_model_name" ] }, { "cell_type": "code", "execution_count": null, "id": "e54d20ac-48bd-417f-850c-05cf1b0f34ec", "metadata": {}, "outputs": [], "source": [ "primary_container_roberta = {\n", " \"ModelPackageName\": roberta_model_version_arn,\n", "}\n", "\n", "create_model_roberta_respose = sm_client.create_model(\n", " ModelName=roberta_model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container_roberta\n", ")\n", "\n", "print(\"Model arn : {}\".format(create_model_roberta_respose[\"ModelArn\"]))" ] }, { "cell_type": "markdown", "id": "bf118eff-24e8-4cde-ad4b-b60eef81160c", "metadata": {}, "source": [ "### Create an Endpoint Config from the model\n", "\n", "This will create an endpoint configuration that Amazon SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the `CreateModel` API, to deploy and the resources that you want Amazon SageMaker to provision. 
Then you call the `CreateEndpoint` API.\n", "\n", "More info on `create_endpoint_config` can be found on the [Boto3 SageMaker documentation page](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_endpoint_config)." ] }, { "cell_type": "code", "execution_count": null, "id": "2c45a08c-0840-4928-94a8-97055a485803", "metadata": {}, "outputs": [], "source": [ "deploy_instance_type = \"ml.m5.xlarge\"\n", "roberta_endpoint_config_name = f\"hf-pytorch-endpoint-config-roberta-{now_roberta}\"\n", "roberta_endpoint_config_response = sm_client.create_endpoint_config(\n", " EndpointConfigName=roberta_endpoint_config_name,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": \"AllTrafficVariant\",\n", " \"ModelName\": roberta_model_name,\n", " \"InitialInstanceCount\": 1,\n", " \"InstanceType\": deploy_instance_type,\n", " \"InitialVariantWeight\": 1,\n", " },\n", " ],\n", ")\n", "\n", "roberta_endpoint_config_response" ] }, { "cell_type": "markdown", "id": "4d84b722-55e8-4963-ae9f-07827c2ac878", "metadata": {}, "source": [ "### Deploy the Endpoint Config to a real-time endpoint\n", "\n", "This will create an endpoint using the endpoint configuration specified in the request. Amazon SageMaker uses the endpoint to provision resources and deploy models. Note that you have already created the endpoint configuration with the `CreateEndpointConfig` API in the previous step.\n", "\n", "More info on `create_endpoint` can be found on the [Boto3 SageMaker documentation page](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_endpoint)." ] }, { "cell_type": "code", "execution_count": null, "id": "b14b68c0-23ee-4571-9f60-0bbd14aa0e12", "metadata": {}, "outputs": [], "source": [ "roberta_endpoint_name = f\"hf-pytorch-endpoint-roberta-{now_roberta}\"\n", "roberta_create_endpoint_response = sm_client.create_endpoint(\n", " EndpointName=roberta_endpoint_name,\n", " EndpointConfigName=roberta_endpoint_config_name,\n", ")\n", "\n", "roberta_create_endpoint_response" ] }, { "cell_type": "markdown", "id": "81bda614-2f85-4ec0-a5a9-826d3927054e", "metadata": {}, "source": [ "### Wait for Endpoint to be ready" ] }, { "cell_type": "code", "execution_count": null, "id": "fc409884-a0e7-46ba-a7c3-96e7f25241eb", "metadata": {}, "outputs": [], "source": [ "%%time\n", "utils.endpoint_creation_wait(roberta_endpoint_name)" ] }, { "cell_type": "markdown", "id": "26c3cc05-c963-463f-a4ec-187ede033bea", "metadata": {}, "source": [ "### Invoke Endpoint with `boto3`\n", "\n", "After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint.\n", "\n", "For an overview of Amazon SageMaker, see [How It Works](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works.html).\n", "\n", "Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.\n", "\n", "Calls to `InvokeEndpoint` are authenticated by using AWS Signature Version 4. For information, see Authenticating Requests (AWS Signature Version 4) in the Amazon S3 API Reference.\n", "\n", "Your model container must respond to requests within 60 seconds, and the model itself can take at most 60 seconds of processing time before responding to invocations. If your model will take 50-60 seconds of processing time, set the SDK socket timeout to 70 seconds.\n",
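"\n", "You can raise the client-side read timeout when creating the runtime client. A minimal sketch (the `sm_runtime_long` client here is illustrative and not used elsewhere in this lab):\n", "\n", "```python\n", "from botocore.config import Config\n", "\n", "# allow up to 70 s for a response and avoid automatic retries piling up\n", "sm_runtime_long = boto3.client(\n", "    \"sagemaker-runtime\",\n", "    config=Config(read_timeout=70, retries={\"max_attempts\": 0}),\n", ")\n", "```\n",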
"\n", "More info on `invoke_endpoint` can be found on the [Boto3 `SageMakerRuntime` documentation page](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime.html#SageMakerRuntime.Client.invoke_endpoint)." ] }, { "cell_type": "code", "execution_count": null, "id": "b3a961d7-77ab-471f-b6c0-c1c0faf3198f", "metadata": {}, "outputs": [], "source": [ "test_data = pd.read_csv(\"../sample_payload/test_data.csv\", header=None, names=[\"inputs\"])\n", "# build a JSON payload from the CSV rows\n", "json_data = {\"inputs\": test_data.iloc[:, 0].to_list()}\n", "print(json_data)\n", "test_data.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "b6366091-5040-4dd3-9bda-c9c371dd311a", "metadata": {}, "outputs": [], "source": [ "%%time\n", "response = sm_runtime.invoke_endpoint(\n", " EndpointName=roberta_endpoint_name,\n", " # the HF toolkit's CSV decoder expects a header row naming the \"inputs\" column\n", " Body=test_data.to_csv(header=True, index=False),\n", " ContentType=\"text/csv\",\n", ")\n", "\n", "print(response[\"Body\"].read())" ] }, { "cell_type": "code", "execution_count": null, "id": "ed98116a-decb-48bb-a22b-b0a2e9f88d5f", "metadata": {}, "outputs": [], "source": [ "%%time\n", "response = sm_runtime.invoke_endpoint(\n", " EndpointName=roberta_endpoint_name,\n", " Body=json.dumps(json_data),\n", " ContentType=\"application/json\",\n", ")\n", "\n", "print(response[\"Body\"].read())" ] }, { "cell_type": "markdown", "id": "b9858345-0a8a-441f-b152-81f2632206e5", "metadata": {}, "source": [ "### Deploy the distilbert model to an endpoint\n", "\n", "We will follow similar steps to deploy the registered Distilbert model to a real-time endpoint for inference." ] }, { "cell_type": "code", "execution_count": null, "id": "5702fcae-dded-435c-b805-b446f2bd6028", "metadata": {}, "outputs": [], "source": [ "now_distilbert = f\"{datetime.datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "now_distilbert" ] }, { "cell_type": "code", "execution_count": null, "id": "75a05c72-96b8-481a-89b0-fd1dab77d13b", "metadata": {}, "outputs": [], "source": [ "distilbert_model_name = f\"hf-pytorch-model-distilbert-{now_distilbert}\"\n", "print(\"Model name : {}\".format(distilbert_model_name))\n", "%store distilbert_model_name\n", "\n", "primary_container = {\n", " \"ModelPackageName\": distilbert_model_version_arn,\n", "}\n", "\n", "create_model_response = sm_client.create_model(\n", " ModelName=distilbert_model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container\n", ")\n", "\n", "print(\"Model arn : {}\".format(create_model_response[\"ModelArn\"]))" ] }, { "cell_type": "code", "execution_count": null, "id": "b7735940-6131-4480-9873-3465fa158d67", "metadata": {}, "outputs": [], "source": [ "distilbert_endpoint_config_name = f\"hf-pytorch-endpoint-config-distilbert-{now_distilbert}\"\n", "\n", "distilbert_endpoint_config_response = sm_client.create_endpoint_config(\n", " EndpointConfigName=distilbert_endpoint_config_name,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": \"AllTrafficVariant\",\n", " \"ModelName\": distilbert_model_name,\n", " \"InitialInstanceCount\": 1,\n", " \"InstanceType\": deploy_instance_type,\n", " \"InitialVariantWeight\": 1,\n", " },\n", " ],\n", ")\n", "\n", "distilbert_endpoint_config_response" ] }, { "cell_type": "code", "execution_count": null, "id": "d639a519-f59a-4974-9503-961cfcf1c160", "metadata": {}, "outputs": [], "source": [ "%%time\n", "distilbert_endpoint_name = f\"hf-pytorch-endpoint-distilbert-{now_distilbert}\"\n", "\n",
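"# create_endpoint is asynchronous; the helper below waits for the endpoint to become available\n",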
"distilbert_create_endpoint_response = sm_client.create_endpoint(\n", " EndpointName=distilbert_endpoint_name,\n", " EndpointConfigName=distilbert_endpoint_config_name,\n", ")\n", "utils.endpoint_creation_wait(distilbert_endpoint_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "8193bcc9-63ba-4fc7-90ec-0729a6a30edc", "metadata": {}, "outputs": [], "source": [ "%%time\n", "response = sm_runtime.invoke_endpoint(\n", " EndpointName=distilbert_endpoint_name,\n", " Body=test_data.to_csv(header=False, index=False),\n", " ContentType=\"text/csv\",\n", ")\n", "\n", "print(response[\"Body\"].read())" ] }, { "cell_type": "code", "execution_count": null, "id": "b45f3448-0cc5-49e0-8d0c-862410f4be75", "metadata": {}, "outputs": [], "source": [ "%%time\n", "response = sm_runtime.invoke_endpoint(\n", " EndpointName=distilbert_endpoint_name,\n", " Body=json.dumps(json_data),\n", " ContentType=\"application/json\",\n", ")\n", "\n", "print(response[\"Body\"].read())" ] }, { "cell_type": "markdown", "id": "3370dca3-7b83-423c-b2c6-b1ef9dab30c3", "metadata": {}, "source": [ "## Delete the endpoint (Optional)\n", "\n", "If you do not plan to use this endpoint further, you should delete the endpoint to avoid incurring additional charges." ] }, { "cell_type": "code", "execution_count": null, "id": "e357519b-9583-470d-b517-5f3d9aa462e9", "metadata": {}, "outputs": [], "source": [ "sm_session.delete_endpoint(roberta_endpoint_name)\n", "sm_session.delete_endpoint(distilbert_endpoint_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "f2b5e850-fe22-49f6-9b31-171eb8579496", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 2.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 5 }