{ "cells": [ { "cell_type": "markdown", "id": "5c1c08a9-6046-41da-baf5-9c9ea48f5fdd", "metadata": {}, "source": [ "# Classifying news headlines (SageMaker Version)\n", "\n", "> This notebook works well with the `Python 3 (PyTorch 1.13 Python 3.9 CPU Optimized)` kernel on SageMaker Studio\n", "\n", "In this example, you'll train a news headline classifier model using a custom script and the [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) framework.\n", "\n", "This \"SageMaker\" notebook will demonstrate training the model on an Amazon SageMaker Training Job, and deploying it to a managed real-time inference endpoint.\n", "\n", "> ⚠️ We assume you've already run the companion [\"Headline Classifier Local\" notebook](Headline%20Classifier%20Local.ipynb), which demonstrates how you'd run training and inference here on the notebook itself." ] }, { "cell_type": "markdown", "id": "57514090-30b2-46c3-bee6-cb013b2038cb", "metadata": {}, "source": [ "## Installation and setup\n", "\n", "As in the local notebook, we'll make sure that the widgets library is set up before starting out.\n", "\n", "🟢 But **Unlike** the local notebook, note that we **do not need to install HF Transformers**: Because the actual training and inference will be happening in containerized jobs and not in this kernel.\n", "\n", "> ℹ️ (In fact, when you start multiple SageMaker Studio notebooks on the same kernel image and same instance, they share an environment: So assuming you ran the local notebook and selected the same kernel, everything is installed already)." ] }, { "cell_type": "code", "execution_count": null, "id": "d2cc7de2-59b4-49a7-8823-6953e2531cea", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "%pip install \"ipywidgets<8\" \"sagemaker>2.140,<3\"" ] }, { "cell_type": "markdown", "id": "d83e5d2d-d062-468f-96dc-d51018cc2450", "metadata": {}, "source": [ "With installs complete, we'll load the libraries and Python built-ins to be used in the rest of the notebook.\n", "\n", "The [%autoreload magic](https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html) is useful when working with local .py files, because re-loading libraries on each cell execution lets you consume locally edited/updated scripts without having to restart your notebook kernel.\n", "\n", "🟢 This time, we'll be using some **AWS libraries** we didn't need in the local notebook:\n", "\n", "- `boto3`, the [general-purpose AWS SDK for Python](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)\n", "- `sagemaker`, the [high-level Python SDK for Amazon SageMaker](https://sagemaker.readthedocs.io/en/stable/)\n", "\n", "Both of these libraries are open-source, published on PyPI and GitHub." ] }, { "cell_type": "code", "execution_count": null, "id": "efbe751a-dce5-4dd1-b96c-35738d28fe1e", "metadata": { "tags": [] }, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "# Python Built-Ins:\n", "import os # Operating system utils e.g. 
file paths\n", "\n", "# External Dependencies:\n", "import boto3 # General AWS SDK for Python\n", "import ipywidgets as widgets # Interactive prediction widget\n", "import pandas as pd # Utilities for working with data tables (dataframes)\n", "import sagemaker # High-level Python SDK for Amazon SageMaker\n", "\n", "local_dir = \"data\"" ] }, { "cell_type": "markdown", "id": "f05b6e3e-1f50-4337-8339-fc05d71af52f", "metadata": {}, "source": [ "## Prepare and upload the dataset\n", "\n", "This example will download the **FastAi AG News** dataset from the [Registry of Open Data on AWS](https://registry.opendata.aws/fast-ai-nlp/) public repository. This dataset contains a table of news headlines and their corresponding topic classes." ] }, { "cell_type": "code", "execution_count": null, "id": "131a0145-9033-4112-a67f-5e229679229e", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "# Download the AG News data from the Registry of Open Data on AWS.\n", "!mkdir -p {local_dir}\n", "!aws s3 cp s3://fast-ai-nlp/ag_news_csv.tgz {local_dir} --no-sign-request\n", "\n", "# Un-tar the AG News data.\n", "!tar zxf {local_dir}/ag_news_csv.tgz -C {local_dir}/ --strip-components=1 --no-same-owner\n", "\n", "# Push data partitions to separate subfolders, which is useful for local script debugging later\n", "os.renames(f\"{local_dir}/test.csv\", f\"{local_dir}/test/test.csv\")\n", "os.renames(f\"{local_dir}/train.csv\", f\"{local_dir}/train/train.csv\")\n", "print(\"Done!\")" ] }, { "cell_type": "markdown", "id": "6dba225c-14bd-4941-8fa0-a68fe7f5fe6f", "metadata": {}, "source": [ "With the data downloaded and extracted, we can explore some of the examples as shown below:" ] }, { "cell_type": "code", "execution_count": null, "id": "5369b9a2-c6f8-46ce-9b98-9451817be488", "metadata": { "tags": [] }, "outputs": [], "source": [ "column_names = [\"CATEGORY\", \"TITLE\", \"CONTENT\"]\n", "# we use the train.csv only\n", "df = pd.read_csv(f\"{local_dir}/train/train.csv\", names=column_names, header=None, delimiter=\",\")\n", "# shuffle the DataFrame rows\n", "df = df.sample(frac=1, random_state=1337)\n", "\n", "# Make the (1-indexed) category classes more readable:\n", "class_names = [\"Other\", \"World\", \"Sports\", \"Business\", \"Sci/Tech\"]\n", "idx2label = {ix: name for ix, name in enumerate(class_names)}\n", "label2idx = {name: ix for ix, name in enumerate(class_names)}\n", "\n", "df = df.replace({\"CATEGORY\": idx2label})\n", "df.head()" ] }, { "cell_type": "markdown", "id": "51f56818-09f1-48fc-bd5b-21530e7ef9c5", "metadata": {}, "source": [ "For this exercise we'll **only use**:\n", "\n", "- The **title** (Headline) of the news story, as our input\n", "- The **category**, as our target variable to predict\n", "\n", "This dataset contains 4 evenly distributed topic classes, as shown below.\n", "\n", "> ℹ️ **What about 'Other'?** Since the raw dataset represents categories with a number from 1-4, and our model will expect numbers starting from 0, we've inserted the unused 'Other' class to keep data preparation simple and avoid introducing an extra, confusing numeric representation of the classes." 
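, "\n", "For example, the mapping dictionaries built in the previous code cell convert between the two representations (a quick sanity check):\n", "\n", "```python\n", "label2idx[\"Sports\"]  # -> 2 (matching the raw dataset's numeric code for this class)\n", "idx2label[4]  # -> \"Sci/Tech\"\n", "```"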
] }, { "cell_type": "code", "execution_count": null, "id": "1a70b377-ee49-4780-8b32-57db4ee149ac", "metadata": { "tags": [] }, "outputs": [], "source": [ "df[\"CATEGORY\"].value_counts()" ] }, { "cell_type": "markdown", "id": "8db84ebf-e768-4d32-ad34-fc627cfe9597", "metadata": {}, "source": [ "So far, nothing new...\n", "\n", "🟢 The key difference for training on SageMaker is that we'll need to **upload our datasets** [somewhere the training job will have access to them](https://docs.aws.amazon.com/sagemaker/latest/dg/model-access-training-data.html).\n", "\n", "Here we'll upload the data to [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) using the [SageMaker default bucket](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-ex-bucket.html). You can customize the bucket and folder prefix, if you'd like. It will be helpful to keep training and test data in separate S3 folders, rather than putting both files in the same folder." ] }, { "cell_type": "code", "execution_count": null, "id": "0ec896d2-e210-4987-aac5-d21d968816cf", "metadata": { "tags": [] }, "outputs": [], "source": [ "bucket_name = sagemaker.Session().default_bucket()\n", "s3_prefix = \"sm101/news\"\n", "\n", "s3 = boto3.resource(\"s3\")\n", "\n", "s3.Bucket(bucket_name).upload_file(f\"{local_dir}/train/train.csv\", f\"{s3_prefix}/train/train.csv\")\n", "train_s3_uri = f\"s3://{bucket_name}/{s3_prefix}/train\"\n", "print(f\"train_s3_uri: {train_s3_uri}\")\n", "\n", "s3.Bucket(bucket_name).upload_file(f\"{local_dir}/test/test.csv\", f\"{s3_prefix}/test/test.csv\")\n", "test_s3_uri = f\"s3://{bucket_name}/{s3_prefix}/test\"\n", "print(f\"test_s3_uri: {test_s3_uri}\")" ] }, { "cell_type": "markdown", "id": "a628acab-0b0b-4b77-9800-8dc154c55249", "metadata": { "tags": [] }, "source": [ "## Define training parameters\n", "\n", "We'll be fine-tuning a (relatively small) pre-trained model from the [Hugging Face Hub](https://huggingface.co/models), and using their high-level [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer) rather than writing a low-level training loop from scratch.\n", "\n", "🟢 Our training script will ultimately use similar parameters as before, but this time we'll be passing them **through the training job API**.\n", "\n", "We'll define **JSON-serializable parameters** here in the notebook, and then use those to build the `transformers.TrainingArguments` later:" ] }, { "cell_type": "code", "execution_count": null, "id": "21191118-ecc3-4b42-bf84-76882582ce31", "metadata": { "tags": [] }, "outputs": [], "source": [ "hyperparameters = {\n", " \"model_id\": \"amazon/bort\", # ID of the pre-trained model to start from\n", " \"class_names\": \",\".join(class_names), # Comma-separated list of category names\n", " \"num_train_epochs\": 3, # This time, we'll actually train for a full 3 epochs\n", " \"per_device_train_batch_size\": 32, # Note this is higher than we could set on local hardware\n", " \"per_device_eval_batch_size\": 64, # Note this is higher than we could set on local hardware\n", " \"warmup_steps\": 500, # Higher than we could set with the reduced local training\n", "}\n", "hyperparameters" ] }, { "cell_type": "markdown", "id": "60a37d97-07f7-4985-8b6a-19aa751d1ab4", "metadata": {}, "source": [ "## Define metrics\n", "\n", "We'd like to define how to measure the quality of our trained model, and make this information visible to SageMaker to enable features like metric logging, automatic model tuning and leaderboards.\n", "\n", "🟢 
We'll have our training code print metrics as usual, and [use regular expressions](https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html#define-train-metrics) to define how SageMaker should scrape structured metrics from the job logs:" ] }, { "cell_type": "code", "execution_count": null, "id": "935497f0-1b44-450e-a725-8b65509231a0", "metadata": { "tags": [] }, "outputs": [], "source": [ "metric_definitions = [\n", " {\"Name\": \"Epoch\", \"Regex\": r\"'epoch': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Train:Loss\", \"Regex\": r\"'loss': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Train:LearningRate\", \"Regex\": r\"'learning_rate': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:Loss\", \"Regex\": r\"'eval_loss': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:Accuracy\", \"Regex\": r\"'eval_accuracy': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:F1\", \"Regex\": r\"'eval_f1': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:Precision\", \"Regex\": r\"'eval_precision': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:Recall\", \"Regex\": r\"'eval_recall': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:Runtime\", \"Regex\": r\"'eval_runtime': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:SamplesPerSecond\", \"Regex\": r\"'eval_samples_per_second': ([0-9\\.\\-e]+)\"},\n", " {\"Name\": \"Validation:StepsPerSecond\", \"Regex\": r\"'eval_steps_per_second': ([0-9\\.\\-e]+)\"},\n", "]\n", "metric_definitions" ] }, { "cell_type": "markdown", "id": "d8a2df25-93e0-4f15-89de-844b291d6862", "metadata": { "tags": [] }, "source": [ "## Train and validate the model on SageMaker\n", "\n", "This time, we'll create a [SageMaker training job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html) to run our training process on a separate instance from the notebook itself: Allowing us to right-size temporary training infrastructure independently from the long-lived notebook environment.\n", "\n", "🟢 We've factored the actual training code out of the notebook into **[scripts/train.py](scripts/train.py)**, and will use the pre-built [Hugging Face Framework Container through the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) to train and deploy the model from this script." ] }, { "cell_type": "markdown", "id": "b70294b2-ee08-4c90-a7f7-d88ed52e07cd", "metadata": { "tags": [] }, "source": [ "### How Amazon SageMaker runs your script with pre-built containers\n", "\n", "AWS provides a pre-packaged set of Docker images to help you accelerate building your projects on major ML frameworks: The [SageMaker Framework Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers-prebuilt.html).\n", "\n", "These containers take care of basic setup like GPU drivers, serving stack implementation, core libraries, and so on - leaving us free to simply inject some Python scripts for the training process and any inference behaviour overrides. We can even provide a *requirements.txt* file to specify additional dependencies to be dynamically installed at start-up - without having to build these into the container image.\n", "\n", "**As a result, our first task is to understand the interfaces** between our script(s) and the runtime: How will the script read input data? Parameters? Where should it store results?" 
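, "\n", "In short, the answer is: Through the filesystem, CLI arguments, and environment variables. Below is a simplified sketch of the typical pattern (see [scripts/train.py](scripts/train.py) for the full version used here) - hyperparameters arrive as command-line arguments, while the container sets standard `SM_*` environment variables pointing to the data and model directories:\n", "\n", "```python\n", "import argparse\n", "import os\n", "\n", "parser = argparse.ArgumentParser()\n", "# Hyperparameters are passed through as CLI arguments (serialized as strings, so declare types):\n", "parser.add_argument(\"--num_train_epochs\", type=int, default=3)\n", "# Data channels and the model output folder default to the standard container locations:\n", "parser.add_argument(\"--train\", type=str, default=os.environ.get(\"SM_CHANNEL_TRAIN\"))\n", "parser.add_argument(\"--test\", type=str, default=os.environ.get(\"SM_CHANNEL_TEST\"))\n", "parser.add_argument(\"--model_dir\", type=str, default=os.environ.get(\"SM_MODEL_DIR\"))\n", "args = parser.parse_args()\n", "```\n", "\n", "The sections below describe these interfaces in more detail:"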
] }, { "cell_type": "markdown", "id": "870a6b41-a844-42e2-b3a7-d07252a394f1", "metadata": { "tags": [] }, "source": [ "#### Your container during training\n", "\n", "When the training job container is started, your **code and input data** are downloaded to **local files** under the `/opt/ml` directory. You'll also **save your trained model** and any other file outputs to the local filesystem - as shown below:\n", "\n", "```\n", " /opt/ml\n", " |-- code\n", " |   `-- <script files>\n", " |-- input\n", " |   |-- config\n", " |   |   |-- hyperparameters.json\n", " |   |   `-- resourceConfig.json\n", " |   `-- data\n", " |       `-- <channel_name>\n", " |           `-- <input data>\n", " |-- model\n", " |   `-- <model files>\n", " `-- output\n", "     `-- failure\n", "```\n", "\n", "##### The input\n", "\n", "* `/opt/ml/input/config` contains information to control how your program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names to values. These values will always be strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training. Since we're training on a single instance in this example, we can ignore it here.\n", "* `/opt/ml/input/data/<channel_name>/` (for File mode) contains the input data for that channel. The channels are created based on the call to CreateTrainingJob, so it's important that the channels you configure match what the algorithm expects. The files for each channel will be copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure.\n", "* `/opt/ml/input/data/<channel_name>_<epoch_number>` (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch.\n", "\n", "##### The output\n", "\n", "* `/opt/ml/model/` is the directory where you write the model that your algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker will package any files in this directory into a compressed tar archive file. This file will be available at the S3 location returned in the `DescribeTrainingJob` result.\n", "* `/opt/ml/output` is a directory where the algorithm can write a file `failure` that describes why the job failed. The contents of this file will be returned in the `FailureReason` field of the `DescribeTrainingJob` result. For jobs that succeed, there is no reason to write this file as it will be ignored." ] }, { "cell_type": "markdown", "id": "591f290d-7eb9-4acf-b9d4-5df4a06e3f57", "metadata": { "tags": [] }, "source": [ "#### Further information\n", "\n", "For more information, you can refer to:\n", "\n", "- The [SageMaker Python SDK guide for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) and [API doc](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) for HF framework classes. (The equivalent pages for PyTorch may also be useful).\n", "- The [AWS Deep Learning Containers repository](https://github.com/aws/deep-learning-containers) on GitHub, which defines the underlying container images.\n", "- The open source [SageMaker Training Toolkit](https://github.com/aws/sagemaker-training-toolkit) and [SageMaker Inference Toolkit](https://github.com/aws/sagemaker-inference-toolkit) for more details on the framework code for training and serving. (Some frameworks use variants on these toolkits, e.g. 
the [sagemaker-pytorch-training-toolkit](https://github.com/aws/sagemaker-pytorch-training-toolkit))" ] }, { "cell_type": "markdown", "id": "c25455f6-b3a6-450a-a7be-b5fa8f94f79d", "metadata": {}, "source": [ "### (Optional) Testing your script\n", "\n", "> ℹ️ **Note:** This step is optional because in this example, the training script has already been built and tested for you!\n", "\n", "Although the job script [train.py](scripts/train.py) follows mainly the same logic and process as the notebook version, it would of course be good to **test** the adaptations we made to prepare it for SageMaker.\n", "\n", "For initial functional testing and debugging of your script, you may not want to spin up a full SageMaker training job each time: Because of the short delay while each new job spins up its on-demand compute resources.\n", "\n", "There are multiple ways you can speed up this process. We'd usually recommend [SageMaker Warm Pools](https://docs.aws.amazon.com/sagemaker/latest/dg/train-warm-pools.html) or [SageMaker Local Mode](https://sagemaker.readthedocs.io/en/stable/overview.html?#local-mode), but these aren't available in the standard workshop environment. Instead, you can **simulate a training job within the notebook** by invoking your training script through the CLI.\n", "\n", "You can un-comment (with `Ctrl`+`/`) and run the cell below to try this - ⚠️ but watch out: It's quite memory-intensive, so you'll want to shut down or restart the kernel from the previous [Headline Classifier Local notebook](Headline%20Classifier%20Local.ipynb) first." ] }, { "cell_type": "code", "execution_count": null, "id": "f14873a8-9a90-41a7-b5fc-9a88041a577d", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# class_names_str = \",\".join(class_names) # Comma-separated list for CLI\n", "# !python3 scripts/train.py \\\n", "# --train data/train \\\n", "# --test data/test \\\n", "# --output_data_dir data/local-output \\\n", "# --model_dir data/local-model \\\n", "# --model_id=amazon/bort --class_names={class_names_str} --train_max_steps=20 \\\n", "# --train_batch_size=8 --eval_batch_size=16 --fp16=0" ] }, { "cell_type": "markdown", "id": "e2d648ba-8214-423e-926d-1417d56e413c", "metadata": {}, "source": [ "### Creating the job\n", "\n", "The actual [SageMaker CreateTrainingJob API](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrainingJob.html) requires several low-level details that the [high-level 'Estimator' classes](https://sagemaker.readthedocs.io/en/stable/overview.html) in the SageMaker Python SDK help to simplify. In particular:\n", "\n", "- Instead of specifying the exact container image URI, the SDK will look this up for us based on the selected framework and version(s) (you can preview this lookup with the snippet below)\n", "- The SDK will transparently compress and upload our `scripts` bundle to S3, and configure the training job to load it from there.\n", "\n", "So first, we'll create an `estimator` object configuring the job and what infrastructure (how many compute instances and what type) it should run on:\n", "\n", "> ℹ️ Like other services that run jobs on your behalf, the training job will assume an [IAM role](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) to allow it to access resources, like your input training data on S3. Since SageMaker notebooks themselves already run with an assumed role, we'll set the training job role the same as the notebook role for simplicity."
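, "\n", "💡 If you're curious which container image the SDK will look up, you can preview it with the [sagemaker.image_uris](https://sagemaker.readthedocs.io/en/stable/api/utility/image_uris.html) module - a quick sketch (the exact version strings accepted may vary between SDK releases):\n", "\n", "```python\n", "from sagemaker import image_uris\n", "\n", "# Preview the pre-built HF training image our Estimator config should resolve (illustrative only):\n", "image_uris.retrieve(\n", "    framework=\"huggingface\",\n", "    region=boto3.Session().region_name,\n", "    version=\"4.26\",  # Transformers version\n", "    py_version=\"py39\",\n", "    base_framework_version=\"pytorch1.13\",\n", "    image_scope=\"training\",\n", "    instance_type=\"ml.p3.2xlarge\",\n", ")\n", "```"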
] }, { "cell_type": "code", "execution_count": null, "id": "58f6dbe0-6c13-4199-b219-5fe4e4543d03", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.huggingface.estimator import HuggingFace as HuggingFaceEstimator\n", "\n", "nb_role = sagemaker.get_execution_role()\n", "\n", "estimator = HuggingFaceEstimator(\n", " transformers_version=\"4.26\",\n", " pytorch_version=\"1.13\",\n", " py_version=\"py39\",\n", "\n", " source_dir=\"scripts\", # Local folder where fine-tuning script is stored\n", " entry_point=\"train.py\", # Actual script the training job should run\n", "\n", " base_job_name=\"news-classifier\", # Prefix for the training job name (timestamp will be added)\n", " instance_count=1, # Number of instances to train on (need to prepare your script for using >1!)\n", " instance_type=\"ml.p3.2xlarge\", # Type of compute instance to use: p* and g* include GPUs\n", " role=nb_role, # IAM role the job will use to access AWS resources (e.g. data on S3)\n", "\n", " hyperparameters=hyperparameters, # Training job parameters, as we set up earlier\n", " metric_definitions=metric_definitions, # RegEx to extract metric data from training job logs\n", ")" ] }, { "cell_type": "markdown", "id": "c5c804ca-7678-4277-9f74-f3851d7efa5a", "metadata": {}, "source": [ "Once the configuration is done, you can start the actual training job by running `estimator.fit()` and specifying your input data location(s).\n", "\n", "The number, names, and types of your data input \"channels\" are [up to you](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrainingJob.html#sagemaker-CreateTrainingJob-request-InputDataConfig): Just make sure your notebook configures the same channels that your script expects.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "f351c2e7-70ee-4d02-8b28-b1792073b8df", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "estimator.fit(\n", " {\n", " \"train\": train_s3_uri,\n", " \"test\": test_s3_uri,\n", " },\n", " wait=True, # Wait for the training to complete (default=True)\n", " logs=True, # Stream training job logs to the notebook (default=True, requires wait=True)\n", ")" ] }, { "cell_type": "markdown", "id": "ea7c940a-77cc-472e-8263-fe2a0c236f9b", "metadata": {}, "source": [ "> ⏰ This training job should take around 10 minutes to complete, but should reach significantly higher accuracy than the 'local' model\n", "\n", "Training itself should be much faster than the previous 'local' example, due to running on a GPU-accelerated instance rather than a small CPU-only notebook. 
However, it will likely take a couple of minutes for the job to provision the infrastructure and start up.\n", "\n", "You can also check on the status of current and past jobs in the *Training jobs* page of the [AWS Console for Amazon SageMaker](https://console.aws.amazon.com/sagemaker/home?#/jobs), and in the **Experiments** UI here in SageMaker Studio (From the 🏠 **Home** button on the left sidebar).\n", "\n", "🟢 Although the default behaviour of waiting and streaming logs gives a local-like experience, the training job doesn't depend on the notebook:\n", "\n", "- If you disconnect or shut down your notebook, the training job will still continue\n", "- A notebook could kick off multiple training jobs in parallel by setting `wait=False`\n", "- If you ever need to link a restarted notebook to an old training job, you can `.attach()` by training job name as shown below:" ] }, { "cell_type": "code", "execution_count": null, "id": "b6156d24-b630-4df3-a912-18cf1ed002f2", "metadata": {}, "outputs": [], "source": [ "# estimator = HuggingFaceEstimator.attach(\"news-hf-2020-01-01-12-00-00-000\")" ] }, { "cell_type": "markdown", "id": "cf511626-84a6-4579-9a77-39d0fc048cc9", "metadata": {}, "source": [ "Once the training job completes, the contents of the container's model output folder will be archived to S3 automatically.\n", "\n", "You can refer to this file as shown below, and also import models trained outside of SageMaker for deployment by preparing them in a similar tarball format:" ] }, { "cell_type": "code", "execution_count": null, "id": "c490cab6-7ae5-450d-b7ae-ff8156e5817d", "metadata": { "tags": [] }, "outputs": [], "source": [ "estimator.latest_training_job.describe()[\"ModelArtifacts\"][\"S3ModelArtifacts\"]" ] }, { "cell_type": "markdown", "id": "2150a348-e0c3-40c7-9d5e-14c8c6986033", "metadata": {}, "source": [ "## Use the model for inference\n", "\n", "Once the model is trained, we're ready to use it for inference on new data.\n", "\n", "SageMaker offers multiple fully-managed options for [deploying models for on-demand inference](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) or [running batch inference jobs](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html).\n", "\n", "> ℹ️ **Remember:** Choose the right inference option for your use-case - You don't need to deploy a real-time endpoint if you only want to process a batch of data!\n", ">\n", "> See [Using SageMaker Batch Transform](https://sagemaker.readthedocs.io/en/stable/overview.html#sagemaker-batch-transform) for more details on how to run batch inference through the same high-level SageMaker Python SDK we've been using so far.\n", "\n", "For this example we'll deploy the model to a [real-time inference endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html), which will allow us to classify headlines on-demand.\n", "\n", "We'll again specify what type of infrastructure we'd like to run the endpoint on, so the start-up will take a few minutes. 
Note that we can use a **different type** of instance from training - since this test endpoint will handle very little traffic, we can use smaller/cheaper infrastructure:" ] }, { "cell_type": "code", "execution_count": null, "id": "795f1525-b0f1-449a-9c17-e3b587e77a7d", "metadata": { "tags": [] }, "outputs": [], "source": [ "predictor = estimator.deploy(\n", " initial_instance_count=1,\n", " instance_type=\"ml.m5.large\",\n", ")" ] }, { "cell_type": "markdown", "id": "7322a7f0-e2c1-45ec-9a96-eb64036c8384", "metadata": {}, "source": [ "After deployment, you should be able to find your endpoint in the *Endpoints* page of the [AWS Console for Amazon SageMaker](https://console.aws.amazon.com/sagemaker/home?#/endpoints) - as well as the **Deployments > Endpoints** section of the SageMaker Studio UI (From the 🏠 **Home** button on the left sidebar).\n", "\n", "As with training jobs, endpoints are decoupled from the notebook itself. You can attach a notebook to a previously-deployed endpoint as follows:" ] }, { "cell_type": "code", "execution_count": null, "id": "1371515a-ed8a-41a2-87f8-3fdf63dcfb9b", "metadata": { "tags": [] }, "outputs": [], "source": [ "# from sagemaker.huggingface import HuggingFacePredictor\n", "# predictor = HuggingFacePredictor(\"news-classifier-2023-03-24-13-31-09-895\")" ] }, { "cell_type": "markdown", "id": "887e58b2-f792-46e1-b7a3-7613bb9f3184", "metadata": {}, "source": [ "### Your model should now be in production as a RESTful API!\n", "\n", "The [Predictor](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html) doesn't load your model into memory here, but instead wraps HTTPS API calls to the deployed endpoint.\n", "\n", "Here we're using the default `application/json` serialization support provided by the Hugging Face framework, but different frameworks have different default formats and it's possible to set up pretty much any request or response format you like with custom [serializers](https://sagemaker.readthedocs.io/en/stable/api/inference/serializers.html) and [deserializers](https://sagemaker.readthedocs.io/en/stable/api/inference/deserializers.html) (on the client/`predictor` side) and custom [`input_fn`s and `output_fn`s](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#process-model-input) (on the endpoint container side): No need to write your own serving stacks from scratch.\n", "\n", "Since request de/serialization and processing is already handled for us by the [HuggingFacePredictor](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html#hugging-face-predictor) and the pre-built inference container, calling our deployed model from the notebook is just as easy as calling the local in-memory model was:" ] }, { "cell_type": "code", "execution_count": null, "id": "d658acce-6ba1-401c-bf2a-027c227c4db7", "metadata": { "tags": [] }, "outputs": [], "source": [ "def classify(text: str) -> dict:\n", " \"\"\"Classify a headline and return the result\"\"\"\n", " return predictor.predict({\"inputs\": [text]})[0]\n", "\n", "\n", "# Either try out the interactive widget:\n", "interaction = widgets.interact_manual(\n", " classify,\n", " text=widgets.Text(\n", " value=\"The markets were bullish after news of the merger\",\n", " placeholder=\"Type a news headline...\",\n", " description=\"Headline:\",\n", " layout=widgets.Layout(width=\"99%\"),\n", " ),\n", ")\n", "interaction.widget.children[1].description = \"Classify!\"" ] }, { "cell_type": "markdown", "id": 
"2fe1d76f-4a2e-41ab-b498-e3b8c1611e9d", "metadata": {}, "source": [ "Alternatively (if e.g. you're struggling with the UI widget library), you can call the endpoint direct from code:" ] }, { "cell_type": "code", "execution_count": null, "id": "d11b0acb-f5b9-48b5-b7b3-404ad8158175", "metadata": { "tags": [] }, "outputs": [], "source": [ "classify(\"Retailers are expanding after the recent economic growth\")" ] }, { "cell_type": "markdown", "id": "e8ebdcac-2f7c-43c6-8af0-a595960c9d07", "metadata": {}, "source": [ "## Clean-up\n", "\n", "Note that while SageMaker jobs (like training, processing, and batch inference) use on-demand compute only for the duration they run, deployed real-time inference endpoints continue to consume resources until you turn them off.\n", "\n", "When you're done experimenting, delete endpoints that are no longer needed to avoid unnecessary costs:" ] }, { "cell_type": "code", "execution_count": null, "id": "b26b2cec-d728-4b90-ba01-98b5bd5c0585", "metadata": {}, "outputs": [], "source": [ "# predictor.delete_endpoint(delete_endpoint_config=True)" ] }, { "cell_type": "markdown", "id": "f8ba7286-50b7-45e7-9a4c-db5d1ba96bcc", "metadata": {}, "source": [ "## Review\n", "\n", "In this notebook, we showed how you could train and deploy a text classification model using Hugging Face transformers on Amazon SageMaker.\n", "\n", "Some benefits of this approach as compared to the companion [Headline Classifier Local notebook](Headline%20Classifier%20Local.ipynb) are:\n", "\n", "- We can automatically provision specialist computing resources (e.g. high-performance, or GPU-accelerated instances) for **only** the duration of the training job: Getting good performance in training, without leaving resources sitting around under-utilized\n", "- The history of training jobs (including parameters, metrics, outputs, etc.) is automatically tracked - unlike local notebook experiments where the user needs to keep notes on what worked and what didn't\n", "- Our trained model can be deployed to a secure, production-ready web endpoint with just one SDK call: No container or web application packaging required, unless we want to deeply customize the behavior\n", "\n", "By comparing the local notebook with this SageMaker version and the accompanying [scripts/train.py](scripts/train.py) script file, you can get an idea of how to migrate your own or open-source in-notebook ML workflows into SageMaker \"script mode\" training jobs.\n", "\n", "In the next \"migration challenge\" exercise of this workshop, you'll try to repeat this process for a different \"local\" notebook on your own.\n", "\n", "You might also be interested in [aws-samples/amazon-sagemaker-from-idea-to-production](https://github.com/aws-samples/amazon-sagemaker-from-idea-to-production), which shows further steps like connecting your SageMaker jobs together into pipelines, and automating workflows with CI/CD." 
] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": 
"ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": 
"ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (PyTorch 1.13 Python 3.9 CPU Optimized)", "language": "python", "name": 
"python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/pytorch-1.13-cpu-py39" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 5 }