{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# End-to-End NLP: News Headline Classifier (SageMaker Version)\n", "\n", "_**Train a Keras-based model to classify news headlines between four domains**_\n", "\n", "This notebook works well with the `Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized)` kernel on SageMaker Studio, or `conda_tensorflow2_p37` on classic SageMaker Notebook Instances.\n", "\n", "\n", "---\n", "\n", "Following on from the previous 'local version' notebook, we show here how to trigger the model training and deployment on separate infrastructure - to make better use of resources.\n", "\n", "Note that you can safely ignore the WARNING about the pip version." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# First install some libraries which might not be available across all kernels (e.g. in Studio):\n", "!pip install \"ipywidgets<8\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download News Aggregator Dataset\n", "\n", "We will download **FastAI AG News** dataset from the [Registry of Open Data on AWS](https://registry.opendata.aws/fast-ai-nlp/) public repository. This dataset contains a table of news headlines and their corresponding classes.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "local_dir = \"data\"\n", "# Download the AG News data from the Registry of Open Data on AWS.\n", "!mkdir -p {local_dir}\n", "!aws s3 cp s3://fast-ai-nlp/ag_news_csv.tgz {local_dir} --no-sign-request\n", "\n", "# Un-tar the AG News data.\n", "!tar zxf {local_dir}/ag_news_csv.tgz -C {local_dir}/ --strip-components=1 --no-same-owner\n", "print(\"Done!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Let's visualize the dataset\n", "\n", "We will load the ag_news_csv/train.csv file to a Pandas dataframe for our data processing work." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "import os\n", "import re\n", "\n", "import numpy as np\n", "import pandas as pd\n", "import util.preprocessing" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "column_names = [\"CATEGORY\", \"TITLE\", \"CONTENT\"]\n", "# we use the train.csv only\n", "df = pd.read_csv(f\"{local_dir}/train.csv\", names=column_names, header=None, delimiter=\",\")\n", "# shuffle the DataFrame rows\n", "df = df.sample(frac=1, random_state=1337)\n", "# make the category classes more readable\n", "mapping = {1: \"World\", 2: \"Sports\", 3: \"Business\", 4: \"Sci/Tech\"}\n", "df = df.replace({\"CATEGORY\": mapping})\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this exercise we'll **only use**:\n", "\n", "- The **title** (Headline) of the news story, as our input\n", "- The **category**, as our target variable\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[\"CATEGORY\"].value_counts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dataset has **four article categories** with equal weighting:\n", "\n", "- Business\n", "- Sci/Tech\n", "- Sports\n", "- World\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Natural Language Pre-Processing\n", "\n", "We'll do some basic processing of the text data to convert it into numerical form that the algorithm will be able to consume to create a model.\n", "\n", "We will do typical pre processing for NLP workloads such as: dummy encoding the labels, tokenizing the documents and set fixed sequence lengths for input feature dimension, padding documents to have fixed length input vectors.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dummy Encode the Labels\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "encoded_y, labels = util.preprocessing.dummy_encode_labels(df, \"CATEGORY\")\n", "print(labels)\n", "print(encoded_y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[\"CATEGORY\"].iloc[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "encoded_y[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tokenize and Set Fixed Sequence Lengths\n", "\n", "We want to describe our inputs at the more meaningful word level (rather than individual characters), and ensure a fixed length of the input feature dimension.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_cell_guid": "7bcf422f-0e75-4d49-b3b1-12553fcaf4ff", "_uuid": "46b7fc9aef5a519f96a295e980ba15deee781e97" }, "outputs": [], "source": [ "processed_docs, tokenizer = util.preprocessing.tokenize_and_pad_docs(df, \"TITLE\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[\"TITLE\"].iloc[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "processed_docs[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Import Word Embeddings\n", "\n", "To represent our words in numeric form, we'll use pre-trained vector representations for each word in the vocabulary: In this case we'll be using [pre-trained word embeddings from FastText](https://fasttext.cc/docs/en/crawl-vectors.html), which are also 
available for a broad range of languages other than English.\n", "\n", "You could also explore training custom, domain-specific word embeddings using SageMaker's built-in [BlazingText algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/blazingtext.html). See the official [blazingtext_word2vec_text8 sample](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms/blazingtext_word2vec_text8) for an example notebook showing how.\n", "\n", "> ⚠️ You may sometimes see a file format error if you run this cell at the same time as the `get_word_embeddings()` cell in the local notebook. If this happens, try deleting the `data/embeddings` folder before re-running this cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "embedding_matrix = util.preprocessing.get_word_embeddings(tokenizer, f\"{local_dir}/embeddings\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "_cell_guid": "593bfd0a-b703-4e87-96dd-a7eb98e6940e", "_uuid": "f71c5f0b731d3418d3cb83be758233b5030da29d" }, "outputs": [], "source": [ "np.save(\n", "    file=f\"{local_dir}/embeddings/docs-embedding-matrix\",\n", "    arr=embedding_matrix,\n", "    allow_pickle=False,\n", ")\n", "vocab_size = embedding_matrix.shape[0]\n", "print(embedding_matrix.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Split Train and Test Sets\n", "\n", "Finally, we need to divide our data into model training and evaluation sets:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(\n", "    processed_docs,\n", "    encoded_y,\n", "    test_size=0.2,\n", "    random_state=42,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "os.makedirs(f\"{local_dir}/train\", exist_ok=True)\n", "np.save(f\"{local_dir}/train/train_X.npy\", X_train)\n", "np.save(f\"{local_dir}/train/train_Y.npy\", y_train)\n", "os.makedirs(f\"{local_dir}/test\", exist_ok=True)\n", "np.save(f\"{local_dir}/test/test_X.npy\", X_test)\n", "np.save(f\"{local_dir}/test/test_Y.npy\", y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Set Up Execution Role, Session and S3 Bucket\n", "\n", "The primary data source for a SageMaker training job is (nearly) always Amazon S3, so we'll need to upload our pre-processed data there to make it available.\n", "\n", "Let's start by specifying:\n", "- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting. If you don't specify a bucket, the SageMaker SDK will create a default bucket (following a pre-defined naming convention) in the same region.\n", "- The IAM role ARN used to give SageMaker access to your data. It can be fetched using the **get_execution_role** method from the SageMaker Python SDK."
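] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick, optional illustration of that configuration: the execution role is an IAM role ARN attached to this notebook environment, and (if you don't supply your own bucket) the SageMaker SDK's default bucket typically follows a `sagemaker-<region>-<account-id>` naming pattern. The sketch below just uses boto3 to confirm which account and region we're working in - it isn't required for the rest of the notebook.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "\n", "# Optional: confirm the AWS account and region this notebook is using. These two\n", "# values determine where the default SageMaker bucket (created below if it doesn't\n", "# already exist) will live, and what it will typically be named.\n", "account_id = boto3.client(\"sts\").get_caller_identity()[\"Account\"]\n", "region = boto3.session.Session().region_name\n", "print(f\"Account: {account_id}, Region: {region}\")\n", "print(f\"Expected default bucket name: sagemaker-{region}-{account_id}\")"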
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "role = get_execution_role()\n", "print(role)\n", "sess = sagemaker.Session()\n", "bucket_name = sess.default_bucket()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Upload Data to Amazon S3\n", "One way to upload data to S3 is by using the high-level [aws s3 sync](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-objects-sync) command that synchronizes the contents of the target bucket and source directory. This command also supports options such as:\n", "\n", "- ```--delete```, to remove any objects from the target that aren't present in the source\n", "- ```--exclude``` and ```--include```, to filter files/objects instead of copying everything from the source folder.\n", "\n", "Although it's possible to work with S3 via Python as well (for example with [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html) or the [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/api/utility/s3.html)), the S3 sync CLI is also natively *multi-threaded*: Which helps deliver fast transfers from notebooks without introducing more complex code." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws s3 sync --quiet --delete {local_dir} s3://{bucket_name}/news --exclude \"*\" --include \"*.npy\"\n", "print(\"Done!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Input (\"Channels\") Configuration\n", "The local version of the notebook ([Headline Classifier Local.ipynb](Headline%20Classifier%20Local.ipynb)) has three data inputs -- train, test and embeddings. In Sagemaker terminology, each input data is a \"channel\".\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_channel = f\"s3://{bucket_name}/news/train\"\n", "test_channel = f\"s3://{bucket_name}/news/test\"\n", "embeddings_channel = f\"s3://{bucket_name}/news/embeddings\"\n", "\n", "inputs = {\"train\": train_channel, \"test\": test_channel, \"embeddings\": embeddings_channel}\n", "print(inputs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train with Differentiated Infrastructure on Sagemaker\n", "\n", "This time, we've packaged the model build and train code from our previous notebook ([Headline Classifier Local.ipynb](Headline%20Classifier%20Local.ipynb)) into the [**main.py**](src/main.py) script in the **src** directory.\n", "\n", "We'll use the high-level [TensorFlow Framework Container through the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html) to train and deploy the model from this script file." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### How Amazon SageMaker runs your Tensorflow script with pre-built containers\n", "\n", "AWS provides a pre-packaged set of Docker images to help you accelerate building your projects on major ML frameworks: The [SageMaker Framework Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers-prebuilt.html).\n", "\n", "These containers take care of basic setup like GPU drivers, serving stack implementation, core libraries, and so on - leaving us free to simply inject some Python scripts for the training process and any inference behaviour overrides. 
We can even provide a *requirements.txt* file to specify additional dependencies to be dynamically installed at start-up - without having to build these into the container image.\n", "\n", "**As a result, our first task is to understand the interfaces** between our script(s) and the runtime: How will the script read input data? Parameters? Where should it store results?\n", "\n", "#### Running your container during training\n", "\n", "When Amazon SageMaker runs training, your training script (the `entry_point` script) is run just like a regular Python program. A number of files are laid out for your use under a `/opt/ml` directory, which you can access from within your script. You will see an example of this in our [**main.py**](src/main.py):\n", "\n", "    /opt/ml\n", "    |-- code\n", "    |   `-- <code files>\n", "    |-- input\n", "    |   |-- config\n", "    |   |   |-- hyperparameters.json\n", "    |   |   `-- resourceConfig.json\n", "    |   `-- data\n", "    |       `-- <channel_name>\n", "    |           `-- <input data>\n", "    |-- model\n", "    |   `-- <model files>\n", "    `-- output\n", "        `-- failure\n", "\n", "##### The input\n", "\n", "* `/opt/ml/input/config` contains information to control how your program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names to values. These values will always be strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training. Since we're training on a single instance in this example, we can ignore it here.\n", "* `/opt/ml/input/data/<channel_name>/` (for File mode) contains the input data for that channel. The channels are created based on the call to `CreateTrainingJob`, but it's generally important that the channels match what the algorithm expects. The files for each channel will be copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure.\n", "* `/opt/ml/input/data/<channel_name>_<epoch_number>` (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch.\n", "\n", "##### The output\n", "\n", "* `/opt/ml/model/` is the directory where you write the model that your algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker will package any files in this directory into a compressed tar archive file. This file will be available at the S3 location returned in the `DescribeTrainingJob` result.\n", "* `/opt/ml/output` is a directory where the algorithm can write a file `failure` that describes why the job failed. The contents of this file will be returned in the `FailureReason` field of the `DescribeTrainingJob` result. 
For jobs that succeed, there is no reason to write this file as it will be ignored.\n", "\n", "#### Further information\n", "\n", "For more information, you can refer to:\n", "\n", "- The [SageMaker Python SDK guide for TensorFlow](https://sagemaker.readthedocs.io/en/stable/using_tf.html) and [API doc](https://sagemaker.readthedocs.io/en/stable/sagemaker.tensorflow.html) for TensorFlow framework classes.\n", "- The [AWS Deep Learning Containers repository](https://github.com/aws/deep-learning-containers) on GitHub, which defines the underlying container images.\n", "- The open source SageMaker [TensorFlow Training Toolkit](https://github.com/aws/sagemaker-tensorflow-training-toolkit) and [TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) for more details on the framework code for training and serving." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "from sagemaker.tensorflow import TensorFlow as TensorFlowEstimator" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Although the script will run in a separate container, we can pass whatever parameters it needs through SageMaker:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hyperparameters = {\"epochs\": 5, \"max_seq_len\": 40, \"num_classes\": 4}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've imported the `TensorFlow` estimator class, set our hyper-parameters, and configured the data channels to link to the algorithm. The only remaining step is to configure the estimator and train the model. The next cell does this, which involves a few steps:\n", "\n", "- Firstly, the instance(s) we requested when creating the `TensorFlow` estimator are provisioned and set up with the appropriate libraries.\n", "- Then, the data from our channels is downloaded into the instance.\n", "- Once this is done, the training job begins running your code.\n", "\n", "The provisioning and data downloading will take some time, depending on the size of the data and the container image.\n", "\n", "Once the job has finished, a \"Job complete\" message will be printed. The trained model can be found in the S3 bucket that was configured as the `output_path` of the estimator.\n", "\n", "Here we run the training job using the compute-optimized (CPU) `ml.c5.xlarge` instance type. For larger deep learning models or datasets, you may want to use GPU-accelerated types like `ml.p*` or `ml.g*` to speed up training further."
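] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before launching the job, here's a minimal sketch of how a training script typically picks up the interface described above: SageMaker exposes each channel's local path through an `SM_CHANNEL_<NAME>` environment variable, the model output folder through `SM_MODEL_DIR`, and passes the hyperparameters as command-line arguments. This is an illustration only - see [**main.py**](src/main.py) for the actual argument handling used by this workshop.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import argparse\n", "import os\n", "\n", "# Illustrative sketch only - main.py in ./src is what actually runs in the container.\n", "# Inside the training container, SageMaker sets SM_CHANNEL_TRAIN/TEST/EMBEDDINGS to\n", "# folders under /opt/ml/input/data/, sets SM_MODEL_DIR to /opt/ml/model, and passes\n", "# the hyperparameters dict as command-line arguments (e.g. --epochs 5).\n", "parser = argparse.ArgumentParser()\n", "parser.add_argument(\"--epochs\", type=int, default=5)\n", "parser.add_argument(\"--train\", type=str, default=os.environ.get(\"SM_CHANNEL_TRAIN\", \"data/train\"))\n", "parser.add_argument(\"--test\", type=str, default=os.environ.get(\"SM_CHANNEL_TEST\", \"data/test\"))\n", "parser.add_argument(\"--embeddings\", type=str, default=os.environ.get(\"SM_CHANNEL_EMBEDDINGS\", \"data/embeddings\"))\n", "parser.add_argument(\"--model-dir\", type=str, default=os.environ.get(\"SM_MODEL_DIR\", \"data/model\"))\n", "args, _ = parser.parse_known_args([])  # Pass [] so the sketch runs cleanly in the notebook\n", "print(args)"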
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%time\n", "\n", "metric_definitions = [\n", " {\"Name\": \"loss\", \"Regex\": \"loss: ([0-9\\\\.]+)\"},\n", " {\"Name\": \"accuracy\", \"Regex\": \"acc: ([0-9\\\\.]+)\"},\n", " {\"Name\": \"validation:loss\", \"Regex\": \"Validation.*loss=([0-9\\\\.]+)\"},\n", " {\"Name\": \"validation:accuracy\", \"Regex\": \"Validation.*acc=([0-9\\\\.]+)\"},\n", "]\n", "\n", "estimator = TensorFlowEstimator(\n", " framework_version=\"2.4\",\n", " py_version=\"py37\",\n", " instance_count=1,\n", " instance_type=\"ml.c5.xlarge\",\n", " role=role,\n", " entry_point=\"main.py\",\n", " source_dir=\"./src\",\n", " distribution={\"parameter_server\": {\"enabled\": True}},\n", " hyperparameters=hyperparameters,\n", " metric_definitions=metric_definitions,\n", " base_job_name=\"news-keras\",\n", " max_run=20 * 60, # Maximum allowed active runtime\n", " ## Here Spot is left OFF by default to avoid delays, but easy to turn on by un-commenting:\n", " # use_spot_instances=True, # Use spot instances to reduce cost\n", " # max_wait=30*60, # Maximum clock time (including spot wait time)\n", ")\n", "\n", "estimator.fit(inputs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the training job is running take a minute to look at the `main.py` script. You can see how we have adapted the our original local code from [Headline Classifier Local.ipynb](Headline%20Classifier%20Local.ipynb) to run on Sagemaker." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Use the Model: Hosting / Inference\n", "Once the training is done, we can deploy the trained model as an Amazon SageMaker hosted [real-time inference endpoint](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html). This will allow us to query predictions (or inferences) from the model as-needed.\n", "\n", "Note that:\n", "\n", "- For batch inference situations (such as calculating metrics on a test dataset), it may be advisable to use [SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html) instead. See [Using SageMaker Batch Transform](https://sagemaker.readthedocs.io/en/stable/overview.html#sagemaker-batch-transform) in the SageMaker Python SDK docs for more details.\n", "- We _don't need to use the same **type**_ of instance that we trained on. 
Since this test endpoint will handle very little traffic, we can go ahead and use a cheaper type:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor = estimator.deploy(\n", "    initial_instance_count=1,\n", "    instance_type=\"ml.t2.medium\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Your model should now be in production as a RESTful API!\n", "\n", "Let's evaluate our model with some example headlines...\n", "\n", "If you struggle with the widget, you can always simply call the `classify()` function from Python.\n", "\n", "You can be creative with your headlines!\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import ipywidgets as widgets\n", "from IPython import display\n", "from tensorflow.keras.preprocessing.sequence import pad_sequences\n", "\n", "\n", "def classify(text):\n", "    \"\"\"Classify a headline and print the results\"\"\"\n", "    encoded_example = tokenizer.texts_to_sequences([text])\n", "    # Pad documents to a max length of 40 words\n", "    max_length = 40\n", "    padded_example = pad_sequences(encoded_example, maxlen=max_length, padding=\"post\")\n", "    result = predictor.predict(padded_example.tolist())\n", "    print(result)\n", "    ix = np.argmax(result[\"predictions\"])\n", "    print(f\"Predicted class: '{labels[ix]}' with confidence {result['predictions'][0][ix]:.2%}\")\n", "\n", "\n", "# Either try out the interactive widget:\n", "interaction = widgets.interact_manual(\n", "    classify,\n", "    text=widgets.Text(\n", "        value=\"The markets were bullish after news of the merger\",\n", "        placeholder=\"Type a news headline...\",\n", "        description=\"Headline:\",\n", "        layout=widgets.Layout(width=\"99%\"),\n", "    ),\n", ")\n", "interaction.widget.children[1].description = \"Classify!\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Or just use the function to classify your own headline:\n", "classify(\"Retailers are expanding after the recent economic growth\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Clean up\n", "\n", "Unlike training jobs (which release their resources as soon as training finishes), real-time endpoint deployments keep instances provisioned until we specifically shut the endpoint down...\n", "\n", "So let's be frugal and delete these resources when we don't need them anymore. 
You can un-comment and run the cell below to delete the endpoint:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# predictor.delete_endpoint(delete_endpoint_config=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## (Optional) Automatic Hyperparameter Optimization - HPO\n", "\n", "Rather than manually tweaking parameters to tune the model performance, we can get SageMaker to help us out.\n", "\n", "We'll simply tell SageMaker:\n", "\n", "- The type and allowable range of each parameter,\n", "- The metric we want to optimize for, and\n", "- Our strategy and resource constraints\n", "\n", "...and the service will set up jobs for us to find the best combination.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (Hyper-)Parameter Definitions\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tuner import (\n", "    CategoricalParameter,\n", "    ContinuousParameter,\n", "    HyperparameterTuner,\n", "    IntegerParameter,\n", ")\n", "\n", "hyperparameter_ranges = {\n", "    \"epochs\": IntegerParameter(2, 7),\n", "    \"learning_rate\": ContinuousParameter(0.01, 0.2),\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Objective Metric\n", "\n", "'Metrics' in SageMaker are scraped from the console output of jobs, by way of regular expressions.\n", "\n", "We can define multiple metrics to monitor, but HPO requires us to specify exactly one of them as the **objective** metric to optimize:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# (See metric_definitions above)\n", "objective_metric_name = \"validation:accuracy\"\n", "objective_type = \"Maximize\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Start the Tuning Job\n", "\n", "We already defined our Estimator above, so we'll just re-use the configuration with minor adjustments.\n", "\n", "Note that the Estimator's `hyperparameters` will be used as base values, and overridden by the HyperparameterTuner where appropriate.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Keep per-job resources modest, so that parallel jobs don't hit any limits:\n", "estimator.instance_type = \"ml.c5.xlarge\"\n", "estimator.instance_count = 1" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tuner = HyperparameterTuner(\n", "    estimator,\n", "    objective_metric_name,\n", "    hyperparameter_ranges,\n", "    metric_definitions,\n", "    base_tuning_job_name=\"news-hpo-keras\",\n", "    max_jobs=6,\n", "    max_parallel_jobs=2,\n", "    objective_type=objective_type,\n", ")\n", "\n", "tuner.fit(inputs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check On Progress\n", "\n", "HPO jobs can take a long time to complete, and can run multiple training jobs in parallel - each potentially on multiple instances. That's why the `fit()` call above doesn't show a (potentially confusing) consolidated log stream, and won't wait for completion if we add the `wait=False` parameter.\n", "\n", "Go to the Training > Hyperparameter Tuning Jobs page of the [**SageMaker Console**](https://console.aws.amazon.com/sagemaker/home#/hyper-tuning-jobs) and select the job from the list.\n", "\n", "You can see all the training jobs triggered for the HPO run, as well as overall summary metrics.\n", "\n", "This information can be accessed via the API/SDKs too of course. 
For example, we can wait for the HPO job to finish as shown below:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "\n", "import boto3\n", "\n", "# Wait until HPO is finished\n", "hpo_state = None\n", "smclient = boto3.Session().client(\"sagemaker\")\n", "\n", "while hpo_state is None or hpo_state == \"InProgress\":\n", "    if hpo_state is not None:\n", "        print(\"-\", end=\"\")\n", "    time.sleep(60)  # Poll once every 1 min\n", "    hpo_state = smclient.describe_hyper_parameter_tuning_job(\n", "        HyperParameterTuningJobName=tuner.latest_tuning_job.job_name\n", "    )[\"HyperParameterTuningJobStatus\"]\n", "\n", "print(\"\\nHPO state:\", hpo_state)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using the Model\n", "\n", "Just like with our `estimator`, we can call `tuner.deploy()` to create an endpoint and `predictor` from the best-performing model found in the HPO run.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Review\n", "\n", "In this notebook, we refactored our local code to train and deploy the same Keras model using SageMaker.\n", "\n", "Some benefits of this approach are:\n", "\n", "- We can automatically provision specialist computing resources (e.g. high-performance or GPU-accelerated instances) for **only** the duration of the training job: Getting good performance in training, without leaving resources sitting around under-utilized\n", "- The history of training jobs (including parameters, metrics, outputs, etc.) is automatically tracked - unlike local notebook experiments, where the user needs to keep notes on what worked and what didn't\n", "- Our trained model can be deployed to a secure, production-ready web endpoint with just one SDK call: No container or web application packaging required, unless we want to customize the behaviour\n" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:ap-southeast-1:492261229750:image/tensorflow-2.3-cpu-py37-ubuntu18.04-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }