{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Distributed ResNet Training with MXNet and Gluon\n", "\n", "[ResNet_V2](https://arxiv.org/abs/1512.03385) is an architecture for deep convolution networks. In this example, we train a 34 layer network to perform image classification using the CIFAR-10 dataset. CIFAR-10 consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. \n", "\n", "### Setup\n", "\n", "This example requires the `scikit-image` library. Use jupyter's [conda tab](/tree#conda) to install it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import timeit\n", "start_time = timeit.default_timer()\n", "print(start_time)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import boto3\n", "import sagemaker\n", "from sagemaker.mxnet import MXNet\n", "from mxnet import gluon\n", "from sagemaker import get_execution_role\n", "\n", "sagemaker_session = sagemaker.Session()\n", "\n", "role = get_execution_role()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download training and test data\n", "\n", "We use the helper scripts to download CIFAR10 training data and sample images." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from cifar10_utils import download_training_data\n", "download_training_data()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Uploading the data\n", "\n", "We use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value `inputs` identifies the location -- we will use this later when we start the training job." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inputs = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-gluon-cifar10')\n", "print('input spec (in this case, just an S3 path): {}'.format(inputs))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implement the training function\n", "\n", "We need to provide a training script that can run on the SageMaker platform. The training scripts are essentially the same as one you would write for local training, except that you need to provide a `train` function. When SageMaker calls your function, it will pass in arguments that describe the training environment. Check the script below to see how this works.\n", "\n", "The network itself is a pre-built version contained in the [Gluon Model Zoo](https://mxnet.incubator.apache.org/versions/master/api/python/gluon/model_zoo.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cat 'cifar10.py'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run the training script on SageMaker\n", "\n", "The ```MXNet``` class allows us to run our training function as a distributed training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on two `ml.p2.xlarge` instances.\n", "\n", "**Note:** you may need to request a limit increase in order to use two ``ml.p2.xlarge`` instances. If you \n", "want to try the example without requesting an increase, just change the ``train_instance_count`` value to ``1``." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "m = MXNet(\"cifar10.py\", \n", " role=role, \n", " train_instance_count=1, \n", " train_instance_type=\"ml.p2.xlarge\",\n", " framework_version=\"1.2.1\",\n", " hyperparameters={'batch_size': 128, \n", " 'epochs': 50, \n", " 'learning_rate': 0.1, \n", " 'momentum': 0.9})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After we've constructed our `MXNet` object, we can fit it using the data we uploaded to S3. SageMaker makes sure our data is available in the local filesystem, so our training script can simply read the data from disk.\n", "\n", "The below training took 38 minutes with 1 ml.p2.xlarge." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "m.fit(inputs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prediction\n", "\n", "After training, we use the MXNet estimator object to create and deploy a hosted prediction endpoint. We can use a CPU-based instance for inference (in this case an `ml.m4.xlarge`), even though we trained on GPU instances.\n", "\n", "The predictor object returned by `deploy` lets us call the new endpoint and perform inference on our sample images. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor = m.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### CIFAR10 sample images\n", "\n", "We'll use these CIFAR10 sample images to test the service:\n", "\n", "classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# load the CIFAR10 samples, and convert them into format we can use with the prediction endpoint\n", "from cifar10_utils import read_images\n", "\n", "# classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')\n", "filenames = ['images/airplane1.png',\n", " 'images/automobile1.png',\n", " 'images/bird1.png',\n", " 'images/cat1.png',\n", " 'images/deer1.png',\n", " 'images/dog1.png',\n", " 'images/frog1.png',\n", " 'images/horse1.png',\n", " 'images/ship1.png',\n", " 'images/truck1.png']\n", "\n", "image_data = read_images(filenames)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The predictor runs inference on our input data and returns the predicted class label (as a float value, so we convert to int for display)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "for i, img in enumerate(image_data):\n", " response = predictor.predict(img)\n", " print('image {}: class: {}'.format(i, int(response)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cleanup\n", "\n", "After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sagemaker.Session().delete_endpoint(predictor.endpoint)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "end_time = timeit.default_timer()\n", "elapsed = end_time - start_time\n", "print(end_time)\n", "print(elapsed/60)" ] } ], "metadata": { "kernelspec": { "display_name": "conda_mxnet_p27", "language": "python", "name": "conda_mxnet_p27" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.15" }, "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.", "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 2 }