{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook demonstrates how to leverage transfer learning to use your own image dataset to build and train an image classification model using MXNet and Amazon SageMaker." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use, as an example, the creation of a trash classification model which, given some image, classifies it into one of three classes: compost, landfill, recycle. This is based on the [Show Before You Throw](https://www.youtube.com/watch?v=Ut1VGG6TOOw) project from an AWS DeepLens hackathon and the [Smart Recycle Arm](https://www.youtube.com/watch?v=QF0QjRjBwFs) project presented at the AWS Public Sector Summit 2019" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "--- \n", "\n", "1. [Prerequisites](#Prerequisites)\n", "1. [Download Data](#Download-data)\n", "1. [Fine-tuning the Image Classification Model](#Fine-tuning-the-Image-classification-model)\n", "1. [Start the Training](#Start-the-training)\n", "1. [Test your Model](#Inference)\n", "1. [Deploy your Model to AWS DeepLens](#Deploy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prequisites\n", "\n", "- Amazon Sagemaker notebook should have internet access to download images needed for testing this notebook. This is turned ON by default. To explore aoptions review this link : [Sagemaker routing options](https://aws.amazon.com/blogs/machine-learning/understanding-amazon-sagemaker-notebook-instance-networking-configurations-and-advanced-routing-options/)\n", "- The IAM role assigned to this notebook should have permissions to create a bucket (if it does not exist)\n", " - [IAM role for Amazon Sagemaker](https://docs.aws.amazon.com/glue/latest/dg/create-an-iam-role-sagemaker-notebook.html)\n", " - [S3 create bucket permissions](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html#using-with-s3-actions-related-to-buckets)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Permissions and environment variables\n", "\n", "Here we set up the linkage and authentication to AWS services. There are 2 parts to this:\n", "\n", "* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook\n", "* The Amazon sagemaker image classification docker image which need not be changed" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import urllib.request\n", "import boto3, botocore\n", "\n", "\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "import mxnet as mx\n", "mxnet_path = mx.__file__[ : mx.__file__.rfind('/')]\n", "print(mxnet_path)\n", "\n", "role = get_execution_role()\n", "print(role)\n", "\n", "sess = sagemaker.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Amazon S3 bucket info\n", "Enter your Amazon S3 Bucket name where your data will be stored, make sure that your SageMaker notebook has access to this S3 Bucket by granting `S3FullAccess` in the SageMaker role attached to this instance. See [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-config-permissions.html) for more info.\n", "\n", "DeepLens-compatible buckets must start with `deeplens`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "BUCKET = 'deeplens-'\n", "PREFIX = 'deeplens-trash-test'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.amazon.amazon_estimator import get_image_uri\n", "training_image = get_image_uri(sess.boto_region_name, 'image-classification', repo_version=\"latest\")\n", "print (training_image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are going to check if we have the right bucket and if we have the right permissions.\n", "\n", "Please make sure that the result from this cell is \"Bucket access is Ok\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_data = 'TestData'\n", "s3 = boto3.resource('s3')\n", "object = s3.Object(BUCKET, PREFIX+\"/test.txt\")\n", "try:\n", " object.put(Body=test_data)\n", "except botocore.exceptions.ClientError as e:\n", " if e.response['Error']['Code'] == \"AccessDenied\":\n", " #cannot write on the bucket\n", " print(\"Bucket \"+BUCKET+\"is not writeable, make sure you have the right permissions\")\n", " else:\n", " if e.response['Error']['Code'] == \"NoSuchBucket\":\n", " #Bucket does not exist\n", " print(\"Bucket\"+BUCKET+\" does not exist\")\n", " else:\n", " raise\n", "else:\n", " print(\"Bucket access is Ok\")\n", " object.delete(BUCKET, PREFIX+\"/test.txt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Prepare data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is assumed that your custom dataset's images are present in an S3 bucket and that different classes are separated by named folders, as shown in the following directory structure:\n", "```\n", "|-deeplens-bucket\n", " |-deeplens-trash\n", "\n", " |-images\n", " \n", " |-Compost \n", " \n", " |-Landfill\n", " \n", " |-Recycle\n", " ```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we are providing the data for you in this example, first we'll download the example data, unzip it and upload it to your bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!wget https://deeplens-public.s3.amazonaws.com/samples/deeplens-trash/trash-images.zip" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!rm -rf data/ && mkdir -p data\n", "!mkdir -p data/images\n", "!unzip -qq trash-images.zip -d data/images\n", "!rm trash-images.zip" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "\n", "def show_images(item_name, images_to_show=-1):\n", " _im_list = !ls $IMAGES_DIR/$item_name\n", "\n", " NUM_COLS = 3\n", " if images_to_show == -1:\n", " IM_COUNT = len(_im_list)\n", " else:\n", " IM_COUNT = images_to_show\n", " \n", " print('Displaying images category ' + item_name + ' count: ' + str(IM_COUNT) + ' images.')\n", " \n", " NUM_ROWS = int(IM_COUNT / NUM_COLS)\n", " if ((IM_COUNT % NUM_COLS) > 0):\n", " NUM_ROWS += 1\n", "\n", " fig, axarr = plt.subplots(NUM_ROWS, NUM_COLS)\n", " fig.set_size_inches(10.0, 10.0, forward=True)\n", "\n", " curr_row = 0\n", " for curr_img in range(IM_COUNT):\n", " # fetch the url as a file type object, then read the image\n", " f = IMAGES_DIR + item_name + '/' + _im_list[curr_img]\n", " a = plt.imread(f)\n", "\n", " # find the column by taking the current index modulo 3\n", " col = curr_img % NUM_ROWS\n", " # plot on relevant subplot\n", " if NUM_ROWS == 1:\n", " axarr[curr_row].imshow(a)\n", " else:\n", " axarr[col, curr_row].imshow(a)\n", " if col == (NUM_ROWS - 1):\n", " # we have finished the current row, so increment row counter\n", " curr_row += 1\n", "\n", " fig.tight_layout() \n", " plt.show()\n", " \n", " # Clean up\n", " plt.clf()\n", " plt.cla()\n", " plt.close()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "IMAGES_DIR = 'data/images/'\n", "show_images(\"Compost\", images_to_show=3)\n", "show_images(\"Landfill\", images_to_show=3)\n", "show_images(\"Recycling\", images_to_show=3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DEST_BUCKET = 's3://'+BUCKET+'/'+PREFIX+'/images/'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws s3 cp --recursive data/images $DEST_BUCKET --quiet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ensure that the newly created directories containing the downloaded data are structured as shown at the beginning of this tutorial." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws s3 ls $DEST_BUCKET" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Prepare \"list\" files with train-val split\n", "\n", "The image classification algorithm can take two types of input formats. The first is a [RecordIO format](https://mxnet.apache.org/api/faq/recordio) (content type: application/x-recordio) and the other is a Image list format (.lst file). These file formats allows for efficient loading of images when training the model. In this example we will be using the Image list format (.lst file). A .lst file is a tab-separated file with three columns that contains a list of image files. The first column specifies the image index, the second column specifies the class label index for the image, and the third column specifies the relative path of the image file. The RecordIO file contains the actual pixel data for the images.\n", "\n", "To be able to create the .rec files, we first need to split the data into training and validation sets (after shuffling) and create two list files for each. Here our split into train, validation and test (specified by the `0.7` parameter below for test). We keep 0.02% to test the model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The image and lst files will be converted to RecordIO file internally by the image classification algorithm. But if you want do the conversion, the following cell shows how to do it using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool. Note that this is just an example of creating RecordIO files. We are **_not_** using them for training in this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!python $mxnet_path/tools/im2rec.py --list --recursive --test-ratio=0.02 --train-ratio 0.7 trash data/images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Save lst files to S3\n", "Training models is easy with Amazon SageMaker. When you’re ready to train in SageMaker, simply specify the location of your data in Amazon S3, and indicate the type and quantity of SageMaker ML instances you need. SageMaker sets up a distributed compute cluster, performs the training, outputs the result to Amazon S3, and tears down the cluster when complete. \n", "To use Amazon Sagemaker training we must first transfer our input data to Amazon S3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3train_lst = 's3://{}/{}/train_lst/'.format(BUCKET, PREFIX)\n", "s3validation_lst = 's3://{}/{}/validation_lst/'.format(BUCKET, PREFIX)\n", "\n", "# upload the lst files to train_lst and validation_lst channels\n", "!aws s3 cp trash_train.lst $s3train_lst --quiet\n", "!aws s3 cp trash_val.lst $s3validation_lst --quiet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Retrieve dataset size" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see the size of train, validation and test datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "f = open('trash_train.lst', 'r')\n", "train_samples = sum(1 for line in f)\n", "f.close()\n", "f = open('trash_val.lst', 'r')\n", "val_samples = sum(1 for line in f)\n", "f.close()\n", "f = open('trash_test.lst', 'r')\n", "test_samples = sum(1 for line in f)\n", "f.close()\n", "print('train_samples:', train_samples)\n", "print('val_samples:', val_samples)\n", "print('test_samples:', test_samples)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This marks the end of the data preparation phase." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Train the model\n", "\n", "Training a good model from scratch can take a long time. Fortunately, we're able to use transfer learning to fine-tune a model that has been trained on millions of images. Transfer learning allows us to train a model to recognize new classes in minutes instead of hours or days that it would normally take to train the model from scratch. Transfer learning requires a lot less data to train a model than from scratch (hundreds instead of tens of thousands)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fine-tuning the Image Classification Model\n", "Now that we are done with all the setup that is needed, we are ready to train our trash detector. To begin, let us create a ``sageMaker.estimator.Estimator`` object. This estimator will launch the training job.\n", "### Training parameters\n", "There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include:\n", "\n", "* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, then the image classification algorithm will run in distributed settings. \n", "* **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for these training \n", "* **Output path**: This the s3 folder in which the training output is stored" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3_output_location = 's3://{}/{}/output'.format(BUCKET, PREFIX)\n", "ic = sagemaker.estimator.Estimator(training_image,\n", " role, \n", " train_instance_count=1, \n", " train_instance_type='ml.p2.xlarge',\n", " train_volume_size = 50,\n", " train_max_run = 360000,\n", " input_mode= 'File',\n", " output_path=s3_output_location,\n", " sagemaker_session=sess,\n", " base_job_name='ic-trash')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:\n", "\n", "* **num_layers**: The number of layers (depth) for the network. We use 18 in this samples but other values such as 50, 152 can be used.\n", "* **use_pretrained_model**: Set to 1 to use pretrained model for transfer learning.\n", "* **image_shape**: The input image dimensions,'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be same as the actual image.\n", "* **num_classes**: This is the number of output classes for the new dataset. For us, we have \n", "* **num_training_samples**: This is the total number of training samples. It is set to 15240 for caltech dataset with the current split.\n", "* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.\n", "* **epochs**: Number of training epochs.\n", "* **learning_rate**: Learning rate for training.\n", "* **top_k**: Report the top-k accuracy during training.\n", "* **resize**: Resize the image before using it for training. The images are resized so that the shortest side is of this parameter. If the parameter is not set, then the training data is used as such without resizing.\n", "* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ic.set_hyperparameters(num_layers=18,\n", " use_pretrained_model=1,\n", " image_shape = \"3,224,224\",\n", " num_classes=3,\n", " mini_batch_size=128,\n", " epochs=10,\n", " learning_rate=0.01,\n", " top_k=2,\n", " num_training_samples=train_samples,\n", " resize = 224,\n", " precision_dtype='float32')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Input data specification\n", "Set the data type and channels used for training" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3images = 's3://{}/{}/images/'.format(BUCKET, PREFIX)\n", "\n", "train_data = sagemaker.session.s3_input(s3images, distribution='FullyReplicated', \n", " content_type='application/x-image', s3_data_type='S3Prefix')\n", "validation_data = sagemaker.session.s3_input(s3images, distribution='FullyReplicated', \n", " content_type='application/x-image', s3_data_type='S3Prefix')\n", "train_data_lst = sagemaker.session.s3_input(s3train_lst, distribution='FullyReplicated', \n", " content_type='application/x-image', s3_data_type='S3Prefix')\n", "validation_data_lst = sagemaker.session.s3_input(s3validation_lst, distribution='FullyReplicated', \n", " content_type='application/x-image', s3_data_type='S3Prefix')\n", "\n", "data_channels = {'train': train_data, 'validation': validation_data, \n", " 'train_lst': train_data_lst, 'validation_lst': validation_data_lst}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start the training\n", "Start training by calling the fit method in the estimator" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ic.fit(inputs=data_channels, logs=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The output from the above command will have the model accuracy and the time it took to run the training. \n", "#### You can also view these details by navigating to ``Training -> Training Jobs -> job_name -> View logs`` in the Amazon SageMaker console " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model trained above can now be found in the `s3:////output` path." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_PATH = ic.model_data\n", "print(MODEL_PATH)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy to a Sagemaker endpoint\n", "After training your model is complete, you can test your model by asking it to predict the class of a sample trash image that the model has not seen before. This step is called inference.\n", "\n", "Amazon SageMaker provides an HTTPS endpoint where your machine learning model is available to provide inferences. For more information see the [Amazon SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ic_infer = ic.deploy(initial_instance_count=1, instance_type='local')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test the images against the endpoint\n", "We will use the test images that were kept aside for testing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "object_categories = ['Compost', 'Landfill', 'Recycling']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import Image, display\n", "import json\n", "import numpy as np\n", "\n", "\n", "def test_model():\n", " preds = []\n", " acts = []\n", " num_errors = 0\n", " with open('trash_test.lst', 'r') as f:\n", " for line in f:\n", " stripped_line = str(line.strip()).split(\"\\t\")\n", " file_path = stripped_line[2]\n", " category = int(float(stripped_line[1]))\n", " with open(IMAGES_DIR + stripped_line[2], 'rb') as f:\n", " payload = f.read()\n", " payload = bytearray(payload)\n", "\n", " ic_infer.content_type = 'application/x-image'\n", " result = json.loads(ic_infer.predict(payload))\n", " # the result will output the probabilities for all classes\n", " # find the class with maximum probability and print the class index\n", " index = np.argmax(result)\n", " act = object_categories[category]\n", " pred = object_categories[index]\n", " conf = result[index]\n", " print(\"Result: Predicted: {}, Confidence: {:.2f}, Actual: {} \".format(pred, conf, act))\n", " acts.append(category)\n", " preds.append(index)\n", " if (pred != act):\n", " num_errors += 1\n", " print('ERROR on image -- Predicted: {}, Confidence: {:.2f}, Actual: {}'.format(pred, conf, act))\n", " display(Image(filename=IMAGES_DIR + stripped_line[2], width=100, height=100))\n", "\n", " return num_errors, preds, acts\n", "num_errors, preds, acts = test_model()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import confusion_matrix\n", "import numpy as np\n", "import itertools\n", "COLOR = 'green'\n", "plt.rcParams['text.color'] = COLOR\n", "plt.rcParams['axes.labelcolor'] = COLOR\n", "plt.rcParams['xtick.color'] = COLOR\n", "plt.rcParams['ytick.color'] = COLOR\n", "def plot_confusion_matrix(cm, classes,\n", " class_name_list,\n", " normalize=False,\n", " title='Confusion matrix',\n", " cmap=plt.cm.GnBu):\n", " plt.figure(figsize=(7,7))\n", " plt.grid(False)\n", "\n", " plt.imshow(cm, interpolation='nearest', cmap=cmap)\n", " plt.title(title)\n", " tick_marks = np.arange(len(classes))\n", " plt.xticks(tick_marks, classes, rotation=45)\n", " plt.yticks(tick_marks, classes)\n", " fmt = '.2f' if normalize else 'd'\n", " thresh = cm.max() / 2.\n", " for i, j in itertools.product(range(cm.shape[0]), \n", " range(cm.shape[1])):\n", " plt.text(j, i, format(cm[i, j], fmt),\n", " horizontalalignment=\"center\",\n", " color=\"white\" if cm[i, j] > thresh else \"black\")\n", " plt.tight_layout()\n", " plt.gca().set_xticklabels(class_name_list)\n", " plt.gca().set_yticklabels(class_name_list)\n", " plt.ylabel('True label')\n", " plt.xlabel('Predicted label')\n", "\n", "def create_and_plot_confusion_matrix(actual, predicted):\n", " cnf_matrix = confusion_matrix(actual, np.asarray(predicted),labels=range(len(object_categories)))\n", " plot_confusion_matrix(cnf_matrix, classes=range(len(object_categories)), class_name_list=object_categories)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Display confusion matrix showing 'true' and 'predicted' labels\n", "\n", "A confusion matrix is a table that is often used to describe the performance of a classification model (or \"classifier\") on a set of test data for which the true values are known. It's a table with two dimensions (\"actual\" and \"predicted\"), and identical sets of \"classes\" in both dimensions (each combination of dimension and class is a variable in the contingency table). The diagonal values in the table indicate a match between the predicted class and the actual class. \n", "\n", "For more details go to [Confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) (Wikipedia)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_and_plot_confusion_matrix(acts, preds)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Approximate costs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As of 03/11/2020 and based on the pricing information displayed on the page: https://aws.amazon.com/sagemaker/pricing/, here's the costs you can expect in a 24 hour period:\n", "\n", " - Notebook instance cost **\\\\$6** Assuming you choose ml.t3.xlarge (\\\\$0.233/hour) instance. This can vary based on the size of instance you choose.\n", " - Training costs **\\\\$1.05** : Assuming you will run about 10 training runs in a 24 hour period using the sample dataset provided. The notebook uses a p2.xlarge (\\\\$1.26/hour) instance\n", " - Model hosting **\\\\$6.72** : Assuming you use the ml.m4.xlarge (\\\\$0.28/hour) instance running for 24 hours. \n", " \n", "*NOTE* : To save on costs, stop your notebook instances and delete the model edpoint when not in use" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## (Optional) Clean-up" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're ready to be done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sess.delete_endpoint(ic_infer.endpoint)\n", "print(\"Completed\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Rename model to deploy to AWS DeepLens\n", "The MxNet model that is stored in the S3 bucket contains 2 files: the params file and a symbol.json file. To simplify deployment to AWS DeepLens, we'll modify the params file so that you do not need to specify the number of epochs the model was trained for." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import glob" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!rm -rf data/$PREFIX/tmp && mkdir -p data/$PREFIX/tmp\n", "!aws s3 cp $MODEL_PATH data/$PREFIX/tmp\n", "!tar -xzvf data/$PREFIX/tmp/model.tar.gz -C data/$PREFIX/tmp" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "params_file_name = glob.glob('./data/' + PREFIX + '/tmp/*.params')[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!mv $params_file_name data/$PREFIX/tmp/image-classification-0000.params" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!tar -cvzf ./model.tar.gz -C data/$PREFIX/tmp ./image-classification-0000.params ./image-classification-symbol.json" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws s3 cp model.tar.gz $MODEL_PATH" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next steps\n", "\n", "At this point, you have completed:\n", "* Training a model with Amazon Sagemaker using transfer learning\n", "\n", "Next you'll deploy this model to AWS DeepLens. If you have started this notebook as part of a tutorial, please go back to the next step in the tutorial. If you have found this notebook through other channels, please go to [awsdeeplens.recipes](http://awsdeeplens.recipes) and select the Trash Detector tutorial to continue." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_mxnet_p36", "language": "python", "name": "conda_mxnet_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 4 }