{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# IoT Greengrass image classification model training and retraining\n", "\n", "1. [Part 1: initial training](#Part-1:-Initial-Training)\n", " 1. [Prerequisites and preprocessing](#Prequisites-and-Preprocessing)\n", " 1. [Permissions and environment variables](#Permissions-and-environment-variables)\n", " 2. [Data preparation](#Data-preparation)\n", " 3. [Create S3 folders for field data](#Create-S3-folders-for-field-data)\n", " 2. [Training parameters](#Training-parameters)\n", " 3. [Training](#Training)\n", "2. [Part 2: Retraining the model](#Part-2:-Retraining-the-model)\n", " 1. [Data preparation](#data-preparation)\n", " 2. [Retraining](#Retraining)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Welcome to the \"Machine learning at the edge - using and retraining image classification with AWS IoT Greengrass\" notebook. This should serve as a resource alongside the blog post. This notebook will walk you through step by step how to:\n", "1. Configure a model for image classification using the [Caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). \n", "2. Retrain a model with images you capture on your IoT Greengrass core device.\n", "\n", "Both of these correspond to parts 1 and 2 of the blog post.\n", "\n", "*Note: This notebook is a modified version of Amazon SageMaker's image classification sample notebook. Please refer to the SageMaker example notebooks for more details about using the service.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: Initial training\n", "\n", "### Prequisites and preprocessing\n", "\n", "#### Permissions and environment variables\n", "\n", "Here we set up the linkage and authentication to AWS services. There are three parts to this:\n", "* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook\n", "* The S3 bucket that you want to use for training and model data\n", "* The Amazon sagemaker image classification docker image which need not be changed" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%time\n", "import boto3\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "from sagemaker.amazon.amazon_estimator import get_image_uri\n", "\n", "role = get_execution_role()\n", "print(role)\n", "\n", "sess = sagemaker.Session()\n", "bucket=sess.default_bucket()\n", "print(bucket)\n", "\n", "training_image = get_image_uri(boto3.Session().region_name, 'image-classification')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Data preparation\n", "The Caltech 256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images with a minimum of 80 images and a maximum of about 800 images per category.\n", "\n", "We will leverage a subset of the Caltech dataset for our example (beer-mug, wine-bottle, coffee-mug, soda-can, and clutter). The following will download the full dataset, extract the subset of categories, and create our model in the [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec) (content type: application/x-image).\n", "\n", "A .lst file is a tab-separated file with three columns that contains a list of image files. 
The first column specifies the image index, the second column specifies the class label index for the image, and the third column specifies the relative path of the image file. The image index in the first column should be unique across all of the images. Here we make an image list file using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool from MXNet. To train with the lst format interface, you must pass lst files for both training and validation in the appropriate format. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import urllib.request\n", "\n", "def download(url):\n", "    filename = url.split(\"/\")[-1]\n", "    if not os.path.exists(filename):\n", "        urllib.request.urlretrieve(url, filename)\n", "\n", "# Caltech-256 image files\n", "download('http://www.vision.caltech.edu/Image_Datasets/Caltech256/256_ObjectCategories.tar')\n", "!tar -xf 256_ObjectCategories.tar\n", "\n", "# Tool for creating lst file\n", "download('https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/im2rec.py')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "\n", "# Extract the subset of categories used for this example. We\n", "# will only need beer-mug, coffee-mug, wine-bottle, soda-can, and clutter\n", "\n", "# Clean up any existing folders left behind by previous runs\n", "rm -rf category_subset\n", "rm -rf caltech_256_train_60\n", "\n", "# Re-indexes the given category folders and their image files. This\n", "# will be useful when we add more data and/or more\n", "# classes during model retraining\n", "reindex_categories() {\n", "    folder_index=0\n", "    for category_folder in $1/*; do\n", "        category_name=`basename $category_folder | cut -d'.' -f2`\n", "        new_folder_index=`printf '%03d' $folder_index`\n", "        new_folder_name='category_subset/'$new_folder_index'.'$category_name\n", "        mv $category_folder $new_folder_name\n", "        image_index=0\n", "        for image_file in $new_folder_name/*; do\n", "            new_image_name=`printf '%04d' $image_index`\n", "            new_image_name=$new_folder_index'_'$new_image_name'.jpg'\n", "            mv $image_file $new_folder_name/$new_image_name\n", "            ((image_index++))\n", "        done\n", "        ((folder_index++))\n", "    done\n", "}\n", "\n", "mkdir -p category_subset\n", "\n", "# The Caltech dataset is indexed for all 257 categories. We will\n", "# only be using 5 for our example. Copy the 5 categories to a new folder\n", "# and rename them to have the proper indices in their names - i.e.\n", "# 010.beer-mug -> 000.beer-mug (and sub files)\n", "# 041.coffee-mug -> 002.coffee-mug (and sub files)\n", "cp -r 256_ObjectCategories/010.beer-mug/. category_subset/beer-mug/\n", "cp -r 256_ObjectCategories/041.coffee-mug/. category_subset/coffee-mug/\n", "cp -r 256_ObjectCategories/195.soda-can/. category_subset/soda-can/\n", "cp -r 256_ObjectCategories/246.wine-bottle/. category_subset/wine-bottle/\n", "cp -r 256_ObjectCategories/257.clutter/. category_subset/clutter/\n", "reindex_categories category_subset\n", "\n", "# Take 60 images from each category and put them in a folder\n", "# dedicated to training images. 
Use the remaining images in\n", "# each folder for validation.\n", "mkdir -p caltech_256_train_60\n", "for i in category_subset/*; do\n", "    c=`basename $i`\n", "    mkdir -p caltech_256_train_60/$c\n", "    for j in `ls $i/*.jpg | shuf | head -n 60`; do\n", "        mv $j caltech_256_train_60/$c/\n", "    done\n", "done\n", "\n", "python im2rec.py --list --recursive caltech-256-60-train caltech_256_train_60/\n", "python im2rec.py --list --recursive caltech-256-60-val category_subset/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A sample of the lst file we created can be viewed by running the cell below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!head -n 15 ./caltech-256-60-val.lst > example.lst\n", "f = open('example.lst','r')\n", "lst_content = f.read()\n", "print(lst_content)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we have the data available in the correct format for training, the next step is to upload the images and .lst files to your S3 bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Four channels: train, validation, train_lst, and validation_lst\n", "s3train = 's3://{}/image-classification/train/'.format(bucket)\n", "s3validation = 's3://{}/image-classification/validation/'.format(bucket)\n", "s3train_lst = 's3://{}/image-classification/train_lst/'.format(bucket)\n", "s3validation_lst = 's3://{}/image-classification/validation_lst/'.format(bucket)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# upload the image files to train and validation channels\n", "!aws s3 cp caltech_256_train_60 $s3train --recursive --quiet\n", "!aws s3 cp category_subset $s3validation --recursive --quiet\n", "\n", "# upload the lst files to train_lst and validation_lst channels\n", "!aws s3 cp caltech-256-60-train.lst $s3train_lst --quiet\n", "!aws s3 cp caltech-256-60-val.lst $s3validation_lst --quiet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create S3 folders for field data\n", "In Part 2 we will collect data in the field. These images start out unlabeled in the raw_field_data folder in the S3 bucket. You can label these images by moving them to the correct category folders under the labeled_field_data folder. The following cell creates placeholders for these folders." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Folders for S3 field data\n", "s3fielddata = 's3://{}/image-classification/labeled_field_data/'.format(bucket)\n", "\n", "# Set up for retraining. empty.tmp is added to each folder so that a visible\n", "# folder is created in S3.\n", "!mkdir -p field_data/beer-mug && touch field_data/beer-mug/empty.tmp\n", "!mkdir -p field_data/coffee-mug && touch field_data/coffee-mug/empty.tmp\n", "!mkdir -p field_data/soda-can && touch field_data/soda-can/empty.tmp\n", "!mkdir -p field_data/wine-bottle && touch field_data/wine-bottle/empty.tmp\n", "!mkdir -p field_data/clutter && touch field_data/clutter/empty.tmp\n", "\n", "!aws s3 cp --recursive field_data $s3fielddata" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training parameters\n", "The following parameters configure our training job. They are consumed in the next section, where the training_params object is constructed."
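, "\n",
"\n",
"Before kicking off a job, it can be worth sanity-checking that `num_training_samples` (set below) matches the number of images actually staged for training. The following is only a sketch and assumes the `caltech_256_train_60` folder created during data preparation is still present on the notebook instance:\n",
"\n",
"```python\n",
"import glob\n",
"\n",
"# Count the images staged for the train channel; with the default split this\n",
"# should equal num_training_samples (60 images * 5 categories = 300).\n",
"print(len(glob.glob('caltech_256_train_60/*/*.jpg')))\n",
"```"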
] }, { "cell_type": "code", "execution_count": null, "metadata": { "isConfigCell": true }, "outputs": [], "source": [ "# The algorithm supports multiple network depth (number of layers). They are 18, 34, 50, \n", "#101, 152 and 200. For this training, we will use 18 layers.\n", "num_layers = 18\n", "# The input image dimensions,'num_channels, height, width', for the network. It should be \n", "# no larger than the actual image size. The number of channels should be same as the actual\n", "# image.\n", "image_shape = \"3,224,224\"\n", "# This is the total number of training samples. It is set to 300 (60 samples * 5 categories)\n", "num_training_samples = 300\n", "# This is the number of output classes for the new dataset: beer-mug, clutter, coffee-mug, wine-bottle, soda-can,\n", "num_classes = 5\n", "# The number of training samples used for each mini batch. In distributed training, the \n", "# number of training samples used per batch will be N * mini_batch_size where N is the number \n", "# of hosts on which training is run.\n", "mini_batch_size = 128\n", "# Number of training epochs.\n", "epochs = 6\n", "# Learning rate for training.\n", "learning_rate = 0.01\n", "# Report the top-k accuracy during training.\n", "top_k = 5\n", "# Resize the image before using it for training. The images are resized so that the shortest \n", "# side is of this parameter. If the parameter is not set, then the training data is used as \n", "# such without resizing.\n", "resize = 256\n", "# period to store model parameters (in number of epochs), in this case, we will save parameters \n", "# from epoch 2, 4, and 6\n", "checkpoint_frequency = 2\n", "# Since we are using transfer learning, we set use_pretrained_model to 1 so that weights can be \n", "# initialized with pre-trained weights. We aren't using a large number of input samples. Therefore, \n", "# we can benefit from using transfer learning to leverage pre-trained weights that have been \n", "# collected on a much larger dataset.\n", "# See: https://docs.aws.amazon.com/sagemaker/latest/dg/IC-HowItWorks.html\n", "use_pretrained_model = 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training\n", "Below creates three functions that will support the configuration and execution of our training jobs throughout the rest of this notebook (initial training and retraining)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "import time\n", "import boto3\n", "from time import gmtime, strftime\n", "\n", "s3 = boto3.client('s3')\n", "sagemaker = boto3.client(service_name='sagemaker')\n", "\n", "JOB_NAME_PREFIX = 'greengrass-imageclassification-training'\n", " \n", "def create_unique_job_name():\n", " '''\n", " Creates a job name in the following format:\n", " greengrass-imageclassification-training-[year]-[month]-[day]-[hour]-[minute]-[second]\n", " '''\n", " timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())\n", " job_name = JOB_NAME_PREFIX + timestamp\n", " return job_name\n", "\n", "def create_training_params(unique_job_name):\n", " '''\n", " Constructs training parameters for the train function\n", " below.\n", " '''\n", " training_params = \\\n", " {\n", " # specify the training docker image\n", " \"AlgorithmSpecification\": {\n", " \"TrainingImage\": training_image,\n", " \"TrainingInputMode\": \"File\"\n", " },\n", " \"RoleArn\": role,\n", " \"OutputDataConfig\": {\n", " \"S3OutputPath\": 's3://{}/{}/output'.format(bucket, JOB_NAME_PREFIX)\n", " },\n", " \"ResourceConfig\": {\n", " \"InstanceCount\": 1,\n", " \"InstanceType\": \"ml.p2.xlarge\",\n", " \"VolumeSizeInGB\": 50\n", " },\n", " \"TrainingJobName\": unique_job_name,\n", " \"HyperParameters\": {\n", " \"image_shape\": image_shape,\n", " \"num_layers\": str(num_layers),\n", " \"num_training_samples\": str(num_training_samples),\n", " \"num_classes\": str(num_classes),\n", " \"mini_batch_size\": str(mini_batch_size),\n", " \"epochs\": str(epochs),\n", " \"learning_rate\": str(learning_rate),\n", " \"top_k\": str(top_k),\n", " \"resize\": str(resize),\n", " \"checkpoint_frequency\": str(checkpoint_frequency),\n", " \"use_pretrained_model\": str(use_pretrained_model) \n", " },\n", " \"StoppingCondition\": {\n", " \"MaxRuntimeInSeconds\": 360000\n", " },\n", " #Training data should be inside a subdirectory called \"train\"\n", " #Validation data should be inside a subdirectory called \"validation\"\n", " #The algorithm currently only supports fullyreplicated model (where data is copied onto each machine)\n", " \"InputDataConfig\": [\n", " {\n", " \"ChannelName\": \"train\",\n", " \"DataSource\": {\n", " \"S3DataSource\": {\n", " \"S3DataType\": \"S3Prefix\",\n", " \"S3Uri\": s3train,\n", " \"S3DataDistributionType\": \"FullyReplicated\"\n", " }\n", " },\n", " \"ContentType\": \"application/x-image\",\n", " \"CompressionType\": \"None\"\n", " },\n", " {\n", " \"ChannelName\": \"validation\",\n", " \"DataSource\": {\n", " \"S3DataSource\": {\n", " \"S3DataType\": \"S3Prefix\",\n", " \"S3Uri\": s3validation,\n", " \"S3DataDistributionType\": \"FullyReplicated\"\n", " }\n", " },\n", " \"ContentType\": \"application/x-image\",\n", " \"CompressionType\": \"None\"\n", " },\n", " {\n", " \"ChannelName\": \"train_lst\",\n", " \"DataSource\": {\n", " \"S3DataSource\": {\n", " \"S3DataType\": \"S3Prefix\",\n", " \"S3Uri\": s3train_lst,\n", " \"S3DataDistributionType\": \"FullyReplicated\"\n", " }\n", " },\n", " \"ContentType\": \"application/x-image\",\n", " \"CompressionType\": \"None\"\n", " },\n", " {\n", " \"ChannelName\": \"validation_lst\",\n", " \"DataSource\": {\n", " \"S3DataSource\": {\n", " \"S3DataType\": \"S3Prefix\",\n", " \"S3Uri\": s3validation_lst,\n", " \"S3DataDistributionType\": \"FullyReplicated\"\n", " }\n", " },\n", " \"ContentType\": \"application/x-image\",\n", " \"CompressionType\": \"None\"\n", " }\n", " ]\n", " }\n", 
" return training_params\n", "\n", "def train(job_name, training_params):\n", " '''\n", " Creates a training job, job_name, configured with\n", " training_params.\n", " '''\n", " # create the Amazon SageMaker training job\n", " sagemaker.create_training_job(**training_params)\n", "\n", " # confirm that the training job has started\n", " status = sagemaker.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']\n", " print('Training job current status: {}'.format(status))\n", "\n", " try:\n", " # wait for the job to finish and report the ending status\n", " sagemaker.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)\n", " training_info = sagemaker.describe_training_job(TrainingJobName=job_name)\n", " status = training_info['TrainingJobStatus']\n", " print(\"Training job ended with status: \" + status)\n", " except:\n", " print('Training failed to start')\n", " # if exception is raised, that means it has failed\n", " message = sagemaker.describe_training_job(TrainingJobName=job_name)['FailureReason']\n", " print('Training failed with the following error: {}'.format(message))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create a training job and execute\n", "initial_training_job_name = create_unique_job_name()\n", "initial_training_params = create_training_params(initial_training_job_name)\n", "print('Training job name: {}'.format(initial_training_job_name))\n", "\n", "train(initial_training_job_name, initial_training_params)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can monitor the status of the training job by running the code below. You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the \"Jobs\" tab. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_info = sagemaker.describe_training_job(TrainingJobName=initial_training_job_name)\n", "status = training_info['TrainingJobStatus']\n", "print(\"Training job ended with status: \" + status)\n", "print(training_info)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you see the message,\n", "\n", "> `Training job ended with status: Completed`\n", "\n", "then that means training sucessfully completed and the output model was stored in the output path specified by `training_params['OutputDataConfig']`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***This is the end of Part 1. Please return to the blog post and continue from there.***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Retraining the model\n", "\n", "At this point we have an IoT Greengrass core device capable of capturing images, performing inference, and uploading results to S3. In part 2, we will retrain our model to use the new data captured in the field using our IoT Greengrass core device.\n", "\n", "Note, in this example we will be creating a new model with a combination of our original and new training data. Alternatively iterative training can be used. See the [SageMaker Image Classification Algorithm Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html) for more details." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data preparation\n", "In this step we will access our S3 bucket and pull down the training data collected in the field. 
We will add this data to our original dataset and regenerate our training/validation image files." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sync s3 labeled field data with local fieldData folder\n", "!aws s3 sync $s3fielddata ./field_data\n", "\n", "# remove empty.tmp from the local field_data folder\n", "!rm -f field_data/beer-mug/empty.tmp\n", "!rm -f field_data/coffee-mug/empty.tmp\n", "!rm -f field_data/soda-can/empty.tmp\n", "!rm -f field_data/wine-bottle/empty.tmp\n", "!rm -f field_data/clutter/empty.tmp" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "# Re indexes the given folders and sub image files. This\n", "# will be useful when we add more data and/or more \n", "# classes during model retraining\n", "reindex_categories() {\n", " folder_index=0\n", " for category_folder in $1/*; do \n", " category_name=`basename $category_folder | cut -d'.' -f2`\n", " new_folder_index=`printf '%03d' $folder_index`\n", " new_folder_name='category_subset/'$new_folder_index'.'$category_name\n", " mv $category_folder $new_folder_name\n", " image_index=0\n", " for image_file in $new_folder_name/*; do\n", " new_image_name=`printf '%04d' $image_index`\n", " new_image_name=$new_folder_index'_'$new_image_name'.jpg'\n", " mv $image_file $new_folder_name/$new_image_name\n", " ((image_index++))\n", " done\n", " ((folder_index++))\n", " done\n", "}\n", "\n", "# Clean up any existing folders left behind by previous runs\n", "rm -rf category_subset\n", "rm -rf caltech_256_train_60\n", "\n", "# Copy over category subset again\n", "mkdir -p category_subset\n", "cp -r 256_ObjectCategories/010.beer-mug/. category_subset/beer-mug/\n", "cp -r 256_ObjectCategories/041.coffee-mug/. category_subset/coffee-mug/\n", "cp -r 256_ObjectCategories/195.soda-can/. category_subset/soda-can/\n", "cp -r 256_ObjectCategories/246.wine-bottle/. category_subset/wine-bottle/\n", "cp -r 256_ObjectCategories/257.clutter/. category_subset/clutter/\n", "\n", "# Copy contents of field data into category subset\n", "cp -r field_data/beer-mug/. category_subset/beer-mug/\n", "cp -r field_data/coffee-mug/. category_subset/coffee-mug/\n", "cp -r field_data/soda-can/. category_subset/soda-can/\n", "cp -r field_data/wine-bottle/. category_subset/wine-bottle/\n", "cp -r field_data/clutter/. category_subset/clutter/\n", "\n", "reindex_categories category_subset\n", "\n", "# Take 60 images from each category and put them in a folder\n", "# dedicated to training images. 
Use the remaining images in\n", "# each folder for validation.\n", "mkdir -p caltech_256_train_60\n", "for i in category_subset/*; do\n", "    c=`basename $i`\n", "    mkdir -p caltech_256_train_60/$c\n", "    for j in `ls $i/*.jpg | shuf | head -n 60`; do\n", "        mv $j caltech_256_train_60/$c/\n", "    done\n", "done\n", "\n", "python im2rec.py --list --recursive caltech-256-60-train caltech_256_train_60/\n", "python im2rec.py --list --recursive caltech-256-60-val category_subset/" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# cleanup existing training data in S3 (--recursive is needed to remove\n", "# all objects under each prefix)\n", "!aws s3 rm $s3train --recursive\n", "!aws s3 rm $s3validation --recursive\n", "!aws s3 rm $s3train_lst --recursive\n", "!aws s3 rm $s3validation_lst --recursive\n", "\n", "# upload the image files to train and validation channels\n", "!aws s3 cp caltech_256_train_60 $s3train --recursive\n", "!aws s3 cp category_subset $s3validation --recursive\n", "\n", "# upload the lst files to train_lst and validation_lst channels\n", "!aws s3 cp caltech-256-60-train.lst $s3train_lst\n", "!aws s3 cp caltech-256-60-val.lst $s3validation_lst" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Retraining" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create a new training job and execute\n", "re_training_job_name = create_unique_job_name()\n", "re_training_params = create_training_params(re_training_job_name)\n", "print('Training job name: {}'.format(re_training_job_name))\n", "print('\\nInput Data Location: {}'.format(re_training_params['InputDataConfig'][0]['DataSource']['S3DataSource']))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train(re_training_job_name, re_training_params)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The code in this section can be rerun at any time to generate a new model using the field data uploaded to S3.\n", "\n", "**Return to the blog post to continue!**" ] } ], "metadata": { "kernelspec": { "display_name": "conda_mxnet_p36", "language": "python", "name": "conda_mxnet_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" }, "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." }, "nbformat": 4, "nbformat_minor": 2 }