{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Train and Host a Keras Sequential Model\n", "## Using Pipe Mode datasets and distributed training with Horovod\n", "This notebook shows how to train and host a Keras Sequential model on SageMaker. The model used for this notebook is a simple deep CNN that was extracted from [the Keras examples](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The dataset\n", "The [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) is one of the most popular machine learning datasets. It consists of 60,000 32x32 images belonging to 10 different classes (6,000 images per class). Here are the classes in the dataset, as well as 10 random images from each:\n", "\n", "![cifar10](https://maet3608.github.io/nuts-ml/_images/cifar10.png)\n", "\n", "In this tutorial, we will train a deep CNN to recognize these images.\n", "\n", "We'll compare trainig with file mode, pipe mode datasets and distributed training with Horovod" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up the environment" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "sagemaker_session = sagemaker.Session()\n", "\n", "role = get_execution_role()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download the CIFAR-10 dataset\n", "Downloading the test and training data takes around 5 minutes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!python generate_cifar10_tfrecords.py --data-dir ./data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a training job using the sagemaker.TensorFlow estimator, running locally\n", "To test that the code will work in SageMaker, we'll first use SageMaker local mode." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow import TensorFlow\n", "\n", "import subprocess\n", "instance_type = 'local'\n", "\n", "if subprocess.call('nvidia-smi') == 0:\n", " ## Set type to GPU if one is present\n", " instance_type = 'local_gpu'\n", " \n", "local_hyperparameters = {'epochs': 2, 'batch-size' : 64}\n", "\n", "source_dir = os.path.join(os.getcwd(), 'source_dir')\n", "estimator = TensorFlow(entry_point='cifar10_keras_main.py',\n", " source_dir=source_dir,\n", " role=role,\n", " framework_version='1.12.0',\n", " py_version='py3',\n", " hyperparameters=local_hyperparameters,\n", " train_instance_count=1, train_instance_type=instance_type)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "local_inputs = {'train' : 'file://'+os.getcwd()+'/data/train', \n", " 'validation' : 'file://'+os.getcwd()+'/data/validation', \n", " 'eval' : 'file://'+os.getcwd()+'/data/eval'}\n", "estimator.fit(local_inputs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run on SageMaker cloud" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Uploading the data to s3" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataset_location = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-cifar10-tf')\n", "display(dataset_location)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Configuring metrics from the job logs\n", "SageMaker can get training metrics directly from the logs and send them to CloudWatch metrics." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "keras_metric_definition = [\n", " {'Name': 'train:loss', 'Regex': '.*loss: ([0-9\\\\.]+) - acc: [0-9\\\\.]+.*'},\n", " {'Name': 'train:accuracy', 'Regex': '.*loss: [0-9\\\\.]+ - acc: ([0-9\\\\.]+).*'},\n", " {'Name': 'validation:accuracy', 'Regex': '.*step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: [0-9\\\\.]+ - val_acc: ([0-9\\\\.]+).*'},\n", " {'Name': 'validation:loss', 'Regex': '.*step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: ([0-9\\\\.]+) - val_acc: [0-9\\\\.]+.*'},\n", " {'Name': 'sec/steps', 'Regex': '.* - \\d+s (\\d+)[mu]s/step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: [0-9\\\\.]+ - val_acc: [0-9\\\\.]+'}\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Train image classification based on the cifar10 dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hyperparameters = {'epochs': 10, 'batch-size' : 256}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow import TensorFlow\n", "\n", "source_dir = os.path.join(os.getcwd(), 'source_dir')\n", "estimator = TensorFlow(base_job_name='cifar10-tf',\n", " entry_point='cifar10_keras_main.py',\n", " source_dir=source_dir,\n", " role=role,\n", " framework_version='1.12.0',\n", " py_version='py3',\n", " hyperparameters=hyperparameters,\n", " train_instance_count=1, train_instance_type='ml.p3.2xlarge',\n", " tags = [{'Key' : 'Project', 'Value' : 'cifar10'},{'Key' : 'TensorBoard', 'Value' : 'file'}],\n", " metric_definitions=keras_metric_definition)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "remote_inputs = {'train' : dataset_location+'/train', 'validation' : dataset_location+'/validation', 'eval' : dataset_location+'/eval'}\n", "estimator.fit(remote_inputs, wait=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### View the job training metrics\n", "SageMaker used the regular expression configured above, to send the job metrics to CloudWatch metrics.\n", "You can now view the job metrics directly from the SageMaker console. \n", "\n", "login to the [SageMaker console](https://console.aws.amazon.com/sagemaker/home) choose the latest training job, scroll down to the monitor section. \n", "Using CloudWatch metrics, you can change the period and configure the statistics" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.core.display import Markdown\n", "\n", "link = 'https://console.aws.amazon.com/cloudwatch/home?region='+sagemaker_session.boto_region_name+'#metricsV2:query=%7B/aws/sagemaker/TrainingJobs,TrainingJobName%7D%20'+estimator.latest_training_job.job_name\n", "display(Markdown('CloudWatch metrics: [link]('+link+')'))\n", "display(Markdown('After you choose a metric, change the period to 1 Minute (Graphed Metrics -> Period)'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run on SageMaker with Pipe Mode input\n", "SageMaker Pipe Mode is a mechanism for providing S3 data to a training job via Linux fifos. Training programs can read from the fifo and get high-throughput data transfer from S3, without managing the S3 access in the program itself.\n", "Pipe Mode is covered in more detail in the SageMaker [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html#your-algorithms-training-algo-running-container-inputdataconfig)\n", "\n", "in out script, we enabled Pipe Mode using the following code:\n", "```python\n", "from sagemaker_tensorflow import PipeModeDataset\n", "dataset = PipeModeDataset(channel=channel_name, record_format='TFRecord')\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow import TensorFlow\n", "\n", "source_dir = os.path.join(os.getcwd(), 'source_dir')\n", "estimator_pipe = TensorFlow(base_job_name='pipe-cifar10-tf',\n", " entry_point='cifar10_keras_main.py',\n", " source_dir=source_dir,\n", " role=role,\n", " framework_version='1.12.0',\n", " py_version='py3',\n", " hyperparameters=hyperparameters,\n", " train_instance_count=1, train_instance_type='ml.p3.2xlarge',\n", " tags = [{'Key' : 'Project', 'Value' : 'cifar10'},{'Key' : 'TensorBoard', 'Value' : 'pipe'}],\n", " metric_definitions=keras_metric_definition,\n", " input_mode='Pipe')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we set ```wait=False``` if you want to see the output logs, change this to ```wait=True```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "remote_inputs = {'train' : dataset_location+'/train/train.tfrecords', 'validation' : dataset_location+'/validation', 'eval' : dataset_location+'/eval'}\n", "estimator_pipe.fit(remote_inputs, wait=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we specified the exact filename of training data rather than just the folder name." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Distributed training with Horovod\n", "Horovod is a distributed training framework based on MPI. Horovod is only available with TensorFlow version 1.12 or newer. You can find more details at [Horovod README](https://github.com/horovod/horovod/blob/master/README.rst).\n", "\n", "To enable Horovod, we need to add the following code to our script:\n", "```python\n", "import horovod.keras as hvd\n", "hvd.init()\n", "config = tf.ConfigProto()\n", "config.gpu_options.allow_growth = True\n", "config.gpu_options.visible_device_list = str(hvd.local_rank())\n", "K.set_session(tf.Session(config=config))\n", "```\n", "\n", "Add the following callbacks:\n", "```python\n", "hvd.callbacks.BroadcastGlobalVariablesCallback(0)\n", "hvd.callbacks.MetricAverageCallback()\n", "```\n", "\n", "Configure the optimizer:\n", "```python\n", "opt = Adam(lr=learning_rate * size, decay=weight_decay)\n", "opt = hvd.DistributedOptimizer(opt)\n", "```\n", "Choose to save checkpoints and send TensorBoard logs only from the ```python hvd.rank() == 0``` instance.\n", "\n", "To start a distributed training job with Horovod, configure the job distribution:\n", "```python\n", "distributions = {'mpi': {\n", " 'enabled': True,\n", " 'processes_per_host': # Number of Horovod processes per host\n", " }\n", " }\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow import TensorFlow\n", "\n", "train_instance_type='ml.p3.8xlarge'\n", "train_instance_count = 2\n", "gpus_per_host = 4" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "distributions = {'mpi': {\n", " 'enabled': True,\n", " 'processes_per_host': gpus_per_host\n", " }\n", " }\n", "\n", "keras_metric_definition = [\n", " {'Name': 'train:loss', 'Regex': '.*loss: ([0-9\\\\.]+) - acc: [0-9\\\\.]+.*'},\n", " {'Name': 'train:accuracy', 'Regex': '.*loss: [0-9\\\\.]+ - acc: ([0-9\\\\.]+).*'},\n", " {'Name': 'validation:accuracy', 'Regex': '.*step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: [0-9\\\\.]+ - val_acc: ([0-9\\\\.]+).*'},\n", " {'Name': 'validation:loss', 'Regex': '.*step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: ([0-9\\\\.]+) - val_acc: [0-9\\\\.]+.*'},\n", " {'Name': 'sec/steps', 'Regex': '.* - \\d+s (\\d+)[mu]s/step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: [0-9\\\\.]+ - val_acc: [0-9\\\\.]+'}\n", "]\n", "\n", "hyperparameters = {'epochs': 20, 'batch-size' : 256}\n", "\n", "input_mode = 'File'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "from shard import do_shard\n", "from sagemaker.session import s3_input\n", "\n", "def shard_data_and_upload(local_data_dir, gpus_per_host, num_of_instances):\n", " do_shard(local_data_dir, gpus_per_host, num_of_instances)\n", " dataset_location = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-cifar10-tf')\n", " display(dataset_location)\n", "\n", " shuffle_config = sagemaker.session.ShuffleConfig(234)\n", " train_s3_uri_prefix = dataset_location\n", "\n", " remote_inputs = {}\n", "\n", " for idx in range(gpus_per_host):\n", " train_s3_uri = f'{train_s3_uri_prefix}/train/{idx}/'\n", " train_s3_input = s3_input(train_s3_uri, shuffle_config=shuffle_config, distribution='ShardedByS3Key')\n", " remote_inputs[f'train_{idx}'] = train_s3_input\n", " \n", " remote_inputs['validation_{}'.format(idx)] = '{}/validation'.format(dataset_location)\n", "\n", " remote_inputs['validation'] = '{}/validation'.format(dataset_location)\n", " remote_inputs['eval'] = '{}/eval'.format(dataset_location)\n", " \n", " return remote_inputs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data split for Horovod and upload to S3\n", "\n", "For Horovod, we need a dedicated input channel for each Horovod worker. In this example, we will use 2 instances with 4 GPUs (**ml.p3.8xlarge**). So we will shard the train data into eight tfrecord files as below so that each training worker train the model using its own tfrecord file." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "remote_inputs = shard_data_and_upload('./data', gpus_per_host, train_instance_count)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "remote_inputs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "source_dir = os.path.join(os.getcwd(), 'source_dir')\n", "estimator_dist = TensorFlow(base_job_name='horovod-cifar10-tf',\n", " entry_point='cifar10_keras_main-tf2.py',\n", " source_dir=source_dir,\n", " role=role,\n", " framework_version='2.1.0',\n", " py_version='py3',\n", " hyperparameters=hyperparameters,\n", " train_instance_count=train_instance_count,\n", " train_instance_type=train_instance_type,\n", " tags = [{'Key' : 'Project', 'Value' : 'cifar10'},{'Key' : 'TensorBoard', 'Value' : 'horovod'}],\n", " metric_definitions=keras_metric_definition,\n", " distributions=distributions,\n", " input_mode=input_mode)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we set ```wait=False``` if you want to see the output logs, change this to ```wait=True```" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "estimator_dist.fit(remote_inputs, wait=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Distributed training with Horovod and Pipe Mode input\n", "Ditributed training with Horovod can also utilize SageMaker Pipe Mode." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow import TensorFlow\n", "\n", "train_instance_type='ml.p3.8xlarge'\n", "train_instance_count = 2\n", "gpus_per_host = 4" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "distributions = {'mpi': {\n", " 'enabled': True,\n", " 'processes_per_host': gpus_per_host\n", " }\n", " }\n", "\n", "keras_metric_definition = [\n", " {'Name': 'train:loss', 'Regex': '.*loss: ([0-9\\\\.]+) - acc: [0-9\\\\.]+.*'},\n", " {'Name': 'train:accuracy', 'Regex': '.*loss: [0-9\\\\.]+ - acc: ([0-9\\\\.]+).*'},\n", " {'Name': 'validation:accuracy', 'Regex': '.*step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: [0-9\\\\.]+ - val_acc: ([0-9\\\\.]+).*'},\n", " {'Name': 'validation:loss', 'Regex': '.*step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: ([0-9\\\\.]+) - val_acc: [0-9\\\\.]+.*'},\n", " {'Name': 'sec/steps', 'Regex': '.* - \\d+s (\\d+)[mu]s/step - loss: [0-9\\\\.]+ - acc: [0-9\\\\.]+ - val_loss: [0-9\\\\.]+ - val_acc: [0-9\\\\.]+'}\n", "]\n", "\n", "hyperparameters = {'epochs': 20, 'batch-size' : 256}\n", "\n", "input_mode = 'Pipe'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "remote_inputs = shard_data_and_upload('./data', gpus_per_host, train_instance_count)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "remote_inputs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "source_dir = os.path.join(os.getcwd(), 'source_dir')\n", "estimator_dist = TensorFlow(base_job_name='horovod-pipe-cifar10-tf',\n", " entry_point='cifar10_keras_main-tf2.py',\n", " source_dir=source_dir,\n", " role=role,\n", " framework_version='2.1.0',\n", " py_version='py3',\n", " hyperparameters=hyperparameters,\n", " train_instance_count=train_instance_count,\n", " train_instance_type=train_instance_type,\n", " tags = [{'Key' : 'Project', 'Value' : 'cifar10'},{'Key' : 'TensorBoard', 'Value' : 'horovod-pipe'}],\n", " metric_definitions=keras_metric_definition,\n", " distributions=distributions,\n", " input_mode=input_mode)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "estimator_dist.fit(remote_inputs, wait=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Local TensorBoard command\n", "Using TensorBoard we can compare the jobs we ran. The following command prints the TensorBoard command.\n", "Run it in any environment where you have TensorBoard installed. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!python generate_tensorboard_command.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can install [TensorBoard](https://github.com/tensorflow/tensorboard) locally using `pip install tensorboard`. \n", "To access an S3 log directory, configure the TensorBoard default region. You can do this by configuring an environment variable named AWS_REGION, and setting the value of the environment variable to the AWS region your training jobs run in. \n", "For example, `export AWS_REGION = 'us-east-1'`\n", "\n", "You can access TensorBoard locally at http://localhost:6006\n", "\n", "Based on the TensorBoard metrics, we can see that:\n", "1. All jobs run for 10 epochs (0 - 9).\n", "2. File mode and Pipe mode runs for ~1 minute - Pipemode doesn't effect training performance.\n", "3. Distributed mode runs for 45 seconds.\n", "4. All of the training jobs resulted in similar validation accuracy.\n", "\n", "This example uses a small dataset (179 MB). For larger datasets, pipemode can significantly reduce training time because it does not copy the entire dataset into local memory." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy the trained model\n", "\n", "The deploy() method creates an endpoint that serves prediction requests in real-time.\n", "The model saves keras artifacts, to use TensorFlow serving for deployment, you'll need to save the artifacts in SavedModel format.\n", "\n", "We are using the solutions from the [deploy trained keras or tensorflow models using amazon sagemaker](https://aws.amazon.com/blogs/machine-learning/deploy-trained-keras-or-tensorflow-models-using-amazon-sagemaker/) blog post." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Make some predictions\n", "To verify the that the endpoint functions properly, we generate random data in the correct shape and get a prediction." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Creating fake prediction data\n", "import numpy as np\n", "data = np.random.randn(1, 32, 32, 3)\n", "print(\"Predicted class is {}\".format(np.argmax(predictor.predict(data)['predictions'])))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Calculating accuracy and create a confusion matrix based on the test dataset\n", "\n", "Our endpoint works as expected, we'll now use the test dataset for predictions and calculate our model accuracy" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from keras.datasets import cifar10\n", "from keras.preprocessing.image import ImageDataGenerator\n", "from sklearn.metrics import confusion_matrix\n", "datagen = ImageDataGenerator()\n", "\n", "(x_train, y_train), (x_test, y_test) = cifar10.load_data()\n", "\n", "def predict(data):\n", " predictions = predictor.predict(data)['predictions']\n", " return predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "batch_size = 128\n", "predicted = []\n", "actual = []\n", "batches = 0\n", "for data in datagen.flow(x_test,y_test,batch_size=batch_size):\n", " for i,prediction in enumerate(predict(data[0])):\n", " predicted.append(np.argmax(prediction))\n", " actual.append(data[1][i][0])\n", " batches += 1\n", " if batches >= len(x_test) / batch_size:\n", " break" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score, confusion_matrix\n", "\n", "accuracy = accuracy_score(y_pred=predicted,y_true=actual)\n", "display('Average accuracy: {}%'.format(round(accuracy*100,2)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import seaborn as sn\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "\n", "cm = confusion_matrix(y_pred=predicted,y_true=actual)\n", "cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n", "sn.set(rc={'figure.figsize':(11.7,8.27)})\n", "sn.set(font_scale=1.4)#for label size\n", "sn.heatmap(cm, annot=True,annot_kws={\"size\": 10})# font size" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using this heatmap we can calculate the accuracy of each one of the labels" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Cleaning up\n", "To avoid incurring charges to your AWS account for the resources used in this tutorial you need to delete the SageMaker Endpoint:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sagemaker_session.delete_endpoint(predictor.endpoint)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_tensorflow_p36", "language": "python", "name": "conda_tensorflow_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" }, "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.", "pycharm": { "stem_cell": { "cell_type": "raw", "metadata": { "collapsed": false }, "source": [] } } }, "nbformat": 4, "nbformat_minor": 4 }