{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Game servers autopilot\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Multiplayer game publishers often need to either over-provision resources or manually manage compute resource allocation when launching a large-scale worldwide game, to avoid the long player-wait in the game lobby. Game publishers need to develop, config, and deploy tools that helped them to monitor and control the compute allocation.\n", "\n", "This notebook demonstrates Game server autopilot, a new machine learning-based example tool that makes it easy for game publishers to reduce the time players wait for compute to spawn, while still avoiding compute over-provisioning. It also eliminates manual configuration decisions and changes publishers need to make and reduces the opportunity for human errors.\n", "\n", "We heard from customers that optimizing compute resource allocation is not trivial. This is because it often takes substantial time to allocate and prepare EC2 instances. The time needed to spin up an EC2 instance and install game binaries and other assets must be learned and accounted for in the allocation algorithm. Ever-changing usage patterns require a model that is adaptive to emerging player habits. Finally, the system also performs scale down in concert with new server allocation as needed.\n", "\n", "We describe a reinforcement learning-based system that learns to allocate resources in response to player usage patterns. The hosted model directly predicts the required number of game-servers so as to allow EKS the time to allocate instances to reduce player wait time. The training process integrates with the game eco-system, and requires minimal manual configuration." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Pre-requisites \n", "\n", "### Imports\n", "\n", "To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import sagemaker\n", "import boto3\n", "import sys\n", "import os\n", "import glob\n", "import re\n", "import subprocess\n", "import numpy as np\n", "from IPython.display import HTML\n", "import time\n", "from time import gmtime, strftime\n", "\n", "sys.path.append(\"common\")\n", "from misc import get_execution_role, wait_for_s3_object\n", "from docker_utils import build_and_push_docker_image\n", "from sagemaker.rl import RLEstimator, RLToolkit, RLFramework" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup S3 bucket\n", "\n", "Set up the linkage and authentication to the S3 bucket that you want to use for checkpoint and the metadata. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sage_session = sagemaker.session.Session()\n", "s3_bucket = sage_session.default_bucket()\n", "s3_output_path = \"s3://{}/\".format(s3_bucket)\n", "print(\"S3 bucket path: {}\".format(s3_output_path))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define Variables \n", "\n", "We define variables such as the job prefix, instance type and frameworks for the training jobs, to fetch the latest docker container to train your RL agent. You can also provide an image path for a custom container (only when this is BYOC).\n", "\n", "Set `framework` to `'tf'` as this notebook *only* supports TensorFlow." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create a descriptive job name\n", "job_name_prefix = \"rl-game-server-autopilot\"\n", "\n", "framework = \"tf\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Pick the instance type\n", "# instance_type = \"ml.c5.xlarge\" # 4 cpus\n", "# instance_type = \"ml.c5.9xlarge\" # 36 cpus\n", "instance_type = \"ml.c5.2xlarge\" # 8 cpus\n", "\n", "num_cpus_per_instance = 8" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Parameters\n", "\n", "Adding new parameters for the job require update in the training section that invokes the RLEstimator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "job_duration_in_seconds = 60 * 60 * 8\n", "train_instance_count = 1\n", "cloudwatch_namespace = \"rl-game-server-autopilot\"\n", "min_servers = 10\n", "max_servers = 100\n", "# over provisionning factor. use 5 for optimal.\n", "over_prov_factor = 5\n", "# gamma is the discount factor\n", "gamma = 0.9\n", "# if local inference is set gs_inventory_url=local and populate learning_freq\n", "gs_inventory_url = \"https://4bfiebw6ui.execute-api.us-west-2.amazonaws.com/api/currsine1h/\"\n", "# gs_inventory_url = 'local'\n", "# sleep time in seconds between step() executions\n", "learning_freq = 65\n", "# actions are normelized between 0 and 1, action factor the number of game servers needed e.g. 100 will be 100*action and clipped to the min and max servers parameters above\n", "action_factor = 100" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create an IAM role\n", "\n", "Either get the execution role when running from a SageMaker notebook instance `role = sagemaker.get_execution_role()` or, when running from local notebook instance, use utils method `role = get_execution_role()` to create an execution role. In this example, the env thru the training job, publishes cloudwatch custom metrics as well as put values in DynamoDB table. Therefore, an appropriate role is required to be set to the role arn below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " role = sagemaker.get_execution_role()\n", "except:\n", " role = get_execution_role()\n", "\n", "print(\"Using IAM role arn: {}\".format(role))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up the environment\n", "\n", "The environment is defined in a Python file called gameserver_env.py and the file is uploaded on /src directory.\n", "The environment also implements the init(), step() and reset() functions that describe how the environment behaves. This is consistent with Open AI Gym interfaces for defining an environment. 
, "\n", "The environment also implements helper functions for publishing custom CloudWatch metrics (populate_cloudwatch_metric()) and a simple sine-wave demand simulator (get_curr_sine1h()).\n", "\n", "1. init() - initialize the environment in a pre-defined state\n", "2. step() - take an action on the environment\n", "3. reset() - restart the environment on a new episode\n", "4. get_curr_sine1h() - return the sine value based on the current second\n", "5. populate_cloudwatch_metric(namespace, metric_value, metric_name) - publish metric_value under metric_name in the given CloudWatch namespace\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize src/gameserver_env.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configure the presets for the RL algorithm\n", "\n", "The presets that configure the RL training jobs are defined in the train_gameserver_ppo.py file, which is also located in the /src directory. Using the preset file, you can define agent parameters to select the specific agent algorithm. You can also set the environment parameters, define the schedule and visualization parameters, and define the graph manager. The schedule presets define the number of heat-up steps, periodic evaluation steps, and training steps between evaluations.\nThe preset file can also be used to define custom hyperparameters." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize src/train_gameserver_ppo.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train the RL model using the Python SDK Script mode\n", "\n", "The RLEstimator is used for training RL jobs. \n", "\n", "1. The entry_point value indicates the script that invokes the GameServer RL environment.\n", "2. source_dir indicates the location of the environment code, which currently includes train_gameserver_ppo.py and gameserver_env.py. \n", "3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL container. \n", "4. Define the training parameters such as the instance count, base job name, and S3 path for output. \n", "5. Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET or the RLRAY_PRESET can be used to specify the RL agent algorithm you want to use. \n", "6. Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks, as illustrated below.\n"
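, "\n", "Each metric definition pairs a name with a regular expression that SageMaker applies to the training log stream to extract the value. As a quick, self-contained illustration of how one of these patterns picks a number out of a log line (the log line here is made up for the example):\n", "\n", "```python\n", "import re\n", "\n", "# made-up log line in the format Ray/RLlib prints during training\n", "sample_log_line = \"episode_reward_mean: 42.7\"\n", "pattern = r\"episode_reward_mean: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)\"\n", "\n", "match = re.search(pattern, sample_log_line)\n", "print(match.group(1))  # prints 42.7\n", "```\n"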
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "metric_definitions = [\n", " {\n", " \"Name\": \"episode_reward_mean\",\n", " \"Regex\": \"episode_reward_mean: ([-+]?[0-9]*\\\\.?[0-9]+([eE][-+]?[0-9]+)?)\",\n", " },\n", " {\n", " \"Name\": \"episode_reward_max\",\n", " \"Regex\": \"episode_reward_max: ([-+]?[0-9]*\\\\.?[0-9]+([eE][-+]?[0-9]+)?)\",\n", " },\n", " {\n", " \"Name\": \"episode_len_mean\",\n", " \"Regex\": \"episode_len_mean: ([-+]?[0-9]*\\\\.?[0-9]+([eE][-+]?[0-9]+)?)\",\n", " },\n", " {\"Name\": \"entropy\", \"Regex\": \"entropy: ([-+]?[0-9]*\\\\.?[0-9]+([eE][-+]?[0-9]+)?)\"},\n", " {\n", " \"Name\": \"episode_reward_min\",\n", " \"Regex\": \"episode_reward_min: ([-+]?[0-9]*\\\\.?[0-9]+([eE][-+]?[0-9]+)?)\",\n", " },\n", " {\"Name\": \"vf_loss\", \"Regex\": \"vf_loss: ([-+]?[0-9]*\\\\.?[0-9]+([eE][-+]?[0-9]+)?)\"},\n", " {\"Name\": \"policy_loss\", \"Regex\": \"policy_loss: ([-+]?[0-9]*\\\\.?[0-9]+([eE][-+]?[0-9]+)?)\"},\n", "]\n", "\n", "metric_definitions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%time\n", "\n", "estimator = RLEstimator(\n", " entry_point=\"train_gameserver_ppo.py\",\n", " source_dir=\"src\",\n", " dependencies=[\"common/sagemaker_rl\"],\n", " toolkit=RLToolkit.RAY,\n", " toolkit_version=\"1.6.0\",\n", " framework=RLFramework.TENSORFLOW,\n", " role=role,\n", " instance_type=instance_type,\n", " instance_count=train_instance_count,\n", " output_path=s3_output_path,\n", " base_job_name=job_name_prefix,\n", " metric_definitions=metric_definitions,\n", " max_run=job_duration_in_seconds,\n", " hyperparameters={\n", " \"cloudwatch_namespace\": cloudwatch_namespace,\n", " \"gs_inventory_url\": gs_inventory_url,\n", " \"learning_freq\": learning_freq,\n", " \"time_total_s\": job_duration_in_seconds,\n", " \"min_servers\": min_servers,\n", " \"max_servers\": max_servers,\n", " \"gamma\": gamma,\n", " \"action_factor\": action_factor,\n", " \"over_prov_factor\": over_prov_factor,\n", " \"save_model\": 1,\n", " },\n", ")\n", "\n", "estimator.fit(wait=False)\n", "job_name = estimator.latest_training_job.job_name\n", "print(\"Training job: %s\" % job_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Store intermediate training output and model checkpoints\n", "\n", "The output from the training job above is stored in a S3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "job_name = estimator._current_job_name\n", "print(\"Job name: {}\".format(job_name))\n", "\n", "s3_url = \"s3://{}/{}\".format(s3_bucket, job_name)\n", "\n", "output_tar_key = \"{}/output/model.tar.gz\".format(job_name)\n", "\n", "intermediate_folder_key = \"{}/output/intermediate/\".format(job_name)\n", "output_url = \"s3://{}/{}\".format(s3_bucket, output_tar_key)\n", "intermediate_url = \"s3://{}/{}\".format(s3_bucket, intermediate_folder_key)\n", "\n", "print(\"S3 job path: {}\".format(s3_url))\n", "print(\"Output.tar.gz location: {}\".format(output_url))\n", "print(\"Intermediate folder path: {}\".format(intermediate_url))\n", "\n", "tmp_dir = \"/tmp/{}\".format(job_name)\n", "os.system(\"mkdir {}\".format(tmp_dir))\n", "print(\"Create local folder {}\".format(tmp_dir))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluation of RL models\n", "We use the latest checkpointed model to run evaluation for the RL Agent." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load checkpointed model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Checkpointed data from the previously trained model is passed on for evaluation / inference via the checkpoint channel. \n", "Since TensorFlow checkpoint files contain absolute paths from when they were generated (see the related TensorFlow issue), we need to replace the absolute paths with relative paths. This is implemented within evaluate_gameserver_ppo.py." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "local_mode = False\n", "if local_mode:\n", "    model_tar_key = \"{}/model.tar.gz\".format(job_name)\n", "else:\n", "    model_tar_key = \"{}/output/model.tar.gz\".format(job_name)\n", "\n", "local_checkpoint_dir = \"{}/model\".format(tmp_dir)\n", "\n", "wait_for_s3_object(s3_bucket, model_tar_key, tmp_dir, training_job_name=job_name)\n", "\n", "if not os.path.isfile(\"{}/model.tar.gz\".format(tmp_dir)):\n", "    raise FileNotFoundError(\"File model.tar.gz not found\")\n", "\n", "os.system(\"mkdir -p {}\".format(local_checkpoint_dir))\n", "os.system(\"tar -xvzf {}/model.tar.gz -C {}\".format(tmp_dir, local_checkpoint_dir))\n", "\n", "print(\"Checkpoint directory {}\".format(local_checkpoint_dir))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if local_mode:\n", "    checkpoint_path = \"file://{}\".format(local_checkpoint_dir)\n", "    print(\"Local checkpoint file path: {}\".format(local_checkpoint_dir))\n", "else:\n", "    checkpoint_path = \"s3://{}/{}/checkpoint/\".format(s3_bucket, job_name)\n", "    if not os.listdir(local_checkpoint_dir):\n", "        raise FileNotFoundError(\"Checkpoint files not found under the path\")\n", "    os.system(\"aws s3 cp --recursive {} {}\".format(local_checkpoint_dir, checkpoint_path))\n", "    print(\"S3 checkpoint file path: {}\".format(checkpoint_path))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run the evaluation step\n", "Use the checkpointed model to run the evaluation step." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%time\n", "\n", "estimator_eval = RLEstimator(\n", " entry_point=\"evaluate_gameserver_ppo.py\",\n", " source_dir=\"src\",\n", " dependencies=[\"common/sagemaker_rl\"],\n", " role=role,\n", " toolkit=RLToolkit.RAY,\n", " toolkit_version=\"1.6.0\",\n", " framework=RLFramework.TENSORFLOW,\n", " instance_type=instance_type,\n", " instance_count=1,\n", " base_job_name=job_name_prefix + \"-evaluation\",\n", " hyperparameters={\n", " \"evaluate_episodes\": 1,\n", " \"algorithm\": \"PPO\",\n", " \"env\": \"GameServers-v0\",\n", " },\n", ")\n", "estimator_eval.fit({\"model\": checkpoint_path})\n", "job_name = estimator_eval.latest_training_job.job_name\n", "print(\"Evaluation job: %s\" % job_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hosting\n", "Once the training is done, we can deploy the trained model as an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model. Note that we don't have to host on the same instance (or type of instance) that we used to train. The endpoint deployment can be accomplished as follows:\n", "\n", "### Model deployment\n", "\n", "Now let us deploy the RL policy so that we can get the optimal action, given an environment observation."
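, "\n", "The next cell reuses the training `instance_type` for hosting to keep things simple. Since inference usually needs far less compute than training, you could instead pass a smaller hosting instance to `deploy()`; the value below is just a hypothetical example:\n", "\n", "```python\n", "inference_instance_type = \"ml.m5.large\"  # hypothetical, smaller hosting instance\n", "# predictor = model.deploy(initial_instance_count=1, instance_type=inference_instance_type)\n", "```\n"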
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow.model import TensorFlowModel\n", "\n", "model_data = estimator.model_data\n", "model = TensorFlowModel(model_data=model_data, framework_version=\"2.5.1\", role=role)\n", "\n", "predictor = model.deploy(initial_instance_count=1, instance_type=instance_type)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inference\n", "Now that the trained model is deployed at an endpoint that is up-and-running, we can use this endpoint for inference. The format of the input should match that of the observation_space in the defined environment. In this example, the observation space is a 10-dimensional vector formulated from previous and current observations. For the sake of space, this demo doesn't include the non-trivial construction process. Instead, we provide a dummy input below. For more details, please check src/gameserver_env.py." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "example = [np.arange(10).tolist()]\n", "example" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Ray 1.6.0 requires all of the following inputs; for Ray 0.8.5 or below, remove 'timestep'\n", "# 'prev_action', 'is_training', 'prev_reward', 'seq_lens' and 'timestep' are placeholders for this example\n", "# they won't affect prediction results\n", "\n", "input = {\n", " \"inputs\": {\n", " \"observations\": example,\n", " \"prev_action\": 0.5,\n", " \"is_training\": False,\n", " \"prev_reward\": -1,\n", " \"seq_lens\": -1,\n", " \"timestep\": 1,\n", " }\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "result = predictor.predict(input)\n", "\n", "result[\"outputs\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Delete the Endpoint\n", "\n", "Having an endpoint running will incur some costs. Therefore, as a clean-up step, we should delete the endpoint." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2, which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/reinforcement_learning|rl_game_server_autopilot|sagemaker|rl_gamerserver_ray.ipynb)\n" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "conda_tensorflow_p36", "language": "python", "name": "conda_tensorflow_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" }, "notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." }, "nbformat": 4, "nbformat_minor": 4 }