{ "cells": [ { "cell_type": "markdown", "id": "16c61f54", "metadata": {}, "source": [ "# Generate fun images of your dog\n", "\n", "Note: This notebook is originally from [SageMaker JumpStart Notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_to_image/Amazon_JumpStart_Text_To_Image.ipynb)\n", "\n", "Note: This notebook requires AWS account and charge for AWS resources. Running this notebook take approximately \\\\$1.0. (\\\\$0.4 for training, $0.94/hour for inference) for Oregon Region. For more information about pricing of SageMaker, visit [pricing page](https://aws.amazon.com/sagemaker/pricing/)." ] }, { "cell_type": "markdown", "id": "bdc23bae", "metadata": {}, "source": [ "---\n", "In this demo notebook, we demonstrate how to use the JumpStart APIs to fine-tune the stable diffusion model to your dog images dataset and deploy the fine-tuned model. To execute this notebook, you will need a collection of dog images. You may upload as little as five images to your local folder and run the notebook.\n", "\n", "Note: This notebook contains a very simplified version of the features available for Stable Diffusion in JumpStart. Please refer to the [Introduction to JumpStart - Text to Image](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_to_image/Amazon_JumpStart_Text_To_Image.ipynb) notebook for a more comprehensive list of Stable Diffusion features available in JumpStart.\n", "\n", "Note: To run this notebook, you would need `ml.g4dn.2xlarge` instance type for training and for inference.\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "5db28351", "metadata": {}, "source": [ "1. [Set Up](#1.-Set-Up)\n", "\n", "3. 
[Fine-tune the pre-trained model on a custom dataset](#2.-Fine-tune-the-pre-trained-model-on-a-custom-dataset)\n", " * [Retrieve Training Artifacts](#2.1.-Retrieve-Training-Artifacts)\n", " * [Set Training parameters](#2.2.-Set-Training-parameters)\n", " * [Start Training](#2.3.-Start-Training)\n", " * [Deploy and run inference on the fine-tuned model](#2.4.-Deploy-and-run-inference-on-the-fine-tuned-model)\n" ] }, { "cell_type": "markdown", "id": "d0cc15bb-ad8f-4d59-9c9f-b62a7f75eb3a", "metadata": {}, "source": [ "## 1. Set Up\n", "---\n", "Before executing the notebook, a few initial setup steps are required.\n", "\n", "1. You need to run this notebook with a custom conda environment.\n", " 1. Right-click `environment.yaml` and select `Build Conda Environment`.\n", " 2. After building the environment, select it from the dropdown at the top-right corner of this page.\n", "2. You need to set up AWS credentials with an existing AWS account.\n", " 1. (Prerequisite) You need an AWS IAM user with permissions to access SageMaker and S3.\n", " 2. Open a terminal and run `aws configure` (we recommend using a region that supports `ml.g5.2xlarge`, for example `us-west-2`).\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "8c087840-df92-4913-8d16-63d48b81dcd6", "metadata": {}, "source": [ "Here, we use the execution role for SageMaker.\n", "\n", "1. If you already use SageMaker in your own AWS account, copy and paste the `RoleName` of your execution role below.\n", "2. 
If you are new to this, follow the steps [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) to create one.\n", "\n", "Please note that you must have already created this SageMaker IAM execution role in order to complete this step.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "0c4085f2-5b3a-4638-a3f9-25f97473c11e", "metadata": { "tags": [] }, "outputs": [], "source": [ "import sagemaker, boto3, json\n", "from sagemaker import get_execution_role\n", "import os\n", "\n", "try:\n", " aws_role = sagemaker.get_execution_role()\n", "except Exception:\n", " iam = boto3.client(\"iam\")\n", " # TODO: replace with your role name (e.g. \"AmazonSageMaker-ExecutionRole-20211014T154824\")\n", " aws_role = iam.get_role(RoleName=\"\")[\"Role\"][\"Arn\"]\n", "\n", "boto_session = boto3.Session()\n", "aws_region = boto_session.region_name\n", "sess = sagemaker.Session(boto_session=boto_session)\n", "\n", "print(aws_role)\n", "print(aws_region)\n", "print(sess.boto_region_name)\n", "\n", "# If uploading to a different folder, change this variable.\n", "local_training_dataset_folder = \"training_images\"\n", "if not os.path.exists(local_training_dataset_folder):\n", " os.mkdir(local_training_dataset_folder)" ] }, { "cell_type": "markdown", "id": "e1217900-12ed-473d-bff7-31ed90c905e4", "metadata": {}, "source": [ "---\n", "\n", "### Please upload images of your dog to the local `training_images` folder and set `use_local_images = True`.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "7a25fffc-644e-4880-a209-c08862cf1860", "metadata": { "tags": [] }, "outputs": [], "source": [ "use_local_images = False # If False, the notebook uses the example dataset provided by JumpStart\n", "\n", "\n", "if not use_local_images:\n", " # Download example dog images from the JumpStart S3 bucket\n", "\n", " s3_resource = boto3.resource(\"s3\")\n", " bucket = s3_resource.Bucket(f\"jumpstart-cache-prod-{aws_region}\")\n", " for obj 
in bucket.objects.filter(Prefix=\"training-datasets/dogs_sd_finetuning/\"):\n", " bucket.download_file(\n", " obj.key, os.path.join(local_training_dataset_folder, obj.key.split(\"/\")[-1])\n", " ) # save under the original file name" ] }, { "cell_type": "code", "execution_count": null, "id": "0e5d1c0b-7a1a-4549-9827-8cb1d993bf26", "metadata": { "pycharm": { "is_executing": true }, "tags": [] }, "outputs": [], "source": [ "# The instance prompt is the textual description of the images in the training dataset. Try to be as detailed and as accurate as possible.\n", "# In addition to the textual description, we also need a tag (Doppler in the example below).\n", "\n", "instance_prompt = \"A photo of a Doppler dog\"" ] }, { "cell_type": "code", "execution_count": null, "id": "a3c60aa0-6aaa-4358-a311-0111fe60fdf6", "metadata": { "tags": [] }, "outputs": [], "source": [ "# The instance prompt is fed into the training script via dataset_info.json in the training folder. Here, we write that file.\n", "import os\n", "import json\n", "\n", "with open(os.path.join(local_training_dataset_folder, \"dataset_info.json\"), \"w\") as f:\n", " f.write(json.dumps({\"instance_prompt\": instance_prompt}))" ] }, { "cell_type": "markdown", "id": "65f919a6-f47a-4607-9583-5f79311d225f", "metadata": { "tags": [] }, "source": [ "### Upload dataset to S3\n", "\n", "---\n", "Next, we upload the dataset to an S3 bucket. If the bucket does not exist, we create a new one. 
\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "f4a41265-8859-4e1f-a8c1-84279566172c", "metadata": { "tags": [] }, "outputs": [], "source": [ "mySession = boto3.session.Session()\n", "AwsRegion = mySession.region_name\n", "account_id = boto3.client(\"sts\").get_caller_identity().get(\"Account\")\n", "\n", "training_bucket = f\"stable-diffusion-jumpstart-{AwsRegion}-{account_id}\"" ] }, { "cell_type": "markdown", "id": "05e873b4-d090-4ccd-b1eb-ef5853396a35", "metadata": { "tags": [] }, "source": [ "---\n", "\n", "If you have an existing bucket you would like to use, please replace the `training_bucket` with your bucket in the cell above and avoid executing the following cell.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "a49c2e17-f079-471c-b1d0-14602bd5d23e", "metadata": { "tags": [] }, "outputs": [], "source": [ "assets_bucket = f\"jumpstart-cache-prod-{AwsRegion}\"\n", "\n", "\n", "s3 = boto3.client(\"s3\")\n", "s3.download_file(\n", " f\"jumpstart-cache-prod-{AwsRegion}\",\n", " \"ai_services_assets/custom_labels/cl_jumpstart_ic_notebook_utils.py\",\n", " \"utils.py\",\n", ")\n", "\n", "\n", "from utils import create_bucket_if_not_exists\n", "\n", "create_bucket_if_not_exists(training_bucket)" ] }, { "cell_type": "markdown", "id": "3b1b9596-0934-4b95-877e-8bba91c133af", "metadata": { "tags": [] }, "source": [ "---\n", "\n", "Next we upload the training datasets (images and `dataset_info.json`) to the S3 bucket.\n", "\n", "---\n" ] }, { "cell_type": "code", "execution_count": null, "id": "4ebee568-d904-42ca-a2cf-176ba00eafda", "metadata": { "tags": [] }, "outputs": [], "source": [ "train_s3_path = f\"s3://{training_bucket}/custom_dog_stable_diffusion_dataset/\"\n", "\n", "!aws s3 cp --recursive $local_training_dataset_folder $train_s3_path" ] }, { "cell_type": "markdown", "id": "2c8edfc4", "metadata": {}, "source": [ "## 2. 
Fine-tune the pre-trained model on a custom dataset\n", "\n" ] }, { "cell_type": "markdown", "id": "b8bfaa4d", "metadata": {}, "source": [ "### 2.1. Retrieve Training Artifacts\n", "\n", "---\n", "Here, we retrieve the training docker container, the training algorithm source, and the pre-trained base model. Note that model_version=\"*\" fetches the latest model.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "f11ff722", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker import image_uris, model_uris, script_uris\n", "\n", "train_model_id, train_model_version, train_scope = (\n", " \"model-txt2img-stabilityai-stable-diffusion-v2-1-base\",\n", " \"*\",\n", " \"training\",\n", ")\n", "\n", "# Tested with ml.g4dn.2xlarge (16GB GPU memory) and ml.g5.2xlarge (24GB GPU memory) instances. Other instances may work as well.\n", "# If ml.g5.2xlarge instance type is available, please change the following instance type to speed up training.\n", "training_instance_type = \"ml.g4dn.2xlarge\"\n", "\n", "# Retrieve the docker image\n", "train_image_uri = image_uris.retrieve(\n", " region=None,\n", " framework=None, # automatically inferred from model_id\n", " model_id=train_model_id,\n", " model_version=train_model_version,\n", " image_scope=train_scope,\n", " instance_type=training_instance_type,\n", ")\n", "\n", "# Retrieve the training script. This contains all the necessary files including data processing, model training etc.\n", "train_source_uri = script_uris.retrieve(\n", " model_id=train_model_id, model_version=train_model_version, script_scope=train_scope\n", ")\n", "# Retrieve the pre-trained model tarball to further fine-tune\n", "train_model_uri = model_uris.retrieve(\n", " model_id=train_model_id, model_version=train_model_version, model_scope=train_scope\n", ")" ] }, { "cell_type": "markdown", "id": "6e266289", "metadata": {}, "source": [ "### 2.2. 
Set Training parameters\n", "\n", "---\n", "There are two kinds of parameters that need to be set for training. The first set are the parameters for the training job itself: (i) the training data path, the S3 folder in which the input data is stored; (ii) the output path, the S3 folder in which the training output is stored; and (iii) the training instance type, the type of machine on which to run the training. We defined the training instance type above to fetch the correct `train_image_uri`.\n", "\n", "The second set of parameters are algorithm-specific training hyperparameters.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "23a55b29-99af-49b6-8d32-62226aef0a65", "metadata": { "tags": [] }, "outputs": [], "source": [ "output_bucket = sess.default_bucket()\n", "output_prefix = \"jumpstart-example-sd-training\"\n", "\n", "s3_output_location = f\"s3://{output_bucket}/{output_prefix}/output\"" ] }, { "cell_type": "markdown", "id": "b68553b9-908e-4485-9584-19d97d03cfcd", "metadata": { "tags": [] }, "source": [ "---\n", "For the algorithm-specific hyperparameters, we start by fetching a Python dictionary of the training hyperparameters that the algorithm accepts, along with their default values. These can then be overridden with custom values.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "aa371787", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker import hyperparameters\n", "\n", "# Retrieve the default hyperparameters for fine-tuning the model\n", "hyperparameters = hyperparameters.retrieve_default(\n", " model_id=train_model_id, model_version=train_model_version\n", ")\n", "\n", "# [Optional] Override default hyperparameters with custom values. 
This controls the duration of the training and the quality of the output.\n", "# If max_steps is too small, training will be fast but the model will not be able to generate custom images for your use case.\n", "# If max_steps is too large, training will be very slow.\n", "hyperparameters[\"max_steps\"] = \"200\"\n", "print(hyperparameters)" ] }, { "cell_type": "markdown", "id": "7cda2854", "metadata": {}, "source": [ "### 2.3. Start Training\n", "---\n", "We start by creating the estimator object with all the required assets and then launch the training job. Training takes less than 10 minutes on the default dataset.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "76bdbb83", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "from sagemaker.estimator import Estimator\n", "from sagemaker.utils import name_from_base\n", "\n", "training_job_name = name_from_base(f\"jumpstart-example-{train_model_id}-transfer-learning\")\n", "\n", "# Create SageMaker Estimator instance\n", "sd_estimator = Estimator(\n", " role=aws_role,\n", " image_uri=train_image_uri,\n", " source_dir=train_source_uri,\n", " model_uri=train_model_uri,\n", " entry_point=\"transfer_learning.py\", # Entry-point file in source_dir and present in train_source_uri.\n", " instance_count=1,\n", " instance_type=training_instance_type,\n", " max_run=360000,\n", " hyperparameters=hyperparameters,\n", " output_path=s3_output_location,\n", " base_job_name=training_job_name,\n", ")\n", "\n", "# Launch a SageMaker Training job by passing the S3 path of the training data\n", "sd_estimator.fit({\"training\": train_s3_path}, logs=True)" ] }, { "cell_type": "markdown", "id": "6fadc21e", "metadata": {}, "source": [ "### 2.4. Deploy and run inference on the fine-tuned model\n", "\n", "---\n", "\n", "A trained model does nothing on its own. We now want to use the model to perform inference. 
For this example, that means generating an image from a text prompt. We start by retrieving the JumpStart artifacts for deploying an endpoint. However, instead of the pre-trained base model, we deploy the `sd_estimator` that we have fine-tuned.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "e5f00ffe-0bde-413c-86ea-c516453a0ef3", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "inference_instance_type = \"ml.g4dn.2xlarge\"\n", "\n", "# Retrieve the inference docker container uri\n", "deploy_image_uri = image_uris.retrieve(\n", " region=None,\n", " framework=None, # automatically inferred from model_id\n", " image_scope=\"inference\",\n", " model_id=train_model_id,\n", " model_version=train_model_version,\n", " instance_type=inference_instance_type,\n", ")\n", "# Retrieve the inference script uri. This includes scripts for model loading, inference handling etc.\n", "deploy_source_uri = script_uris.retrieve(\n", " model_id=train_model_id, model_version=train_model_version, script_scope=\"inference\"\n", ")\n", "\n", "endpoint_name = name_from_base(f\"jumpstart-example-FT-{train_model_id}-\")\n", "\n", "# Use the estimator from the previous step to deploy to a SageMaker endpoint\n", "finetuned_predictor = sd_estimator.deploy(\n", " initial_instance_count=1,\n", " instance_type=inference_instance_type,\n", " entry_point=\"inference.py\", # entry point file in source_dir and present in deploy_source_uri\n", " image_uri=deploy_image_uri,\n", " source_dir=deploy_source_uri,\n", " endpoint_name=endpoint_name,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "4991af71-0b93-4a3e-b7e1-7b8ff393f258", "metadata": { "tags": [] }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "\n", "def query(model_predictor, text):\n", " \"\"\"Query the model predictor.\"\"\"\n", "\n", " encoded_text = json.dumps(text).encode(\"utf-8\")\n", "\n", " query_response = 
model_predictor.predict(\n", " encoded_text,\n", " {\n", " \"ContentType\": \"application/x-text\",\n", " \"Accept\": \"application/json\",\n", " },\n", " )\n", " return query_response\n", "\n", "\n", "def parse_response(query_response):\n", " \"\"\"Parse the response and return the generated image and the prompt.\"\"\"\n", "\n", " response_dict = json.loads(query_response)\n", " return response_dict[\"generated_image\"], response_dict[\"prompt\"]\n", "\n", "\n", "def display_img_and_prompt(img, prmpt):\n", " \"\"\"Display the generated image with its prompt.\"\"\"\n", " plt.figure(figsize=(12, 12))\n", " plt.imshow(np.array(img))\n", " plt.axis(\"off\")\n", " plt.title(prmpt)\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "76b76663", "metadata": {}, "source": [ "---\n", "Next, we query the fine-tuned model, parse the response, and display the generated image. Please execute the following cells.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "9dae5252-9261-4efb-bfba-f64d82650ede", "metadata": { "pycharm": { "is_executing": true }, "tags": [] }, "outputs": [], "source": [ "all_prompts = [\n", " \"A photo of a Doppler dog on a beach\",\n", " \"A pencil sketch of a Doppler dog\",\n", " \"A photo of a Doppler dog with a hat\",\n", "]\n", "for prompt in all_prompts:\n", " query_response = query(finetuned_predictor, prompt)\n", " img, _ = parse_response(query_response)\n", " display_img_and_prompt(img, prompt)" ] }, { "cell_type": "markdown", "id": "f3381a2c", "metadata": {}, "source": [ "---\n", "Next, we delete the endpoint corresponding to the fine-tuned model.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "9b724ea8-1500-4feb-bd92-1043a0e92700", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Delete the SageMaker endpoint\n", "finetuned_predictor.delete_model()\n", "finetuned_predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "id": "5e9e4f37-0a79-49ec-bb2d-3f7375074c31", "metadata": { "tags": [] }, "source": [ 
"## Conclusion\n", "---\n", "In this notebook, we saw a simple workflow on how you can fine-tune the stable diffusion text-to-image model on your dataset with a small set of images. You can adapt the notebook your dataset by uploading images of the desired subject and changing the prompts. For instance, if you would like to generate images of your cat, please upload cat images in the first step and change dog to cat in the `instance_prompt` before training and while inocking endpoint with fine-tuned model.\n", "\n", "This notebook contains a barebone code to train and deploy the stable diffusion model. Please refer to the [Introduction to JumpStart - Text to Image](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart_text_to_image/Amazon_JumpStart_Text_To_Image.ipynb) for additional features such as (i) How to deploy a pre-trained Stable Diffusion model (more than 80 available in JumpStart), (ii) How to set parameters such as num_steps, guidance scale during inference, (iii) Prompt Engineering, (iv) How to set training related parameters.\n", "\n", "----" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 
16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 
}, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 21, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 28, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 29, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { 
"_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 
64 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "sagemaker:Python", "language": "python", "name": "conda-env-sagemaker-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.15" }, "pycharm": { "stem_cell": { "cell_type": "raw", "metadata": { "collapsed": false }, "source": [] } } }, "nbformat": 4, "nbformat_minor": 5 }