{ "cells": [ { "cell_type": "markdown", "id": "16c61f54", "metadata": {}, "source": [ "# Finetune a stable diffusion model for text to image generation" ] }, { "cell_type": "markdown", "id": "bdc23bae", "metadata": {}, "source": [ "***\n", "Welcome to Amazon [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)! You can use JumpStart to solve many Machine Learning tasks through one-click in SageMaker Studio, or through [SageMaker JumpStart API](https://sagemaker.readthedocs.io/en/stable/overview.html#use-prebuilt-models-with-sagemaker-jumpstart). In this demo notebook, we demonstrate how to use the JumpStart API to generate images from text using state-of-the-art Stable Diffusion models. Furthermore, we show how to fine-tune the model to your dataset.\n", "\n", "Stable Diffusion is a text-to-image model that enables you to create photorealistic images from just a text prompt. A diffusion model trains by learning to remove noise that was added to a real image. This de-noising process generates a realistic image. These models can also generate images from text alone by conditioning the generation process on the text. For instance, Stable Diffusion is a latent diffusion where the model learns to recognize shapes in a pure noise image and gradually brings these shapes into focus if the shapes match the words in the input text.\n", "\n", "Training and deploying large models and running inference on models such as Stable Diffusion is often challenging and include issues such as cuda out of memory, payload size limit exceeded and so on. JumpStart simplifies this process by providing ready-to-use scripts that have been robustly tested. Furthermore, it provides guidance on each step of the process including the recommended instance types, how to select parameters to guide image generation process, prompt engineering etc. Moreover, you can deploy and run inference on any of the 80+ Diffusion models from JumpStart without having to write any piece of your own code.\n", "\n", "\n", "In his notebook, you will learn how to use JumpStart to fine-tune the Stable Diffusion model to your dataset. This can be useful when creating art, logos, custom designs, NFTs, and so on, or fun stuff such as generating custom AI images of your pets or avatars of yourself.\n", "\n", "\n", "Model lincese: By using this model, you agree to the [CreativeML Open RAIL-M++ license](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL).\n", "\n", "***" ] }, { "cell_type": "markdown", "id": "5db28351", "metadata": {}, "source": [ "1. [Set Up](#1.-Set-Up)\n", "2. [Fine-tune the pre-trained model on a custom dataset](#2.-Fine-tune-the-pre-trained-model-on-a-custom-dataset)\n", " * [Retrieve Training Artifacts](#2.1.-Retrieve-Training-Artifacts)\n", " * [Set Training parameters](#2.2.-Set-Training-parameters)\n", " * [Start Training](#2.3.-Start-Training)\n", " * [Deploy and run inference on the fine-tuned model](#3.2.-Deploy-and-run-inference-on-the-fine-tuned-model)\n", "3. [Conclusion](#3.-Conclusion)\n" ] }, { "cell_type": "markdown", "id": "ce462973", "metadata": {}, "source": [ "Note: This notebook was tested on ml.t3.medium instance in Amazon SageMaker Studio with Python 3 (Data Science) kernel and in Amazon SageMaker Notebook instance with conda_python3 kernel.\n", "\n", "Note: For fine-tuning the model on your dataset, you need `ml.g4dn.2xlarge` instance type available in your account. 
{ "cell_type": "markdown", "id": "48370155", "metadata": {}, "source": [ "#### Permissions and environment variables\n", "\n", "***\n", "To host on Amazon SageMaker, we need to set up and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook as the AWS account role with SageMaker access.\n", "\n", "***" ] },
{ "cell_type": "code", "execution_count": null, "id": "90518e45", "metadata": { "tags": [] }, "outputs": [], "source": [ "import json\n", "\n", "import boto3\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "aws_role = get_execution_role()\n", "aws_region = boto3.Session().region_name\n", "sess = sagemaker.Session()" ] },
{ "cell_type": "markdown", "id": "2c8edfc4", "metadata": {}, "source": [ "## 2. Fine-tune the pre-trained model on a custom dataset\n", "\n", "---\n", "In this section, we discuss how the model can be fine-tuned on a custom dataset with any number of classes.\n", "\n", "The model can be fine-tuned on any dataset of images. It works very well even with as few as five training images.\n", "\n", "The fine-tuning script is built on the script from [dreambooth](https://dreambooth.github.io/). The model returned by fine-tuning can be further deployed for inference. Below are the instructions for how the training data should be formatted; a sketch of how to prepare and upload such a dataset follows this cell.\n", "\n", "- **Input:** A directory containing the instance images, `dataset_info.json`, and an (optional) directory `class_data_dir`.\n", "  - Images may be of `.png`, `.jpg`, or `.jpeg` format.\n", "  - The `dataset_info.json` file must be of the format {'instance_prompt':<>,'class_prompt':<>}.\n", "  - If with_prior_preservation = False, you may choose to ignore 'class_prompt'.\n", "  - The `class_data_dir` directory must contain class images. If with_prior_preservation = True and class_data_dir is not present, or there are not enough images already present in class_data_dir, additional images will be sampled with class_prompt.\n", "- **Output:** A trained model that can be deployed for inference.\n", "\n", "The S3 path should look like `s3://bucket_name/input_directory/`. Note the trailing `/` is required.\n", "\n", "Here is an example format of the training data.\n", "\n", "    input_directory\n", "        |---instance_image_1.png\n", "        |---instance_image_2.png\n", "        |---instance_image_3.png\n", "        |---instance_image_4.png\n", "        |---instance_image_5.png\n", "        |---dataset_info.json\n", "        |---class_data_dir\n", "            |---class_image_1.png\n", "            |---class_image_2.png\n", "            |---class_image_3.png\n", "            |---class_image_4.png\n", "\n", "**Prior preservation, instance prompt and class prompt:** Prior preservation is a technique that uses additional images of the same class that we are trying to train on. For instance, if the training data consists of images of a particular dog, with prior preservation, we incorporate class images of generic dogs. This helps avoid overfitting by showing the model images of different dogs while it trains on a particular dog. The class prompt omits the identifier tag for the specific dog that is present in the instance prompt: for instance, the instance prompt may be \"a photo of a Doppler dog\" while the class prompt is \"a photo of a dog\". You can enable prior preservation by setting the hyperparameter with_prior_preservation = True.\n", "\n", "\n", "\n", "We provide a default dataset of dog images. It consists of images (instance images corresponding to the instance prompt) of a single dog with no class images. If using the default dataset, try the prompt \"a photo of a Doppler dog\" while doing inference in the demo notebook.\n", "\n", "\n", "License: [MIT](https://github.com/marshmellow77/dreambooth-sm/blob/main/LICENSE)." ] },
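{ "cell_type": "markdown", "id": "7d4e5f6a", "metadata": {}, "source": [ "If you want to fine-tune on your own images rather than the default dog dataset, the sketch below shows one way to assemble the expected layout and upload it to S3. The local directory `my_training_images/` and both prompts are illustrative placeholders; adapt them to your data." ] },
{ "cell_type": "code", "execution_count": null, "id": "8e5f6a7b", "metadata": { "tags": [] }, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "from sagemaker.s3 import S3Uploader\n", "\n", "# Hypothetical local folder that already contains your instance images\n", "# (five or more .png/.jpg/.jpeg files work well)\n", "local_dir = Path(\"my_training_images\")\n", "\n", "# dataset_info.json must provide 'instance_prompt' (and 'class_prompt' when\n", "# training with prior preservation)\n", "dataset_info = {\n", "    \"instance_prompt\": \"a photo of a Doppler dog\",\n", "    \"class_prompt\": \"a photo of a dog\",\n", "}\n", "(local_dir / \"dataset_info.json\").write_text(json.dumps(dataset_info))\n", "\n", "# Upload the directory contents to S3; note the training input path\n", "# passed to the estimator must end with a trailing '/'\n", "custom_bucket = sess.default_bucket()\n", "S3Uploader.upload(str(local_dir), f\"s3://{custom_bucket}/custom-sd-dataset\")\n", "custom_dataset_s3_path = f\"s3://{custom_bucket}/custom-sd-dataset/\"" ] },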
{ "cell_type": "markdown", "id": "b8bfaa4d", "metadata": {}, "source": [ "### 2.1. Retrieve Training Artifacts\n", "\n", "---\n", "Here, we retrieve the training docker container, the training algorithm source, and the pre-trained base model. Note that model_version=\"*\" fetches the latest model.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "f11ff722", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker import image_uris, model_uris, script_uris\n", "\n", "# Currently, not all the Stable Diffusion models in JumpStart support fine-tuning. Thus, we manually select a model\n", "# which supports fine-tuning.\n", "train_model_id, train_model_version, train_scope = (\n", "    \"model-txt2img-stabilityai-stable-diffusion-v2-1-base\",\n", "    \"*\",\n", "    \"training\",\n", ")\n", "\n", "# Tested with ml.g4dn.2xlarge (16GB GPU memory) and ml.g5.2xlarge (24GB GPU memory) instances. Other instances may work as well.\n", "# If the ml.g5.2xlarge instance type is available, please change the following instance type to speed up training.\n", "training_instance_type = \"ml.g4dn.2xlarge\"\n", "\n", "# Retrieve the docker image\n", "train_image_uri = image_uris.retrieve(\n", "    region=None,\n", "    framework=None,  # automatically inferred from model_id\n", "    model_id=train_model_id,\n", "    model_version=train_model_version,\n", "    image_scope=train_scope,\n", "    instance_type=training_instance_type,\n", ")\n", "\n", "# Retrieve the training script. This contains all the necessary files including data processing, model training etc.\n", "train_source_uri = script_uris.retrieve(\n", "    model_id=train_model_id, model_version=train_model_version, script_scope=train_scope\n", ")\n", "# Retrieve the pre-trained model tarball to further fine-tune\n", "train_model_uri = model_uris.retrieve(\n", "    model_id=train_model_id, model_version=train_model_version, model_scope=train_scope\n", ")" ] },
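{ "cell_type": "markdown", "id": "9f6a7b8c", "metadata": {}, "source": [ "As the comment above notes, not every Stable Diffusion model supports fine-tuning. The sketch below (assuming the `list_jumpstart_models` utility shipped with recent sagemaker versions) lists the available text-to-image model IDs so you can explore alternatives; fine-tuning support still needs to be verified per model." ] },
{ "cell_type": "code", "execution_count": null, "id": "0a7b8c9d", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.jumpstart.notebook_utils import list_jumpstart_models\n", "\n", "# List all JumpStart model IDs for the text-to-image (txt2img) task\n", "txt2img_models = list_jumpstart_models(filter=\"task == txt2img\")\n", "print(*txt2img_models, sep=\"\\n\")" ] },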
{ "cell_type": "markdown", "id": "6e266289", "metadata": {}, "source": [ "### 2.2. Set Training parameters\n", "\n", "---\n", "Now that we are done with all the setup that is needed, we are ready to train our Stable Diffusion model. To begin, let us create a [``sagemaker.estimator.Estimator``](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) object. This estimator will launch the training job.\n", "\n", "There are two kinds of parameters that need to be set for training. The first are the parameters for the training job: (i) Training data path: the S3 folder in which the input data is stored; (ii) Output path: the S3 folder in which the training output is stored; (iii) Training instance type: the type of machine on which to run the training. We defined the training instance type above to fetch the correct train_image_uri.\n", "\n", "The second set of parameters are algorithm-specific training hyperparameters.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "e21c709f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Sample training data is available in this bucket\n", "training_data_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n", "training_data_prefix = \"training-datasets/dogs_sd_finetuning/\"\n", "\n", "training_dataset_s3_path = f\"s3://{training_data_bucket}/{training_data_prefix}\"\n", "\n", "output_bucket = sess.default_bucket()\n", "output_prefix = \"jumpstart-example-sd-training\"\n", "\n", "s3_output_location = f\"s3://{output_bucket}/{output_prefix}/output\"" ] },
{ "cell_type": "markdown", "id": "adda2a1e", "metadata": {}, "source": [ "---\n", "For algorithm-specific hyperparameters, we start by fetching a Python dictionary of the training hyperparameters that the algorithm accepts, with their default values. These can then be overridden with custom values.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "aa371787", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker import hyperparameters\n", "\n", "# Retrieve the default hyperparameters for fine-tuning the model\n", "hyperparameters = hyperparameters.retrieve_default(\n", "    model_id=train_model_id, model_version=train_model_version\n", ")\n", "\n", "# [Optional] Override default hyperparameters with custom values\n", "hyperparameters[\"max_steps\"] = \"400\"\n", "print(hyperparameters)" ] },
{ "cell_type": "markdown", "id": "d102c884", "metadata": {}, "source": [ "---\n", "If setting `with_prior_preservation=True`, please use the ml.g5.2xlarge instance type, as more memory is required to generate class images. Currently, training on the ml.g4dn.2xlarge instance type runs into a CUDA out-of-memory issue when setting `with_prior_preservation=True`.\n", "\n", "---" ] },
{ "cell_type": "markdown", "id": "7cda2854", "metadata": {}, "source": [ "### 2.3. Start Training\n", "---\n", "We start by creating the estimator object with all the required assets and then launch the training job. It takes less than 10 minutes on the default dataset.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "76bdbb83", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "from sagemaker.estimator import Estimator\n", "from sagemaker.utils import name_from_base\n", "\n", "training_job_name = name_from_base(f\"jumpstart-example-{train_model_id}-transfer-learning\")\n", "\n", "# Create SageMaker Estimator instance\n", "sd_estimator = Estimator(\n", "    role=aws_role,\n", "    image_uri=train_image_uri,\n", "    source_dir=train_source_uri,\n", "    model_uri=train_model_uri,\n", "    entry_point=\"transfer_learning.py\",  # Entry-point file in source_dir and present in train_source_uri.\n", "    instance_count=1,\n", "    instance_type=training_instance_type,\n", "    max_run=360000,\n", "    hyperparameters=hyperparameters,\n", "    output_path=s3_output_location,\n", "    base_job_name=training_job_name,\n", ")\n", "\n", "# Launch a SageMaker Training job by passing the S3 path of the training data\n", "sd_estimator.fit({\"training\": training_dataset_s3_path}, logs=True)" ] },
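{ "cell_type": "markdown", "id": "1b8c9d0e", "metadata": {}, "source": [ "If the kernel restarts or the connection drops while the training job runs, you do not need to retrain. The sketch below re-attaches an estimator to the job by name using `Estimator.attach` (the exact job name is printed in the training logs and shown in the SageMaker console)." ] },
{ "cell_type": "code", "execution_count": null, "id": "2c9d0e1f", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.estimator import Estimator\n", "\n", "# Look up the name of the training job launched above\n", "existing_job_name = sd_estimator.latest_training_job.name\n", "\n", "# attach() rebuilds an estimator bound to that job, so deploy() works without retraining\n", "attached_estimator = Estimator.attach(existing_job_name)" ] },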
{ "cell_type": "markdown", "id": "6fadc21e", "metadata": {}, "source": [ "### 2.4. Deploy and run inference on the fine-tuned model\n", "\n", "---\n", "\n", "A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means generating images from a text prompt. We start by retrieving the JumpStart artifacts for deploying an endpoint. However, instead of the pre-trained base model, we deploy the `sd_estimator` that we fine-tuned.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "fcf25c1e", "metadata": { "tags": [] }, "outputs": [], "source": [ "inference_instance_type = \"ml.g4dn.2xlarge\"\n", "\n", "# Retrieve the inference docker container uri\n", "deploy_image_uri = image_uris.retrieve(\n", "    region=None,\n", "    framework=None,  # automatically inferred from model_id\n", "    image_scope=\"inference\",\n", "    model_id=train_model_id,\n", "    model_version=train_model_version,\n", "    instance_type=inference_instance_type,\n", ")\n", "# Retrieve the inference script uri. This includes scripts for model loading, inference handling etc.\n", "deploy_source_uri = script_uris.retrieve(\n", "    model_id=train_model_id, model_version=train_model_version, script_scope=\"inference\"\n", ")\n", "\n", "endpoint_name = name_from_base(f\"jumpstart-example-FT-{train_model_id}-\")\n", "\n", "# Use the estimator from the previous step to deploy to a SageMaker endpoint\n", "finetuned_predictor = sd_estimator.deploy(\n", "    initial_instance_count=1,\n", "    instance_type=inference_instance_type,\n", "    entry_point=\"inference.py\",  # entry point file in source_dir and present in deploy_source_uri\n", "    image_uri=deploy_image_uri,\n", "    source_dir=deploy_source_uri,\n", "    endpoint_name=endpoint_name,\n", ")" ] },
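{ "cell_type": "markdown", "id": "3d0e1f2a", "metadata": {}, "source": [ "Deployment can take several minutes. If you later want to query the endpoint from a fresh session without redeploying, a minimal sketch (assuming the endpoint above still exists) is to bind a `Predictor` directly to the endpoint name; the hypothetical `restored_predictor` below can then be used exactly like `finetuned_predictor`." ] },
{ "cell_type": "code", "execution_count": null, "id": "4e1f2a3b", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.predictor import Predictor\n", "\n", "# Bind to the already-deployed endpoint by name; no new deployment happens\n", "restored_predictor = Predictor(\n", "    endpoint_name=endpoint_name,\n", "    sagemaker_session=sess,\n", ")" ] },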
] }, { "cell_type": "code", "execution_count": null, "id": "aaae5307-4beb-4f55-a389-a06c89cbc02c", "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "\n", "def query(model_predictor, text):\n", " \"\"\"Query the model predictor.\"\"\"\n", "\n", " encoded_text = text.encode(\"utf-8\")\n", "\n", " query_response = model_predictor.predict(\n", " encoded_text,\n", " {\n", " \"ContentType\": \"application/x-text\",\n", " \"Accept\": \"application/json\",\n", " },\n", " )\n", " return query_response\n", "\n", "\n", "def parse_response(query_response):\n", " \"\"\"Parse response and return generated image and the prompt\"\"\"\n", "\n", " response_dict = json.loads(query_response)\n", " return response_dict[\"generated_image\"], response_dict[\"prompt\"]\n", "\n", "\n", "def display_img_and_prompt(img, prmpt):\n", " \"\"\"Display hallucinated image.\"\"\"\n", " plt.figure(figsize=(12, 12))\n", " plt.imshow(np.array(img))\n", " plt.axis(\"off\")\n", " plt.title(prmpt)\n", " plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "6a4bf38f", "metadata": { "pycharm": { "is_executing": true }, "tags": [] }, "outputs": [], "source": [ "text = \"a photo of a Doppler dog with a hat\"\n", "query_response = query(finetuned_predictor, text)\n", "img, prmpt = parse_response(query_response)\n", "display_img_and_prompt(img, prmpt)" ] }, { "cell_type": "markdown", "id": "944a6f0f", "metadata": {}, "source": [ "All the parameters mentioned in [2.4. Supported Inference parameters](#2.4.-Supported-Inference-parameters) are supported with finetuned model as well. You may also receive compressed image output as in [2.5. Compressed Image Output](#2.5.-Compressed-Image-Output) by changing `accept`." ] }, { "cell_type": "markdown", "id": "f3381a2c", "metadata": {}, "source": [ "---\n", "Next, we delete the endpoint corresponding to the finetuned model.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "b03c8594", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Delete the SageMaker endpoint\n", "finetuned_predictor.delete_model()\n", "finetuned_predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "id": "a504c9ac", "metadata": {}, "source": [ "## 3. Conclusion\n", "---\n", "\n", "Although creating impressive images can find use in industries ranging from art to NFTs and beyond, today we also expect AI to be personalizable. JumpStart provides fine-tuning capability to the pre-trained models so that you can adapt the model to your own use case with as little as five training images. This can be useful when creating art, logos, custom designs, NFTs, and so on, or fun stuff such as generating custom AI images of your pets or avatars of yourself. In this lab, we learned how to fine-tune a stable diffusion text to image generation model. To learn more about Stable Diffusion fine-tuning, please check out the blog [Fine-tune text-to-image Stable Diffusion models with Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/fine-tune-text-to-image-stable-diffusion-models-with-amazon-sagemaker-jumpstart/)." 
] }, { "cell_type": "code", "execution_count": null, "id": "e5238f67-d001-441b-b054-129270e3a641", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 21, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", 
"gpuNum": 0, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 28, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 29, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", 
"gpuNum": 0, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 2.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" }, "pycharm": { "stem_cell": { "cell_type": "raw", "metadata": { "collapsed": false }, "source": [] } } }, "nbformat": 4, "nbformat_minor": 5 }