{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Train reward model with human feedback\n", "\n", "The reward model is trained on a human-labeled dataset with the preferred `star_rating` for a given review. The model flattens the human-labeled data from Ground Truth into (review, star_rating, ranking) tuples and provides a reward score for the RL-based model fine-tuning.\n", "\n", "![Pipeline](img/generative_ai_pipeline_rlhf_plus.png)\n", "\n", "![RLHF](img/rlhf_summarization.png)\n", "\n", "![Convert human ranking data into reward dataset](img/convert_groundtruth_ranking_data_to_reward_model_dataset_summarization.png)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "svmem(total=32877658112, available=8898641920, percent=72.9, used=23516119040, free=3900121088, active=24321519616, inactive=3359760384, buffers=0, cached=5461417984, shared=1404928, slab=504270848)\n" ] } ], "source": [ "import psutil\n", "\n", "notebook_memory = psutil.virtual_memory()\n", "print(notebook_memory)\n", "\n", "if notebook_memory.total < 32 * 1000 * 1000 * 1000:\n", " print('*******************************************') \n", " print('YOU ARE NOT USING THE CORRECT INSTANCE TYPE')\n", " print('PLEASE CHANGE INSTANCE TYPE TO m5.2xlarge ')\n", " print('*******************************************')\n", "else:\n", " correct_instance_type=True" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# %pip install --disable-pip-version-check -q \\\n", "# transformers==4.26.1 \\\n", "# datasets==2.9.0 \\\n", "# accelerate==0.17.0 \\\n", "# bitsandbytes==0.37.0 \\\n", "# promptsource==0.2.3 \\\n", "# trl==0.4.1 \\\n", "# evaluate==0.4.0" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "tags": [] }, "outputs": [], "source": [ "import boto3\n", "import sagemaker\n", "import pandas as pd\n", "\n", "sess = 
sagemaker.Session()\n", "bucket = sess.default_bucket()\n", "role = sagemaker.get_execution_role()\n", "region = boto3.Session().region_name" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [] }, "outputs": [], "source": [ "import io\n", "import json\n", "import uuid\n", "import time\n", "import boto3\n", "import botocore\n", "\n", "# Amazon Python SDK clients\n", "sagemaker = boto3.client(\"sagemaker\", region)\n", "a2i = boto3.client(\"sagemaker-a2i-runtime\")\n", "s3 = boto3.client(\"s3\", region)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "tags": [] }, "outputs": [], "source": [ "import os\n", "import glob\n", "import numpy as np\n", "import argparse\n", "import pprint\n", "from collections import defaultdict\n", "\n", "import torch\n", "import torch.distributed as dist\n", "import torch.nn as nn\n", "import torch.nn.functional as F\n", "import torch.optim as optim\n", "import torch.utils.data\n", "import torch.utils.data.distributed\n", "from torch.utils.data import Dataset, DataLoader\n", "\n", "from transformers import AutoConfig, AutoModelForSequenceClassification\n", "from transformers import AdamW, get_linear_schedule_with_warmup" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "tags": [] }, "outputs": [], "source": [ "%store -r human_feedback_dataset" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "tags": [] }, "outputs": [], "source": [ "try:\n", " human_feedback_dataset\n", "except NameError:\n", " print(\"+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\")\n", " print(\"[ERROR] Please run the notebooks in the previous section before you continue.\")\n", " print(\"+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\")" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset({\n", " features: ['prompt', 'response', 
'reward'],\n", " num_rows: 2\n", "})\n" ] } ], "source": [ "print(human_feedback_dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Train a reward model with human preference and alignment data\n", "The reward model is typically a language model initialized from the supervised fine-tuned (SFT) model trained in a previous notebook, with an additional classification head that emits a single scalar reward (num_labels=1 below). The reward model is then used to train the reinforcement-learning model in the next step, and it is the reinforcement-learning model that is deployed to production to serve applications." ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "# %store -r supervised_fine_tuned_model_path" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "# try:\n", "# supervised_fine_tuned_model_path\n", "# except NameError:\n", "# print(\"+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\")\n", "# print(\"[ERROR] Please run the notebooks in the previous section before you continue.\")\n", "# print(\"+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\")" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "tags": [] }, "outputs": [], "source": [ "# TODO: switch back to the stored `supervised_fine_tuned_model_path` variable\n", "supervised_fine_tuned_model_path = 'google/flan-t5-base'" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "google/flan-t5-base\n" ] } ], "source": [ "print(supervised_fine_tuned_model_path)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "tags": [] }, "outputs": [], "source": [ "from dataclasses import dataclass, field\n", "from typing import Any, Dict, List, Optional, Union\n", "\n", "import evaluate\n", "import numpy as np\n", "import torch.nn as nn\n", "from transformers import (\n", " 
AutoModelForSequenceClassification,\n", " AutoTokenizer,\n", " HfArgumentParser,\n", " PreTrainedTokenizerBase,\n", " Trainer,\n", " TrainingArguments,\n", ")\n", "from transformers.utils import PaddingStrategy" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "tags": [] }, "outputs": [], "source": [ "tokenizer = AutoTokenizer.from_pretrained(supervised_fine_tuned_model_path)" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Turn the dataset into pairs of prompt + responses, where text_j is the preferred prompt + response and text_k is the other.\n", "def turn_into_text_classification_format(examples):\n", " new_examples = {\"text_j\": [], \"text_k\": []}\n", " for prompt, response, reward in zip(examples[\"prompt\"], examples[\"response\"], examples[\"reward\"]):\n", " # Each example must contain exactly two responses: one rejected (reward 0) and one preferred (reward 1).\n", " if len(response) != 2 or sorted(reward) != [0, 1]:\n", " raise ValueError(\n", " f\"There should be exactly two responses with rewards 0 and 1. Received {len(response)} responses and rewards {reward}.\"\n", " )\n", " \n", " reward_response_index = reward.index(1)\n", "\n", " new_examples[\"text_j\"].append(\n", " prompt + \" \" + str(response[reward_response_index])\n", " )\n", " new_examples[\"text_k\"].append(\n", " prompt + \" \" + str(response[0 if reward_response_index == 1 else 1])\n", " )\n", "\n", " return new_examples\n", "\n", "# Tokenize the dataset.\n", "def preprocess_function(examples):\n", " tokenized_j = tokenizer(examples[\"text_j\"], truncation=True)\n", " tokenized_k = tokenizer(examples[\"text_k\"], truncation=True)\n", " return {\n", " \"input_ids_j\": tokenized_j[\"input_ids\"],\n", " \"attention_mask_j\": tokenized_j[\"attention_mask\"],\n", " \"input_ids_k\": tokenized_k[\"input_ids\"],\n", " \"attention_mask_k\": tokenized_k[\"attention_mask\"],\n", " }" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "tags": [] }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "num_proc must be <= 2. 
Reducing num_proc to 2 for dataset of size 2.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "['prompt', 'response', 'reward']\n", " " ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a91bdccfe01b40a5bbc4a185f1684a58", "version_major": 2, "version_minor": 0 }, "text/plain": [ "#0: 0%| | 0/1 [00:00<?, ?ba/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "num_proc = 8 # reduced automatically if larger than the dataset size\n", "original_columns = human_feedback_dataset.column_names\n", "print(original_columns)\n", "\n", "human_feedback_tokenized_dataset = human_feedback_dataset.map(\n", " turn_into_text_classification_format, batched=True, num_proc=num_proc, remove_columns=original_columns\n", ").map(preprocess_function, batched=True, num_proc=num_proc)" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "tags": [] }, "outputs": [], "source": [ "accuracy = evaluate.load(\"accuracy\")\n", "\n", "def compute_metrics(eval_pred):\n", " predictions, _ = eval_pred\n", " # Here, predictions is rewards_j and rewards_k.\n", " # We want to see how much of the time rewards_j > rewards_k.\n", " predictions = np.argmax(predictions, axis=0)\n", " labels = np.zeros(predictions.shape)\n", " return accuracy.compute(predictions=predictions, references=labels)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "tags": [] }, "outputs": [], "source": [ "# We need to define a special data collator that batches the data in our j vs k format.\n", "@dataclass\n", "class RewardDataCollatorWithPadding:\n", " tokenizer: PreTrainedTokenizerBase\n", " padding: Union[bool, str, PaddingStrategy] = True\n", " max_length: Optional[int] = None\n", " pad_to_multiple_of: Optional[int] = None\n", " return_tensors: str = \"pt\"\n", "\n", " def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:\n", " features_j = []\n", " features_k = []\n", " for feature in features:\n", " features_j.append({\"input_ids\": feature[\"input_ids_j\"], \"attention_mask\": feature[\"attention_mask_j\"]})\n", " features_k.append({\"input_ids\": feature[\"input_ids_k\"], \"attention_mask\": feature[\"attention_mask_k\"]})\n", " batch_j = self.tokenizer.pad(\n", " features_j,\n", " padding=self.padding,\n", " max_length=self.max_length,\n", " pad_to_multiple_of=self.pad_to_multiple_of,\n", " return_tensors=self.return_tensors,\n", " )\n", " batch_k = self.tokenizer.pad(\n", " features_k,\n", " padding=self.padding,\n", " max_length=self.max_length,\n", " pad_to_multiple_of=self.pad_to_multiple_of,\n", " return_tensors=self.return_tensors,\n", " )\n", " batch = {\n", " \"input_ids_j\": batch_j[\"input_ids\"],\n", " \"attention_mask_j\": batch_j[\"attention_mask\"],\n", " \"input_ids_k\": batch_k[\"input_ids\"],\n", " 
\"attention_mask_k\": batch_k[\"attention_mask\"],\n", " \"return_loss\": True,\n", " }\n", " return batch" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "tags": [] }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'lm_head.bias', 'lm_head.decoder.weight', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.dense.bias']\n", "- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", "Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias', 'classifier.dense.weight']\n", "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ] } ], "source": [ "# We are using RoBERTa because it's the basis of a good, lightweight reward classifier model. 
\n", "# For more info, see Practical Data Science on AWS from DeepLearning.ai\n", "ranking_reward_custom_model_name = 'roberta-base'\n", "ranking_reward_custom_model = AutoModelForSequenceClassification.from_pretrained(ranking_reward_custom_model_name, num_labels=1)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/opt/conda/lib/python3.7/site-packages/transformers/optimization.py:395: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " FutureWarning,\n", "You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n", "Could not estimate the number of tokens of the input, floating-point operations will not be computed\n" ] }, { "data": { "text/html": [ "\n", "
\n", " \n", " \n", " [1/1 00:08, Epoch 1/1]\n", "
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
StepTraining Loss

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [ "TrainOutput(global_step=1, training_loss=0.18226858973503113, metrics={'train_runtime': 9.6516, 'train_samples_per_second': 0.207, 'train_steps_per_second': 0.104, 'total_flos': 0.0, 'train_loss': 0.18226858973503113, 'epoch': 1.0})" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "class RewardTrainer(Trainer):\n", " # Define how to compute the reward loss.\n", " def compute_loss(self, model, inputs, return_outputs=False):\n", " rewards_j = model(input_ids=inputs[\"input_ids_j\"], attention_mask=inputs[\"attention_mask_j\"])[0]\n", " rewards_k = model(input_ids=inputs[\"input_ids_k\"], attention_mask=inputs[\"attention_mask_k\"])[0]\n", " loss = -nn.functional.logsigmoid(rewards_j - rewards_k).mean()\n", " if return_outputs:\n", " return loss, {\"rewards_j\": rewards_j, \"rewards_k\": rewards_k}\n", " return loss\n", "\n", "# Define and parse arguments.\n", "local_rank = 0\n", "resume_from_checkpoint = False\n", "deepspeed = None\n", "per_device_train_batch_size = 16\n", "per_device_eval_batch_size = 16\n", "gradient_accumulation_steps = 4\n", "learning_rate = 2e-5\n", "weight_decay = 0.001\n", "bf16 = False\n", "num_train_epochs = 1\n", "\n", "ranking_reward_model_custom_checkpoint = './ranking_reward_model_custom/'\n", "\n", "# Define the training args. 
Needs to be done before the model is loaded if you are using deepspeed.\n", "training_args = TrainingArguments(\n", " output_dir=ranking_reward_model_custom_checkpoint,\n", " learning_rate=learning_rate,\n", " per_device_train_batch_size=per_device_train_batch_size,\n", " per_device_eval_batch_size=per_device_eval_batch_size,\n", " num_train_epochs=num_train_epochs,\n", " weight_decay=weight_decay,\n", "# evaluation_strategy=\"epoch\",\n", " save_strategy=\"epoch\",\n", " gradient_accumulation_steps=gradient_accumulation_steps,\n", "# deepspeed=deepspeed,\n", "# local_rank=local_rank,\n", " remove_unused_columns=False,\n", " label_names=[],\n", ")\n", " \n", "# Train the model, woohoo.\n", "trainer = RewardTrainer(\n", " model=ranking_reward_custom_model,\n", " args=training_args,\n", " train_dataset=human_feedback_tokenized_dataset, #[\"train\"],\n", "# eval_dataset=tokenized_ds[\"validation\"],\n", " compute_metrics=compute_metrics,\n", " data_collator=RewardDataCollatorWithPadding(tokenizer=tokenizer),\n", ")\n", "\n", "trainer.train(resume_from_checkpoint)" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "('./ranking_reward_model_custom/tokenizer_config.json',\n", " './ranking_reward_model_custom/special_tokens_map.json',\n", " './ranking_reward_model_custom/tokenizer.json')" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "trainer.save_model(ranking_reward_model_custom_checkpoint)\n", "tokenizer.save_pretrained(ranking_reward_model_custom_checkpoint)" ] }, { "cell_type": "code", "execution_count": 47, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Stored 'ranking_reward_model_custom_checkpoint' (str)\n" ] } ], "source": [ "%store ranking_reward_model_custom_checkpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [] }, { "cell_type": "code", 
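"execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A quick sanity check (a sketch, not part of the original notebook): the pairwise\n", "# ranking loss used in RewardTrainer.compute_loss above is -logsigmoid(r_j - r_k),\n", "# which shrinks as the preferred response's reward pulls ahead of the rejected one's.\n", "# The reward values below are hypothetical.\n", "import torch\n", "import torch.nn as nn\n", "\n", "rewards_j = torch.tensor([1.5]) # hypothetical reward for the preferred response\n", "rewards_k = torch.tensor([0.5]) # hypothetical reward for the rejected response\n", "loss = -nn.functional.logsigmoid(rewards_j - rewards_k).mean()\n", "print(loss.item()) # ~0.3133, i.e. -log(sigmoid(1.0))" ] }, { "cell_type": "code", 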
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "# from transformers import TextClassificationPipeline\n", "# from transformers import pipeline\n", "\n", "# tokenizer = AutoTokenizer.from_pretrained(ranking_reward_model_custom_checkpoint)\n", "# ranking_reward_model_custom_model = AutoModelForSequenceClassification.from_pretrained(ranking_reward_model_custom_checkpoint, num_labels=1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# conversation = ''\n", "# summary = ''\n", "\n", "# # run the conversation + summary pair through the reward classifier\n", "# #prompt_and_response_pair_classification = ranking_reward_model_custom_model ...\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%html\n", "\n", "

<p><b>Shutting down your kernel for this notebook to release resources.</b></p>

\n", "\n", " \n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", 
"gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": 
true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, 
"category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, 
"_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { 
"_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.m5.2xlarge", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }