{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "a84cc22c",
"metadata": {},
"source": [
"# Introduction to Large Language Model Hosting on SageMaker with DeepSpeed Container\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "9eda5e96",
"metadata": {},
"source": [
"##### This notebook has been taken from the Generative Ai Hosting workshop at [SageMaker Examples repo](https://github.com/aws/amazon-sagemaker-examples/tree/main/inference/generativeai/llm-workshop) \n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "198bb2ca",
"metadata": {},
"source": [
"\n",
"In this notebook, we explore how to host a large language model on SageMaker using the [Large Model Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-inference.html) container that is optimized for hosting large models using DJLServing. DJLServing is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming language agnostic. To learn more about DJL and DJLServing, you can refer to our recent [blog post](https://aws.amazon.com/blogs/machine-learning/deploy-large-models-on-amazon-sagemaker-using-djlserving-and-deepspeed-model-parallel-inference/).\n",
"\n",
"Language models have recently exploded in both size and popularity. In 2018, BERT-large entered the scene and, with its 340M parameters and novel transformer architecture, set the standard on NLP task accuracy. Within just a few years, state-of-the-art NLP model size has grown by more than 500x with models such as OpenAI’s 175 billion parameter GPT-3 and similarly sized open source Bloom 176B raising the bar on NLP accuracy. This increase in the number of parameters is driven by the simple and empirically-demonstrated positive relationship between model size and accuracy: more is better. With easy access from models zoos such as Hugging Face and improved accuracy in NLP tasks such as classification and text generation, practitioners are increasingly reaching for these large models. However, deploying them can be a challenge because of their size.\n",
"\n",
"In this notebook, we deploy the open source GPT-J Model which is comprised of 6B parameters on a single GPU. Along the way we will explore approaches that will allow us to scale to larger models with practically no code changes.\n",
"\n",
"This notebook was tested on a `ml.t3.medium` instance using the `Python 3 (Data Science)` kernel on SageMaker Studio."
]
},
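{
"attachments": {},
"cell_type": "markdown",
"id": "mem-estimate-md",
"metadata": {},
"source": [
"To get a sense of why hosting these models is challenging, we can do a quick back-of-the-envelope calculation of the memory needed just to hold the weights: roughly 4 bytes per parameter in FP32, 2 in FP16, and 1 in int8. This is a rough sketch only; real deployments also need memory for activations, attention caches, and framework overhead."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "mem-estimate-code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# rough estimate of GPT-J 6B weight memory by precision (weights only)\n",
"num_params = 6e9\n",
"for dtype, bytes_per_param in [(\"fp32\", 4), (\"fp16\", 2), (\"int8\", 1)]:\n",
"    print(f\"{dtype}: ~{num_params * bytes_per_param / 1024**3:.0f} GiB\")"
]
},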
{
"attachments": {},
"cell_type": "markdown",
"id": "9f899bc2",
"metadata": {},
"source": [
"## Create a SageMaker Model for Deployment\n",
"As a first step, we'll import the relevant libraries and configure several global variables such as the hosting image that will be used nd the S3 location of our model artifacts"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae274590-2828-4592-8b01-219797b226a9",
"metadata": {},
"outputs": [],
"source": [
"!pip install sagemaker boto3 --upgrade --quiet"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc9515a9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import sagemaker\n",
"from sagemaker.model import Model\n",
"from sagemaker import serializers, deserializers\n",
"from sagemaker import image_uris\n",
"import boto3\n",
"import os\n",
"import time\n",
"import json\n",
"import jinja2\n",
"from pathlib import Path"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ffef362",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"role = sagemaker.get_execution_role() # execution role for the endpoint\n",
"sess = sagemaker.session.Session() # sagemaker session for interacting with different AWS APIs\n",
"bucket = sess.default_bucket() # bucket to house artifacts\n",
"model_bucket = sess.default_bucket() # bucket to house artifacts\n",
"s3_code_prefix = \"large-model-djl-gptj6b/code\" # folder within bucket where code artifact will go\n",
"s3_model_prefix = \"hf-large-model-djl-gptj6b/model\" # folder where model checkpoint will go\n",
"\n",
"region = sess._region_name # region name of the current SageMaker Studio environment\n",
"account_id = sess.account_id() # account_id of the current SageMaker Studio environment\n",
"\n",
"s3_client = boto3.client(\"s3\") # client to intreract with S3 API\n",
"sm_client = boto3.client(\"sagemaker\") # client to intreract with SageMaker\n",
"smr_client = boto3.client(\"sagemaker-runtime\") # client to intreract with SageMaker Endpoints\n",
"jinja_env = jinja2.Environment() # jinja environment to generate model configuration templates"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c88a9b1",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# lookup the inference image uri based on our current region\n",
"inference_image_uri = (\n",
" f\"763104351884.dkr.ecr.{region}.amazonaws.com/djl-inference:0.20.0-deepspeed0.7.5-cu116\"\n",
")\n",
"print(f\"Image going to be used is ---- > {inference_image_uri}\")"
]
},
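{
"attachments": {},
"cell_type": "markdown",
"id": "djl-image-uris-md",
"metadata": {},
"source": [
"Alternatively, recent versions of the SageMaker Python SDK can look up the DJL container image for us via `image_uris.retrieve`. The cell below is a sketch, assuming your installed SDK registers the `djl-deepspeed` framework and the `0.20.0` version; if it doesn't, we fall back to the hardcoded URI above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "djl-image-uris-code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# equivalent lookup through the SDK image registry\n",
"try:\n",
"    sdk_image_uri = image_uris.retrieve(framework=\"djl-deepspeed\", region=region, version=\"0.20.0\")\n",
"    print(f\"SDK resolved image ---- > {sdk_image_uri}\")\n",
"except Exception as e:\n",
"    print(f\"SDK lookup not available ({e}); using the hardcoded image URI above\")"
]
},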
{
"cell_type": "code",
"execution_count": null,
"id": "2ad9e457",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# lookup the S3 model location based on our region\n",
"pretrained_model_location = f\"s3://sagemaker-example-files-prod-{region}/models/gpt-j-6b-model/\"\n",
"print(f\"Pretrained model will be downloaded from ---- > {pretrained_model_location}\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "d73f2b49",
"metadata": {},
"source": [
"## Deploying a Large Language Model using Hugging Face Accelerate\n",
"The DJL Inference Image which we will be utilizing ships with a number of built-in inference handlers for a wide variety of tasks including:\n",
"- `text-generation`\n",
"- `question-answering`\n",
"- `text-classification`\n",
"- `token-classification`\n",
"\n",
"You can refer to this [GitRepo](https://github.com/deepjavalibrary/djl-serving/tree/master/engines/python/setup/djl_python) for a list of additional handlers and available NLP Tasks.
\n",
"These handlers can be utilized as is without having to write any custom inference code. We simply need to create a `serving.properties` text file with our desired hosting options and package it up into a `tar.gz` artifact.\n",
"\n",
"Lets take a look at the `serving.properties` file that we'll be using for our first example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "52de9bc2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# we plug in the appropriate model location into our `serving.properties` file based on the region in which this notebook is running\n",
"template = jinja_env.from_string(Path(\"accelerate_src/serving.template\").open().read())\n",
"Path(\"accelerate_src/serving.properties\").open(\"w\").write(\n",
" template.render(s3url=pretrained_model_location)\n",
")\n",
"!pygmentize accelerate_src/serving.properties | cat -n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3f8d0b1c",
"metadata": {},
"source": [
"There are a few options specified here. Lets go through them in turn
\n",
"1. `engine` - specifies the engine that will be used for this workload. In this case we'll be hosting a model using the [DJL Python Engine](https://github.com/deepjavalibrary/djl-serving/tree/master/engines/python)\n",
"2. `option.entryPoint` - specifies the entrypoint code that will be used to host the model. djl_python.huggingface refers to the `huggingface.py` module from [djl_python repo](https://github.com/deepjavalibrary/djl-serving/tree/master/engines/python/setup/djl_python). \n",
"3. `option.s3url` - specifies the location of the model files. Alternativelly an `option.model_id` option can be used instead to specifiy a model from Hugging Face Hub (e.g. `EleutherAI/gpt-j-6B`) and the model will be automatically downloaded from the Hub. The s3url approach is recommended as it allows you to host the model artifact within your own environment and enables faster deployments by utilizing optimized approach within the DJL inference container to transfer the model from S3 into the hosting instance \n",
"4. `option.task` - This is specific to the `huggingface.py` inference handler and specifies for which task this model will be used\n",
"5. `option.device_map` - Enables layer-wise model partitioning through [Hugging Face Accelerate](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map). With `option.device_map=auto`, Accelerate will determine where to put each **layer** to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don’t have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.\n",
"6. `option.load_in_8bit` - Quantizes the model weights to int8 thereby greatly reducing the memory footprint of the model from the initial FP32. See this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration) from Hugging Face for additional information \n",
"\n",
"For more information on the available options, please refer to the [SageMaker Large Model Inference Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-configuration.html)\n",
"\n",
"Our initial approach here is to utilize the built-in functionality within Hugging Face Transformers to enable Large Language Model hosting. These are exposed through the `device_map` and `load_in_8bit` parameters which enable sharding and shrinking of the model. The sharding approach taken here is layer wise as individual model layers are placed onto different GPU devices and data flows sequentially from the input to the final output layer as illustated below
\n",
"
\n",
"\n",
"Even though in this example the model will be running on a single GPU and will not be sharded, this parameter would automatically apply sharding as we scale to larger models on multi-GPU instances."
]
},
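{
"attachments": {},
"cell_type": "markdown",
"id": "acc-props-example-md",
"metadata": {},
"source": [
"Putting these options together, a representative `serving.properties` for this example might look like the sketch below. The exact values are assumptions for illustration; the authoritative file is rendered from `accelerate_src/serving.template` by the cell above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acc-props-example-code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# illustrative only -- does not overwrite the rendered accelerate_src/serving.properties\n",
"example_acc_properties = f\"\"\"engine=Python\n",
"option.entryPoint=djl_python.huggingface\n",
"option.s3url={pretrained_model_location}\n",
"option.task=text-generation\n",
"option.device_map=auto\n",
"option.load_in_8bit=TRUE\n",
"\"\"\"\n",
"print(example_acc_properties)"
]
},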
{
"attachments": {},
"cell_type": "markdown",
"id": "4c2d0302",
"metadata": {},
"source": [
"We place the `serving.properties` file into a tarball and upload it to S3"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a9ac570",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!tar czvf acc_model.tar.gz accelerate_src/"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dfd0ce74",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"hf_s3_code_artifact = sess.upload_data(\"acc_model.tar.gz\", bucket, s3_code_prefix)\n",
"print(f\"S3 Code or Model tar ball uploaded to --- > {hf_s3_code_artifact}\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "5e4bb2e7",
"metadata": {},
"source": [
"## Deploy Model to a SageMaker Endpoint\n",
"With a helper function we can now deploy our endpoint and invoke it with some sample inputs"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "30c4991b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def deploy_model(image_uri, model_data, role, endpoint_name, instance_type, sagemaker_session):\n",
" \"\"\"Helper function to create the SageMaker Endpoint resources and return a predictor\"\"\"\n",
" model = Model(image_uri=image_uri, model_data=model_data, role=role)\n",
"\n",
" model.deploy(initial_instance_count=1, instance_type=instance_type, endpoint_name=endpoint_name)\n",
"\n",
" # our requests and responses will be in json format so we specify the serializer and the deserializer\n",
" predictor = sagemaker.Predictor(\n",
" endpoint_name=endpoint_name,\n",
" sagemaker_session=sagemaker_session,\n",
" serializer=serializers.JSONSerializer(),\n",
" deserializer=deserializers.JSONDeserializer(),\n",
" )\n",
"\n",
" return predictor"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f3631412",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# creates a unique endpoint name\n",
"hf_endpoint_name = sagemaker.utils.name_from_base(\"gptj-acc\")\n",
"print(f\"Our endpoint will be called {hf_endpoint_name}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc206d70",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# deployment will take about 10 minutes\n",
"hf_predictor = deploy_model(\n",
" image_uri=inference_image_uri,\n",
" model_data=hf_s3_code_artifact,\n",
" role=role,\n",
" endpoint_name=hf_endpoint_name,\n",
" instance_type=\"ml.g4dn.4xlarge\",\n",
" sagemaker_session=sess,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "287d7f09-209b-4c39-9f40-cead808dac81",
"metadata": {},
"source": [
"Let's run an example with a basic text generation prompt `Large model inference is`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ec49948-7ad2-4dac-8db5-35dbd9a32240",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"hf_predictor.predict(\n",
" {\"inputs\": \"Large model inference is\", \"parameters\": {\"max_length\": 50, \"temperature\": 0.5}}\n",
")"
]
},
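{
"attachments": {},
"cell_type": "markdown",
"id": "smr-invoke-md",
"metadata": {},
"source": [
"Under the hood, the predictor serializes our dictionary to JSON and calls the SageMaker Runtime API. For reference, a sketch of the same request issued directly with the `smr_client` we created earlier, without the Predictor convenience wrapper:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "smr-invoke-code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# same request issued directly against the SageMaker Runtime API\n",
"response = smr_client.invoke_endpoint(\n",
"    EndpointName=hf_endpoint_name,\n",
"    ContentType=\"application/json\",\n",
"    Body=json.dumps(\n",
"        {\"inputs\": \"Large model inference is\", \"parameters\": {\"max_length\": 50, \"temperature\": 0.5}}\n",
"    ),\n",
")\n",
"print(json.loads(response[\"Body\"].read()))"
]
},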
{
"attachments": {},
"cell_type": "markdown",
"id": "042080bf-0726-4092-9a4d-76cf49796fad",
"metadata": {},
"source": [
"Now let's try another example where we provide the a few samples of text and sentiment pairs and ask it to classify a new example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b6cc88fc-2e0c-4b09-b7f9-1522a43117da",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"print(\n",
" hf_predictor.predict(\n",
" {\n",
" \"inputs\": \"\"\"Message: Support has been terrible for 2 weeks...\n",
" Sentiment: Negative\n",
" ###\n",
" Message: I love your API, it is simple and so fast!\n",
" Sentiment: Positive\n",
" ###\n",
" Message: GPT-J has been released 12 months ago.\n",
" Sentiment: Neutral\n",
" ###\n",
" Message: The responsiveness of your team has been amazing, thank you so much!\n",
" Sentiment:\"\"\",\n",
" \"parameters\": {\"max_length\": 50, \"temperature\": 0.5},\n",
" }\n",
" )[0][\"generated_text\"]\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "80e6d851-e602-4570-a024-d939a68aa7e8",
"metadata": {},
"source": [
"You can see that the model filled in a Sentiment value for the last example. You can take a look at a blog post [here](https://towardsdatascience.com/how-to-use-gpt-j-for-almost-any-nlp-task-cb3ca8ff5826) for more examples of prompts. Finally Let's do a quick benchmark to see what kind of latency we can expect from this model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ab660ce0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%%timeit -n3 -r1\n",
"hf_predictor.predict(\n",
" {\"inputs\": \"Large model inference is\", \"parameters\": {\"max_length\": 50, \"temperature\": 0.5}}\n",
")"
]
},
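{
"attachments": {},
"cell_type": "markdown",
"id": "latency-percentiles-md",
"metadata": {},
"source": [
"`%%timeit` reports an average, but tail latency often matters more for real-time endpoints. Below is a small sketch for collecting per-request latencies; the request count of 10 is an arbitrary choice for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "latency-percentiles-code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# measure per-request latency over a handful of invocations\n",
"latencies = []\n",
"for _ in range(10):\n",
"    start = time.perf_counter()\n",
"    hf_predictor.predict(\n",
"        {\"inputs\": \"Large model inference is\", \"parameters\": {\"max_length\": 50, \"temperature\": 0.5}}\n",
"    )\n",
"    latencies.append(time.perf_counter() - start)\n",
"latencies.sort()\n",
"print(f\"p50: {latencies[len(latencies) // 2]:.2f}s, max: {latencies[-1]:.2f}s\")"
]
},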
{
"cell_type": "code",
"execution_count": null,
"id": "e09c6eeb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Clean up the endpoint before proceeding\n",
"hf_predictor.delete_endpoint()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "af980378",
"metadata": {},
"source": [
"## Bonus: Deploying a Large Language Model Using DeepSpeed\n",
"Now we will explore another approach for deploying Large Language Models using [DeepSpeed](https://www.deepspeed.ai/). DeepSpeed provides various [inference optimizations](https://www.deepspeed.ai/tutorials/inference-tutorial/) for compatible transformer based models including model sharding, optimized inference kernels, and quantization. To leverage DeepSpeed, we simply need to modify our `serving.properties` file"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11f41d6c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = jinja_env.from_string(Path(\"deepspeed_src/serving.template\").open().read())\n",
"Path(\"deepspeed_src/serving.properties\").open(\"w\").write(\n",
" template.render(s3url=pretrained_model_location)\n",
")\n",
"!pygmentize deepspeed_src/serving.properties | cat -n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ff87f2a2",
"metadata": {},
"source": [
"Notice that the `engine` parameter is now set to `DeepSpeed` and the `option.entryPoint` has been modified to use the `deepspeed.py` module. Python scripts that use DeepSpeed can not be launched as traditional python scripts (i.e. python `deepspeed.py` would not work.) Setting `engine=DeepSpeed` will automatically configure the environment and launch the inference script appropriatelly. \n",
"The only other new parameter here is `option.tensor_parallel_degree` where we have to specify the number of GPU devices to which the model will be sharded.\n",
"\n",
"Unlike Accelerate where the model was partitioned along the layers, DeepSpeed uses TensorParallelism where individual layers (Tensors) are sharded accross devices. For example each GPU can have a slice of each layer. The diagram below provides a high level illustartion of how this works
\n",
"\n",
"
\n",
"\n",
"Where with the layer-wise approach, the data fllowed through each GPU device sequeantially, here data is sent to all GPU devices where a partial result is compute on each GPU. The partial results are then collected though an All-Gather operation to compute the final result. \n",
"TensorParallelism generally provides higher GPU utilization and better performance."
]
},
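{
"attachments": {},
"cell_type": "markdown",
"id": "ds-props-example-md",
"metadata": {},
"source": [
"As before, a representative `serving.properties` for the DeepSpeed variant might look like the sketch below. The values are assumptions for illustration (a tensor parallel degree of 1 matches the single-GPU `ml.g4dn.4xlarge` instance used here); the authoritative file is rendered from `deepspeed_src/serving.template` above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ds-props-example-code",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# illustrative only -- does not overwrite the rendered deepspeed_src/serving.properties\n",
"example_ds_properties = f\"\"\"engine=DeepSpeed\n",
"option.entryPoint=djl_python.deepspeed\n",
"option.s3url={pretrained_model_location}\n",
"option.task=text-generation\n",
"option.tensor_parallel_degree=1\n",
"\"\"\"\n",
"print(example_ds_properties)"
]
},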
{
"cell_type": "code",
"execution_count": null,
"id": "04af8ed8",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!tar czvf ds_model.tar.gz deepspeed_src/"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c56865df",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"ds_s3_code_artifact = sess.upload_data(\"ds_model.tar.gz\", bucket, s3_code_prefix)\n",
"print(f\"S3 Code or Model tar ball uploaded to --- > {ds_s3_code_artifact}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d43188d4",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"ds_endpoint_name = sagemaker.utils.name_from_base(\"gptj-ds\")\n",
"ds_predictor = deploy_model(\n",
" image_uri=inference_image_uri,\n",
" model_data=ds_s3_code_artifact,\n",
" role=role,\n",
" endpoint_name=ds_endpoint_name,\n",
" instance_type=\"ml.g4dn.4xlarge\",\n",
" sagemaker_session=sess,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "256827c6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"ds_predictor.predict(\n",
" {\"inputs\": \"Large model inference is\", \"parameters\": {\"max_length\": 50, \"temperature\": 0.5}}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f0c706d-233c-4e34-98f6-910883107036",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"print(\n",
" ds_predictor.predict(\n",
" {\n",
" \"inputs\": \"\"\"Message: Support has been terrible for 2 weeks...\n",
" Sentiment: Negative\n",
" ###\n",
" Message: I love your API, it is simple and so fast!\n",
" Sentiment: Positive\n",
" ###\n",
" Message: GPT-J has been released 12 months ago.\n",
" Sentiment: Neutral\n",
" ###\n",
" Message: The responsiveness of your team has been amazing, thank you so much!\n",
" Sentiment:\"\"\",\n",
" \"parameters\": {\"max_length\": 50, \"temperature\": 0.5},\n",
" }\n",
" )[0][0][\"generated_text\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18052400",
"metadata": {},
"outputs": [],
"source": [
"%%timeit -n3 -r1\n",
"ds_predictor.predict(\n",
" {\"inputs\": \"Large model inference is\", \"parameters\": {\"max_length\": 50, \"temperature\": 0.5}}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "479e5991",
"metadata": {},
"outputs": [],
"source": [
"ds_predictor.delete_endpoint()"
]
}
],
"metadata": {
"availableInstances": [
{
"_defaultOrder": 0,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 4,
"name": "ml.t3.medium",
"vcpuNum": 2
},
{
"_defaultOrder": 1,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.t3.large",
"vcpuNum": 2
},
{
"_defaultOrder": 2,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.t3.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 3,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.t3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 4,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.m5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 5,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.m5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 6,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.m5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 7,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.m5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 8,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.m5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 9,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.m5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 10,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.m5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 11,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.m5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 12,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.m5d.large",
"vcpuNum": 2
},
{
"_defaultOrder": 13,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.m5d.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 14,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.m5d.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 15,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.m5d.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 16,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.m5d.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 17,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.m5d.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 18,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.m5d.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 19,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.m5d.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 20,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": true,
"memoryGiB": 0,
"name": "ml.geospatial.interactive",
"supportedImageNames": [
"sagemaker-geospatial-v1-0"
],
"vcpuNum": 0
},
{
"_defaultOrder": 21,
"_isFastLaunch": true,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 4,
"name": "ml.c5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 22,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.c5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 23,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.c5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 24,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.c5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 25,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 72,
"name": "ml.c5.9xlarge",
"vcpuNum": 36
},
{
"_defaultOrder": 26,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 96,
"name": "ml.c5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 27,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 144,
"name": "ml.c5.18xlarge",
"vcpuNum": 72
},
{
"_defaultOrder": 28,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.c5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 29,
"_isFastLaunch": true,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.g4dn.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 30,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.g4dn.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 31,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.g4dn.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 32,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.g4dn.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 33,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.g4dn.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 34,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.g4dn.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 35,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 61,
"name": "ml.p3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 36,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 244,
"name": "ml.p3.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 37,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 488,
"name": "ml.p3.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 38,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.p3dn.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 39,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.r5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 40,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.r5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 41,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.r5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 42,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.r5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 43,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.r5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 44,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.r5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 45,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 512,
"name": "ml.r5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 46,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.r5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 47,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.g5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 48,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.g5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 49,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.g5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 50,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.g5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 51,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.g5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 52,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.g5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 53,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.g5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 54,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.g5.48xlarge",
"vcpuNum": 192
},
{
"_defaultOrder": 55,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 1152,
"name": "ml.p4d.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 56,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 1152,
"name": "ml.p4de.24xlarge",
"vcpuNum": 96
}
],
"kernelspec": {
"display_name": "Python 3 (Data Science 3.0)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-310-v1"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}