{ "cells": [ { "cell_type": "markdown", "id": "4c6bf30a-db53-467c-b32f-2859c9388fc8", "metadata": {}, "source": [ "# Deploy Flan-UL2 on SageMaker with DJLServing using PySDK" ] }, { "cell_type": "markdown", "id": "82130f95", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.\n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "a3ce738c-78bb-4564-a4d4-7e63468bf4a2", "metadata": {}, "source": [ "Sparked by the release of large AI models like AlexaTM, GPT, OpenChatKit, BLOOM, GPT-J, GPT-NeoX, FLAN-T5, OPT, Stable Diffusion, ControlNet, etc; the popularity of generative AI has seen a recent boom.\n", "\n", "However, as the size and complexity of the deep learning models that power generative AI continue to grow, deployment can be a challenging task. Advanced techniques such as model parallelism and quantization become necessary to achieve latency and throughput requirements. Without expertise in using these techniques, many customers struggle to get started with hosting large models for generative AI applications.\n", "\n", "We demonstrate how you can deploy these large models on SageMaker using [DJL Serving](https://github.com/deepjavalibrary/djl-serving) and [Large Model Inference containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#large-model-inference-containers).\n", "\n", "Specifically we deploy the open source [Flan-UL2](https://huggingface.co/google/flan-ul2) model, which is comprised of 20B parameters on an instance with 4 GPUs. (`ml.g5.24xlarge`)\n", "\n", "In this notebook, we will -\n", "- leverage the [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/index.html) to deploy the models using [DeepSpeed](https://www.deepspeed.ai/), [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)\n", "- perform a quick benchmark of the latency incurred when invoking the endpoints when deployed using each of the frameworks.\n", "- demonstrate how you can let DJL Serving determine the best backend based on your model architecture and configuration." ] }, { "cell_type": "markdown", "id": "0ba0a155-f325-480f-aa74-7841598d4ccc", "metadata": {}, "source": [ "### Update the sagemaker and boto3 packages" ] }, { "cell_type": "code", "execution_count": null, "id": "9041e32d-e321-41be-9635-48dd1f55a73d", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install sagemaker==2.160.0 boto3==1.26.145 botocore==1.29.145 awscli==0.11.0" ] }, { "cell_type": "markdown", "id": "916056e5-dfab-4c2e-964f-ec86d0becf5b", "metadata": { "tags": [] }, "source": [ "### Import the required packages" ] }, { "cell_type": "code", "execution_count": null, "id": "7319e4c1-f3b1-4157-bc27-ce96cb5a7a7b", "metadata": { "tags": [] }, "outputs": [], "source": [ "import warnings\n", "\n", "warnings.filterwarnings(\"ignore\")\n", "\n", "import sagemaker\n", "from sagemaker import image_uris\n", "from sagemaker.session import Session\n", "from sagemaker.djl_inference import (\n", " HuggingFaceAccelerateModel,\n", " DeepSpeedModel,\n", " FasterTransformerModel,\n", " DJLModel,\n", ")" ] }, { "cell_type": "markdown", "id": "5ce037d2-b360-4187-8a21-7abdaddfd279", "metadata": {}, "source": [ "### Define variables to store the sagemaker session, region and execution role" ] }, { "cell_type": "code", "execution_count": null, "id": "d949d828-f14e-4ba0-a309-cd438d6d924d", "metadata": { "tags": [] }, "outputs": [], "source": [ "sagemaker_session = Session()\n", "region = sagemaker_session._region_name\n", "role = sagemaker.get_execution_role()\n", "\n", "instance_type = \"ml.g5.24xlarge\" # define the instance type on which you want to deploy the model" ] }, { "cell_type": "markdown", "id": "670b2d59-c8b3-41f0-b40b-89c7308c2bdc", "metadata": { "tags": [] }, "source": [ "### Define the S3 location of the model weights.\n", "Downloading the model from the [HuggingFace Hub](https://huggingface.co/google/flan-ul2) is time consuming. Hence, we recommend that you download the model and upload the uncompressed artifacts to a S3 bucket.\n", "\n", "For the purpose of demonstration, we use a S3 location that already contains the model weights and is accesible publicly. " ] }, { "cell_type": "code", "execution_count": null, "id": "f1fcef02-f347-4948-b1df-6d9c3600bcdc", "metadata": { "tags": [] }, "outputs": [], "source": [ "pretrained_model_location = f\"s3://sagemaker-example-files-prod-{region}/models/flan-ul2\" # replace with the S3 URI that has your model" ] }, { "cell_type": "markdown", "id": "86a9853e-bf10-47a9-998a-28725a5140fa", "metadata": {}, "source": [ "### Define the payload - Article Generation\n", "We now define a payload that will be used to invoke the endpoints when they are deployed.\n", "\n", "The below payload consists of a prompt that has a title and prompts the model to write an article." ] }, { "cell_type": "code", "execution_count": null, "id": "ff70b271-2aae-461f-8216-5c3a31f3b899", "metadata": { "tags": [] }, "outputs": [], "source": [ "data = {\n", " \"inputs\": [\n", " \"Title: ”Utility of Large Language Models“\\\\n Given the above title, write an article.\\n\"\n", " ],\n", " \"parameters\": {\n", " \"max_length\": 250,\n", " \"temperature\": 0.1,\n", " },\n", "}" ] }, { "cell_type": "markdown", "id": "cb9a6e1b-c021-45fc-a00e-348acbae6507", "metadata": {}, "source": [ "## Deploy using HuggingFace Accelerate\n", "\n", "[HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index) is a library that can be used to host large models across multiple GPUs by model partitioning. It leverages layer wise paritioning to load individual layers onto different GPUs.\n", "\n", "By using DJL Serving's [HuggingFaceAccelerateModel](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#huggingfaceacceleratemodel) you can host a large model to a SageMaker endpoint with multiple GPUs. The model paritioning is done by HuggingFace Accelerate using the different parameters that are passed to the model object.\n", "\n", "Here, we set the `dtype` to `fp16` and `number_of_paritions` to `4` to partition the model across 4 GPUs.\n", "See [HuggingFaceAccelerateModel](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#huggingfaceacceleratemodel) for other parameters that can be used." ] }, { "cell_type": "code", "execution_count": null, "id": "e4ec0707-0e21-4630-b1db-aa7f18582187", "metadata": { "tags": [] }, "outputs": [], "source": [ "hf_accelerate_model = HuggingFaceAccelerateModel(\n", " pretrained_model_location,\n", " role,\n", " device_map=\"auto\",\n", " dtype=\"fp16\",\n", " number_of_partitions=4,\n", ")\n", "\n", "hf_accelerate_predictor = hf_accelerate_model.deploy(\n", " instance_type=instance_type, initial_instance_count=1\n", ")" ] }, { "cell_type": "markdown", "id": "0b520083-7765-46b3-8c81-f1551ac38c8e", "metadata": {}, "source": [ "#### Invoke the endpoint and perform a quick benchmark." ] }, { "cell_type": "code", "execution_count": null, "id": "8ab7a281-ae1b-4b93-840e-f332d03390c0", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%timeit -n3 -r1\n", "hf_accelerate_predictor.predict(data)" ] }, { "cell_type": "markdown", "id": "505088aa-9b1e-435b-9f31-4a0e71063d99", "metadata": {}, "source": [ "#### Delete the model and the endpoint" ] }, { "cell_type": "code", "execution_count": null, "id": "33e16fbb-ddd4-46a3-a023-1a7dd9c47f0c", "metadata": { "tags": [] }, "outputs": [], "source": [ "hf_accelerate_predictor.delete_model()\n", "hf_accelerate_predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "id": "dd450489-970d-4f9a-abc5-86a5f8bfc107", "metadata": {}, "source": [ "## Deploy the model using DeepSpeed\n", "\n", "[DeepSpeed](https://www.deepspeed.ai/) provides various [inference optimizations](https://www.deepspeed.ai/tutorials/inference-tutorial/) for compatible transformer based models including model sharding, optimized inference kernels, and quantization.\n", "\n", "By using DJL Serving's [DeepSpeedModel](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#deepspeedmodel) you can host a large model to a SageMaker endpoint with multiple GPUs. The model paritioning is done by DeepSpeed using the different parameters that are passed to the model object.\n", "\n", "Here, we set the `dtype` to `bf16` and `number_of_paritions` to `4` to partition the model across 4 GPUs.\n", "See [DeepSpeedModel](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#deepspeedmodel) for other parameters that can be used." ] }, { "cell_type": "code", "execution_count": null, "id": "e57642ad-ecd4-4001-86ec-5402e53a7a50", "metadata": { "tags": [] }, "outputs": [], "source": [ "deepspeed_model = DeepSpeedModel(\n", " pretrained_model_location, role, dtype=\"bf16\", tensor_parallel_degree=4\n", ")\n", "deepspeed_predictor = deepspeed_model.deploy(instance_type=instance_type, initial_instance_count=1)" ] }, { "cell_type": "markdown", "id": "d9384bca-b4ae-4729-a8fd-de6d6447b7ba", "metadata": {}, "source": [ "#### Invoke the endpoint and perform a quick benchmark." ] }, { "cell_type": "code", "execution_count": null, "id": "63499174-33f5-41c3-ade3-b6e49bf40be4", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%timeit -n3 -r1\n", "deepspeed_predictor.predict(data)" ] }, { "cell_type": "markdown", "id": "985db921-db00-4e5f-a7a0-99f9f02cdf50", "metadata": { "tags": [] }, "source": [ "#### Delete the model and the endpoint." ] }, { "cell_type": "code", "execution_count": null, "id": "65602f57-e5b7-4a79-b9ce-1c5570e2355e", "metadata": { "tags": [] }, "outputs": [], "source": [ "deepspeed_predictor.delete_model()\n", "deepspeed_predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "id": "c15ee1ff-f69b-436b-bc8f-1b600109f92b", "metadata": {}, "source": [ "## Deploy the Model using FasterTransformer\n", "\n", "[FasterTransformer](https://github.com/NVIDIA/FasterTransformer) is a library implementing an accelerated engine for the inference of transformer-based neural networks, with a special emphasis on large models, spanning many GPUs and nodes in a distributed manner.\n", "FasterTransformer contains the implementation of the highly-optimized version of the transformer block that contains the encoder and decoder parts.\n", "\n", "By using DJL Serving's [FasterTransformerModel](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#fastertransformermodel) you can host a large model to a SageMaker endpoint with multiple GPUs. The model paritioning is done by FasterTransformer using the different parameters that are passed to the model object.\n", "\n", "Here, we set the `dtype` to `fp16` and `number_of_paritions` to `4` to partition the model across 4 GPUs.\n", "See [FasterTransformer](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#fastertransformermodel) for other parameters that can be used." ] }, { "cell_type": "code", "execution_count": null, "id": "5814629c-1eef-4acf-a15c-f8948ad96965", "metadata": { "tags": [] }, "outputs": [], "source": [ "fastertransformer_model = FasterTransformerModel(\n", " pretrained_model_location, role, dtype=\"fp16\", tensor_parallel_degree=4\n", ")\n", "fastertransformer_predictor = fastertransformer_model.deploy(\n", " instance_type=instance_type, initial_instance_count=1\n", ")" ] }, { "cell_type": "markdown", "id": "79e7eadc-993a-4206-ad8e-eac0a53af57d", "metadata": {}, "source": [ "#### Invoke the endpoint and perform a quick benchmark." ] }, { "cell_type": "code", "execution_count": null, "id": "80959c61-f73f-4def-9885-2dbb3c2818be", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%timeit -n3 -r1\n", "fastertransformer_predictor.predict(data)" ] }, { "cell_type": "markdown", "id": "8d61a934-6b40-4924-a1bc-a0d769d43f39", "metadata": {}, "source": [ "#### Delete the model and the endpoint." ] }, { "cell_type": "code", "execution_count": null, "id": "1ee484dd-baa9-46de-b0dd-2fef344c5248", "metadata": { "tags": [] }, "outputs": [], "source": [ "fastertransformer_predictor.delete_model()\n", "fastertransformer_predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "id": "be59b318-8d98-40fe-a631-06492abbdce2", "metadata": {}, "source": [ "### Performance Comparision\n", "From the above benchmark results, you can see that `FasterTransformer` and `DeepSpeed` have a lower latency compared to `HuggingFace Accelerate`, with `FasterTransformer` having the best performance.\n", "\n", "We'll now use [`DJLModel`](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#djlmodel) to deploy the model and let it pick the best backend based on the model architecture." ] }, { "cell_type": "markdown", "id": "7f41e6ae-3ab3-49bd-8836-7a045b22d19d", "metadata": {}, "source": [ "## Deploy the Model using DJLModel\n", "Instantiating an instance of the [DJLModel](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#sagemaker.djl_inference.model.DJLModel) let's you use the framework recommendation for the model type without explicity specifying one.\n", "\n", "For additional parameters that can be set while creating the object, please refer [DJLModel](https://sagemaker.readthedocs.io/en/stable/frameworks/djl/sagemaker.djl_inference.html#sagemaker.djl_inference.model.DJLModel)." ] }, { "cell_type": "code", "execution_count": null, "id": "86b89803-ebf5-46cd-82a4-49b7f1c3cd2c", "metadata": { "tags": [] }, "outputs": [], "source": [ "djl_model = DJLModel(pretrained_model_location, role, dtype=\"fp16\", tensor_parallel_degree=4)\n", "djl_predictor = djl_model.deploy(instance_type=instance_type, initial_instance_count=1)" ] }, { "cell_type": "markdown", "id": "40dfb152-723a-45f7-86d8-8475422c1965", "metadata": {}, "source": [ "Note that DJLModel returns an instance of a framework specific model. i.e. the framework that was used as the backend to deploy the model.\n", "\n", "In this case, `FasterTransformer` is optimal." ] }, { "cell_type": "code", "execution_count": null, "id": "4759fc5f-a3af-40bf-be1c-68406dcc6539", "metadata": { "tags": [] }, "outputs": [], "source": [ "djl_model" ] }, { "cell_type": "markdown", "id": "7183eb1d-ba8c-4c8b-be79-68338f24a487", "metadata": {}, "source": [ "#### Invoke the endpoint and perform a quick benchmark." ] }, { "cell_type": "code", "execution_count": null, "id": "e00ab67e-4ba1-47d0-a976-6cdcd892cac2", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%timeit -n3 -r1\n", "djl_predictor.predict(data)" ] }, { "cell_type": "markdown", "id": "092bbf9b-622b-4df1-b69b-9ac7dbe20a6a", "metadata": {}, "source": [ "#### Delete the model and the endpoint." ] }, { "cell_type": "code", "execution_count": null, "id": "29fe974a-55df-40de-b3d1-9c4ad2855697", "metadata": { "tags": [] }, "outputs": [], "source": [ "djl_predictor.delete_model()\n", "djl_predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "id": "91656c94-5540-4c75-b764-a31b1a1c449c", "metadata": {}, "source": [ "### Conclusion\n", "\n", "We have demonstrated how to use the SageMaker SDK to deploy large language models like Flan-UL2 using DJL Serving and benchmarked the performance of different frameworks." ] }, { "cell_type": "markdown", "id": "982d3936", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/inference|generativeai|llm-workshop|flan-ul2-pySDK|flan-ul2-pySDK.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 5 }