{ "cells": [ { "cell_type": "markdown", "id": "a7b6c10d-19ab-45ff-aa96-14676491aad4", "metadata": {}, "source": [ "# Lab 1\n", "\n", "## Introduction\n", "\n", "Language models have recently exploded in both size and popularity. In 2018, BERT-large entered the scene and, with its 340M parameters and novel transformer architecture, set the standard on NLP task accuracy. Within just a few years, state-of-the-art NLP model size has grown by more than 500x with models such as OpenAI’s 175 billion parameter GPT-3 and similarly sized open source Bloom 176B raising the bar on NLP accuracy. This increase in the number of parameters is driven by the simple and empirically-demonstrated positive relationship between model size and accuracy: more is better. With easy access from models zoos such as HuggingFace and improved accuracy in NLP tasks such as classification and text generation, practitioners are increasingly reaching for these large models. However, deploying them can be a challenge because of their size.\n", "\n", "In this Lab, we'll explore how to host a large language model on Amazon SageMaker using Sagemaker Inference, one of many ready-to-use AWS Deep Learning Containers (DLCs) and the built-in HuggingFace integration of the Sagemaker SDK. \n", "\n", "## Background and Details\n", "We'll be working with GPT-J , a large language model with over 6B parameters pre-trained on the Pile dataset. The pre-trained weights of the original model come in FP32 format (4 bytes per parameter) and combine to roughly 24Gb in total. Since most of today's state-of-the-art single-GPU-powered instances are only equipped with 16, max. 24 GPU GB memory, the size of the model weights is bringing up challenges for model inference. With quantization, we'll explore one of several optimization options available that allow us to host this model on single GPU instances with fewer than 16Gb of GPU memory. Finally, we take a closer look into the art of prompt engineering and discover how [few-shot](https://www.analyticsvidhya.com/blog/2021/05/an-introduction-to-few-shot-learning/) (as opposed to zero-shot) approaches can significantly improve model performance on a huge variety of NLP tasks. \n", "\n", "## Instructions\n", "\n", "### Prerequisites\n", "\n", "#### To run this workshop...\n", "You need a computer with a web browser, preferably with the latest version of Chrome / FireFox.\n", "Sequentially read and follow the instructions described in AWS Hosted Event and Work Environment Set Up\n", "\n", "#### Recommended background\n", "It will be easier for you to run this workshop if you have:\n", "\n", "- Experience with Deep learning models\n", "- Familiarity with Python or other similar programming languages\n", "- Experience with Jupyter notebooks\n", "- Begineers level knowledge and experience with SageMaker Hosting/Inference.\n", "\n", "#### Target audience\n", "Data Scientists, ML Engineering, ML Infrastructure, MLOps Engineers, Technical Leaders.\n", "Intended for customers working with large Generative AI models including Language, Computer vision and Multi-modal use-cases.\n", "Customers using EKS/EC2/ECS/On-prem for hosting or experience with SageMaker.\n", "\n", "Level of expertise - 400\n", "\n", "#### Time to complete\n", "Approximately 1 hour." 
] }, { "cell_type": "markdown", "id": "ff43d981-d2dc-440c-a8ed-ed47ea8fabe3", "metadata": {}, "source": [ "# Import of required dependencies\n", "\n", "For this lab, we will use the following libraries:\n", "\n", " - SageMaker SDK for interacting with Amazon SageMaker. We especially want to highlight the classes 'HuggingFaceModel' and 'HuggingFacePredictor', utilizing the built-in HuggingFace integration into SageMaker SDK. These classes are used to encapsulate functionality around the model and the deployed endpoint we will use. They inherit from the generic 'Model' and 'Predictor' classes of the native SageMaker SDK, however implementing some additional functionality specific to HuggingFace and the HuggingFace model hub.\n", " - boto3, the AWS SDK for python\n", " - os, a python library implementing miscellaneous operating system interfaces \n", " - tarfile, a python library to read and write tar archive files" ] }, { "cell_type": "code", "execution_count": null, "id": "0437cde3-ed17-4cec-b06c-0d0441436644", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.huggingface import HuggingFaceModel, HuggingFacePredictor\n", "import sagemaker\n", "import boto3\n", "import os\n", "import tarfile" ] }, { "cell_type": "markdown", "id": "7e3938a6-b8e1-4d4c-abec-48a4c25a7862", "metadata": {}, "source": [ "# Setup of notebook environment\n", "\n", "Before we begin with the actual work for packaging and deploying the model to Amazon SageMaker, we need to setup the notebook environment respectively. This includes:\n", "- retrieval of the execution role our SageMaker Studio domain is associated with for later usage\n", "- retrieval of our account_id for later usage\n", "- retrieval of the chosen region for later usage" ] }, { "cell_type": "code", "execution_count": null, "id": "9a1c9354-e9e2-4e8d-a604-a051f4d04b1b", "metadata": { "tags": [] }, "outputs": [], "source": [ "# IAM role with permissions to create endpoint\n", "role = sagemaker.get_execution_role()" ] }, { "cell_type": "code", "execution_count": null, "id": "186e7070-5c55-4188-8fda-d7e71deca18d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create a new STS client\n", "sts_client = boto3.client('sts')\n", "\n", "# Call the GetCallerIdentity operation to retrieve the account ID\n", "response = sts_client.get_caller_identity()\n", "account_id = response['Account']\n", "account_id" ] }, { "cell_type": "code", "execution_count": null, "id": "835c385f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Retrieve region\n", "region = boto3.Session().region_name\n", "region" ] }, { "cell_type": "markdown", "id": "f654d9ba-1a5b-4ef1-8bda-e51873274e7a", "metadata": {}, "source": [ "# Create Model Artifact Archive\n", "\n", "For hosting a model with AWS SageMaker Inference we need to package our model artifacts into an archive called ‘model.tar.gz’ and upload it to S3. Within this archive, your model artifacts should be stored in the following directory structure:\n", "\n", "`model.tar.gz`\n", "- `model.bin`\n", "- `code/`\n", " - `inference.py`\n", " - `requirements.txt`\n", "\n", "The \"code\" directory contains your inference script (inference.py) and your requirements.txt file (if you have additional dependencies, detailed description see below). The “model.bin†file is a file in one of various binary formats containing the model weights as well as some configuration. 
"In our case, this file will not exist at archive creation time, since we will load the model files from the HuggingFace model hub when the endpoint starts. Before you continue, open the code directory to get familiar with the structure so you can follow along with the subsequent steps below.\n", "\n",
"If you have additional dependencies for your model, you can include them in a requirements.txt file in the \"code\" directory. SageMaker will install these dependencies during the deployment of your model.\n", "\n",
"Sometimes you may want to override one of the [five functions](https://huggingface.co/docs/sagemaker/inference#user-defined-code-and-modules) within the hosting cycle, such as the model_fn function. To do this, you can create a new function in your inference.py file with the same name as the function you want to override. SageMaker will automatically use your new function instead of the default function. Since we want to dynamically load the model binaries when the endpoint starts, we will override model_fn, the default method for loading a model.\n", "\n",
"Within the model_fn() function of inference.py, this can be done by leveraging the capabilities of HuggingFace. HuggingFace is a company focused on the democratisation of open-source AI and a close partner of AWS. With the ‘transformers’ library, they have created a popular open-source API/framework for natural language processing on top of common frameworks like PyTorch and TensorFlow. They have also built the HuggingFace model hub, a model repository providing thousands of open-source models across different ML tasks.\n", "\n",
"SageMaker provides built-in support for HuggingFace models through the SageMaker HuggingFace SDK. Here are the steps we will take to dynamically load our model from the HuggingFace model hub into SageMaker:\n",
"1. Import the transformers library\n",
"2. Use the from_pretrained method to download a pre-trained tokenizer together with the pre-trained model from the HuggingFace Model Hub\n",
"```python\n", "tokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-j-6B\")\n", "model = GPTJForCausalLM.from_pretrained(\"EleutherAI/gpt-j-6B\", revision=\"float16\", torch_dtype=torch.float16)\n", "\n", "```\n",
"3. Use the pipeline method to perform inference on your text data. These pipelines can be configured to be task-specific. For our use case we will use the [‘text-generation’ task](link***)\n",
"```python\n", "generation = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, device=device)\n", "\n", "```\n",
"4. Pass the generated pipeline object as the return value of the model_fn() function to integrate with the rest of the inference lifecycle (see the sketch below)\n",
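"\n",
"Putting these four steps together, below is a minimal sketch of what such a model_fn override could look like. It is an illustration only and may differ from the inference.py provided in the code directory; in particular, the device selection logic shown here is an assumption.\n",
"```python\n",
"# Illustrative sketch of a model_fn override (the provided inference.py may differ)\n",
"import torch\n",
"from transformers import AutoTokenizer, GPTJForCausalLM, pipeline\n",
"\n",
"def model_fn(model_dir):\n",
"    # Steps 1 and 2: download the pre-trained tokenizer and the FP16 revision of the model weights\n",
"    tokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-j-6B\")\n",
"    model = GPTJForCausalLM.from_pretrained(\"EleutherAI/gpt-j-6B\", revision=\"float16\", torch_dtype=torch.float16)\n",
"    # Assumption: run on the GPU if one is available, otherwise fall back to CPU\n",
"    device = 0 if torch.cuda.is_available() else -1\n",
"    # Step 3: wrap model and tokenizer into a task-specific pipeline\n",
"    generation = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, device=device)\n",
"    # Step 4: return the pipeline object so the rest of the inference lifecycle can use it\n",
"    return generation\n",
"```\n",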
"\n",
"The pre-trained weights of the original model come in FP32 format (4 bytes per parameter) and add up to roughly 24 GB in total. Since most of today's state-of-the-art single-GPU instances are equipped with only 16 GB, at most 24 GB, of GPU memory, the size of the model weights poses challenges for model inference. Quantization is a technique to reduce the memory footprint when hosting large models: the model weights are converted into FP16 or int8 format, reducing the hosting footprint by a factor of 2-4. By applying quantization, the GPT-J model can be hosted on a single-GPU instance like the ml.g4dn series.\n",
"The from_pretrained method in the HuggingFace SDK allows you to download different revisions of a pre-trained model from the HuggingFace Model Hub. You can specify the revision you want to download using the revision parameter. For our use case, we will be using the FP16 revision of the model weights.\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "7f0be137-65fc-4e93-b031-a10a033d321d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# function to compress the code directory into a model.tar.gz archive as outlined above\n", "def compress(tar_dir=None, output_file=\"./model.tar.gz\"):\n", "    with tarfile.open(output_file, \"w:gz\") as tar:\n", "        tar.add(tar_dir, arcname=\"code\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "95614fe2-3a5a-4858-adec-26ee5b625842", "metadata": { "tags": [] }, "outputs": [], "source": [ "# specifying source code directory path\n", "model_code_dir = './code'\n", "# create tar.gz archive\n", "print(\"creating `model.tar.gz` archive\")\n", "compress(model_code_dir)" ] },
{ "cell_type": "markdown", "id": "923347b6", "metadata": {}, "source": [ "# Uploading to S3\n", "We have now successfully created the model.tar.gz archive. However, it still resides within the EBS volume of our SageMaker Studio domain. In the next step we will upload the archive to an S3 bucket to make it available for SageMaker Inference. To do so, we will perform the following steps:\n", "- Creation of a new S3 bucket for model artifact storage\n", "- Upload of the model artifact to S3 using 'boto3', the AWS SDK for Python" ] },
{ "cell_type": "code", "execution_count": null, "id": "bb42c6a3", "metadata": { "tags": [] }, "outputs": [], "source": [ "# function to upload the model artifact model.tar.gz into an S3 bucket\n", "def upload_file_to_s3(bucket_name=None, file_name=\"model.tar.gz\", key_prefix=\"\"):\n", "    s3 = boto3.resource(\"s3\")\n", "    key_prefix_with_file_name = os.path.join(key_prefix, file_name)\n", "    s3.Bucket(bucket_name).upload_file(file_name, key_prefix_with_file_name)\n", "    return f's3://{bucket_name}/{key_prefix_with_file_name}'" ] },
{ "cell_type": "code", "execution_count": null, "id": "cc859d76", "metadata": { "tags": [] }, "outputs": [], "source": [ "# specifying bucket name for model artifact storage\n", "model_bucket_name = f'immersion-day-bucket-{account_id}-{region}'\n", "# specifying key prefix for model artifact storage\n", "model_s3_key_prefix = 'huggingface/gpt-j/'" ] },
{ "cell_type": "code", "execution_count": null, "id": "469cfe60-bb73-417d-bbdc-f1c4f1dc125f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create S3 bucket\n", "s3_client = boto3.client('s3', region_name=region)\n", "location = {'LocationConstraint': region}\n", "\n", "bucket_name = model_bucket_name\n", "\n", "# Check if bucket already exists\n", "bucket_exists = True\n", "try:\n", "    s3_client.head_bucket(Bucket=bucket_name)\n", "except s3_client.exceptions.ClientError:\n", "    bucket_exists = False\n", "\n", "# Create bucket if it does not exist\n", "if not bucket_exists:\n", "    if region == 'us-east-1':\n", "        s3_client.create_bucket(Bucket=bucket_name)\n", "    else:\n", "        s3_client.create_bucket(Bucket=bucket_name,\n", "                                CreateBucketConfiguration=location)\n", "    print(f\"Bucket '{bucket_name}' created successfully\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "b820737e-8424-426f-b966-c5696eb2ed1a", "metadata": { "tags": [] }, "outputs": [], "source": [ "# upload to s3\n", "print(\n", "    f\"uploading `model.tar.gz` archive to s3://{bucket_name}/{model_s3_key_prefix}model.tar.gz\"\n", ")\n", "model_uri = upload_file_to_s3(bucket_name=bucket_name, key_prefix=model_s3_key_prefix)\n", "print(f\"Successfully uploaded to {model_uri}\")" ] },
{ "cell_type": "markdown", "id": "6693abd6", "metadata": {}, "source": [ "# Deployment with SageMaker Inference\n",
"Amazon SageMaker Inference is a managed service that allows you to deploy machine learning models to make predictions or perform inference on new data. It enables you to create an endpoint that can be accessed using HTTP requests to make predictions in real time. This service is designed to make it easy to deploy and manage machine learning models in production. SageMaker Inference provides a scalable, reliable, and cost-effective way to deploy machine learning models. To deploy a model with SageMaker Inference, we will use the SageMaker SDK and leverage the built-in HuggingFace integration.\n", "\n",
"## Model packaging\n",
"First, we package the model into the 'HuggingFaceModel' class by specifying the following parameters:\n",
"- image_uri: The image URI of the Docker image used for hosting the model. We will be using one of the many ready-to-use Deep Learning Containers AWS provides [here](https://aws.amazon.com/machine-learning/containers/). Deep Learning Containers are Docker images that are preinstalled and tested with the latest versions of popular deep learning frameworks. Deep Learning Containers let you deploy custom ML environments quickly without building and optimizing your environments from scratch. Since we are deploying a model from the HuggingFace model hub, we will use one of the HuggingFace DLCs, which comes with Python 3.8, PyTorch 1.10.2, and Transformers 4.17.0 preinstalled and is optimized for inference in GPU-accelerated environments.\n",
"- model_data: S3 path to the model artifact we just created and uploaded to S3\n",
"- role: IAM role holding the IAM permissions required to deploy a SageMaker Inference endpoint\n", "\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "076078f9-fbf5-44f8-80da-ef1aaaf9eec0", "metadata": { "tags": [] }, "outputs": [], "source": [ "# create HuggingFace Model class\n", "huggingface_model = HuggingFaceModel(\n", "    image_uri=f'763104351884.dkr.ecr.{region}.amazonaws.com/huggingface-pytorch-inference:1.10.2-transformers4.17.0-gpu-py38-cu113-ubuntu20.04',\n", "    model_data=model_uri,\n", "    role=role\n", ")" ] },
{ "cell_type": "markdown", "id": "0318fcb3", "metadata": {}, "source": [ "## Model deployment\n", "The created model package can now be used to deploy the actual model by calling its .deploy() function.\n",
"The following parameters need to be specified:\n",
"- initial_instance_count: number of endpoint instances to be deployed\n", "- instance_type: EC2 instance type used for endpoint hosting\n", "- endpoint_name: name of the endpoint\n", "\n",
"Note that the SageMaker SDK creates the following two resources for you in the background:\n", "- EndpointConfiguration\n", "- Endpoint\n", "\n",
"You can check these in the Inference section of the SageMaker console once the model has been successfully deployed.\n", "\n", "" ] },
{ "cell_type": "code", "execution_count": null, "id": "3dea56b6-e602-4e5f-afd8-031b580c0494", "metadata": { "tags": [] }, "outputs": [], "source": [ "# deploy model to SageMaker Inference\n", "predictor = huggingface_model.deploy(\n", "    initial_instance_count=1, # number of instances\n", "    instance_type='ml.g4dn.4xlarge',\n", "    endpoint_name='sm-endpoint-gpt-j-6b-immersion-day',\n", ")" ] },
{ "cell_type": "markdown", "id": "2555072b", "metadata": {}, "source": [ "# Inference\n", "## First try\n",
"The .deploy() function returns an object of the HuggingFacePredictor class. This class implements the interfaces for running inference against deployed endpoints. Among other things, it implements a .predict() function that can be used to conveniently call the endpoint for inference. When calling it, we can pass an object to the function that consists of an 'inputs' parameter holding the prompt to be passed to the model.\n", "\n",
"Hint: in case an error occurs, go and check the CloudWatch logs. Try to figure out what happened!" ] },
{ "cell_type": "code", "execution_count": null, "id": "3e5ab2f5", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Calling the predict() function for inference\n", "predictor.predict({\"inputs\": \"What is the capital of Germany?\"})" ] },
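{ "cell_type": "markdown", "id": "a1f2e3d4", "metadata": {}, "source": [ "Optionally, since the deployed endpoint is a real-time HTTPS endpoint, it can also be invoked directly through the SageMaker runtime API instead of the SageMaker SDK. The following cell is a minimal sketch of this using boto3; it assumes the endpoint name chosen above and is not required for the rest of the lab.\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "b5c6d7e8", "metadata": { "tags": [] }, "outputs": [], "source": [ "import json\n", "\n", "# Sketch: invoke the same endpoint via the SageMaker runtime API (boto3 was imported at the top of the notebook)\n", "smr_client = boto3.client('sagemaker-runtime')\n", "response = smr_client.invoke_endpoint(\n", "    EndpointName='sm-endpoint-gpt-j-6b-immersion-day',\n", "    ContentType='application/json',\n", "    Body=json.dumps({\"inputs\": \"What is the capital of Germany?\"}),\n", ")\n", "# The response body is a streaming object containing the JSON returned by the model server\n", "print(response['Body'].read().decode('utf-8'))" ] },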
{ "cell_type": "markdown", "id": "c5839cf6", "metadata": {}, "source": [ "For more advanced use cases we can also specify a second parameter: 'parameters' is a Python dictionary of model-specific parameters passed to the model to customize the generated output. For the GPT-J model, among the [many options available](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task), we specify the following parameters:\n",
"- max_new_tokens: maximum number of tokens to be generated by the model\n",
"- temperature: creativity of the generated text. In our experience, values between 0.2 (newspaper article, code) and 1.2 (poem) lead to high-quality results.\n",
"- repetition_penalty: penalty for repeated occurrences of tokens\n",
"- top_k: breadth of vocabulary used; the number of highest-probability token candidates considered when sampling the next token\n",
"- return_full_text: boolean flag indicating whether the input prompt should be returned together with the result" ] },
{ "cell_type": "code", "execution_count": null, "id": "75ff1cc4-022c-4989-ba24-14d6c6c4f45b", "metadata": { "tags": [] }, "outputs": [], "source": [ "predictor.predict({\"inputs\": \"What is the capital of Germany?\",\n", "\"parameters\": {\n", "    \"max_new_tokens\": 30,\n", "    \"temperature\": 0.5,\n", "    \"repetition_penalty\": 1.1,\n", "    \"top_k\": 20,\n", "    \"return_full_text\": False\n", "}\n", "})" ] },
{ "cell_type": "markdown", "id": "56ae7538-6b09-4db2-8494-37b49a30933c", "metadata": {}, "source": [ "## Prompt Engineering\n",
"Prompt engineering is a technique used to design effective prompts for LLMs, with the goal of achieving:\n", "\n",
"- Control over the output: With prompt engineering, developers can control the output generated by LLMs. By designing prompts that specify the desired topic, style, tone, and level of formality, they can guide the LLM to produce text that meets the desired criteria.\n",
"- Mitigating bias: LLMs have been shown to produce biased outputs when prompted with certain topics or language patterns. By engineering prompts that avoid biased language and encourage fairness, developers can help mitigate these issues.\n",
"- Improving efficiency: Prompt engineering can help LLMs work more efficiently by guiding them to generate the desired output with fewer iterations. By providing clear, concise, and specific prompts, developers can help LLMs achieve the desired outcome faster and with fewer errors.\n", "\n",
"In general, a prompt can contain any of the following components:\n", "\n",
"- Instruction - a specific task or instruction you want the model to perform\n",
"- Context - external information or additional context that can steer the model to better responses\n",
"- Input Data - the input or question that we are interested in finding a response for\n",
"- Output Indicator - indicates the type or format of the output\n", "\n",
"In general, the more information we provide with the prompt, the better the above-mentioned goals will be achieved.\n", "\n",
"Let's try it out!" ] },
{ "cell_type": "code", "execution_count": null, "id": "60c30344-99d6-4670-a850-a41f9895428f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Simple unstructured prompt\n", "prompt = \"\"\"\n", "Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.\n", "\n", "What was OKT3 originally sourced from?\"\"\"\n", "\n", "predictor.predict({\"inputs\": prompt,\n", "\"parameters\": {\n", "    \"max_new_tokens\": 10,\n", "    \"temperature\": 0.7,\n", "    \"repetition_penalty\": 1.1,\n", "    \"top_k\": 20,\n", "    \"return_full_text\": False\n", "}\n", "})" ] },
{ "cell_type": "code", "execution_count": null, "id": "c149ed9b-fd75-4225-af16-1f0b111ec2aa", "metadata": { "tags": [] }, "outputs": [], "source": [ "# We now stick to the scheme proposed above\n", "prompt = \"\"\"\n", "Answer the question based on the context below. Keep the answer short and concise. Respond \"Unsure about answer\" if not sure about the answer.\n",
Respond \"Unsure about answer\" if not sure about the answer.\n", "\n", "Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.\n", "\n", "Question: What was OKT3 originally sourced from?\n", "\n", "Answer:\"\"\"\n", "\n", "predictor.predict({\"inputs\": prompt,\n", "\"parameters\": {\n", " \"max_new_tokens\": 10,\n", " \"temperature\": 0.7,\n", " \"repetition_penalty\": 1.1,\n", " \"top_k\": 20,\n", " \"return_full_text\": False\n", "}\n", "})" ] }, { "cell_type": "markdown", "id": "4c922d5c-0725-4472-a4da-102729ab62ef", "metadata": {}, "source": [ "In addition, [few-shot learning](https://www.analyticsvidhya.com/blog/2021/05/an-introduction-to-few-shot-learning/) is an interesting approach for the context element of a prompt. Few-shot learning is a prompt engineering technique that enables models to learn new tasks or concepts from only a few examples (usually a single digit number is just fine) or samples. Despite of the fact that the model has never seen this task in the training phase, we experience a significant boost in performance. " ] }, { "cell_type": "code", "execution_count": null, "id": "16512452-c095-4783-8e73-82429b478e29", "metadata": { "tags": [] }, "outputs": [], "source": [ "# One-shot\n", "prompt = \"\"\"\n", "Tweet: \"This new music video was incredibile\"\n", "Sentiment:\"\"\"\n", "predictor.predict({\"inputs\": prompt,\n", "\"parameters\": {\n", " \"max_new_tokens\": 20,\n", " \"temperature\": 0.5,\n", " \"repetition_penalty\": 1.1,\n", " \"top_k\": 20,\n", " \"return_full_text\": False\n", "}\n", "})" ] }, { "cell_type": "code", "execution_count": null, "id": "46858a2d", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Few-shot\n", "prompt = \"\"\"\n", "Tweet: \"I hate it when my phone battery dies.\"\n", "Sentiment: Negative\n", "###\n", "Tweet: \"My day has been ðŸ‘\"\n", "Sentiment: Positive\n", "###\n", "Tweet: \"This is the link to the article\"\n", "Sentiment: Neutral\n", "###\n", "Tweet: \"This new music video was incredibile\"\n", "Sentiment:\"\"\"\n", "predictor.predict({\"inputs\": prompt,\n", "\"parameters\": {\n", " \"max_new_tokens\": 20,\n", " \"temperature\": 0.5,\n", " \"repetition_penalty\": 1.1,\n", " \"top_k\": 20,\n", " \"return_full_text\": False\n", "}\n", "})" ] }, { "cell_type": "markdown", "id": "20e4a455", "metadata": {}, "source": [ "# Cleanup\n", "Finally, we clean up all resources not needed anymore since we pledge for the responsible use of compute resources. In this case this is the created endpoint together with the respective endpoint configuration. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "e30cb399-97f8-43cd-b446-2d7ac5081a42", "metadata": { "tags": [] }, "outputs": [], "source": [ "predictor.delete_endpoint(delete_endpoint_config=True)" ] }, { "cell_type": "code", "execution_count": null, "id": "c3a19208-f871-4098-80a5-931a9e8c276c", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": 
"ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", 
"vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": 
"Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "vscode": { "interpreter": { "hash": "5c7b89af1651d0b8571dde13640ecdccf7d5a6204171d6ab33e7c296e100e08a" } } }, "nbformat": 4, "nbformat_minor": 5 }