{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "34003d7d-74b8-4dd5-bbde-bd8de5353607", "metadata": { "tags": [] }, "source": [ "# Lab 1 (b)\n", "\n", "## Objective\n", "In this lab, we'll explore how to host a large language model on Amazon SageMaker using [Hugging Face LLM Inference Container for Amazon SageMaker](https://huggingface.co/blog/sagemaker-huggingface-llm), which allows you to easily deploy the most popular open-source LLMs, including Falcon, StarCoder, BLOOM, GPT-NeoX, Llama, and T5\n", "\n", "## Introduction\n", "\n", "Language models have recently exploded in both size and popularity. In 2018, BERT-large entered the scene and, with its 340M parameters and novel transformer architecture, set the standard on NLP task accuracy. Within just a few years, state-of-the-art NLP model size has grown by more than 500x with models such as OpenAI’s 175 billion parameter GPT-3 and similarly sized open source Bloom 176B raising the bar on NLP accuracy. This increase in the number of parameters is driven by the simple and empirically-demonstrated positive relationship between model size and accuracy: more is better. With easy access from models zoos such as HuggingFace and improved accuracy in NLP tasks such as classification and text generation, practitioners are increasingly reaching for these large models. However, deploying them can be a challenge because of their size.\n", "\n", "## Background and Details\n", "We'll be working with [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) that was developed by the Technology Innovation Institute (TII). Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by TII based on Falcon-40B and finetuned on a mixture of Baize. It is made available under the Apache 2.0 license.\n", "## Instructions\n", "\n", "### Prerequisites\n", "\n", "#### To run this workshop...\n", "You need a computer with a web browser, preferably with the latest version of Chrome / FireFox.\n", "Sequentially read and follow the instructions described in AWS Hosted Event and Work Environment Set Up\n", "\n", "#### Recommended background\n", "It will be easier for you to run this workshop if you have:\n", "\n", "- Experience with Deep learning models\n", "- Familiarity with Python or other similar programming languages\n", "- Experience with Jupyter notebooks\n", "- Begineers level knowledge and experience with SageMaker Hosting/Inference.\n", "\n", "#### Target audience\n", "Data Scientists, ML Engineering, ML Infrastructure, MLOps Engineers, Technical Leaders.\n", "Intended for customers working with large Generative AI models including Language, Computer vision and Multi-modal use-cases.\n", "Customers using EKS/EC2/ECS/On-prem for hosting or experience with SageMaker.\n", "\n", "Level of expertise - 400\n", "\n", "#### Time to complete\n", "Approximately 45 minutes." ] }, { "attachments": {}, "cell_type": "markdown", "id": "c295cdfa-c6b5-45b0-88e4-5f3f66aa6137", "metadata": {}, "source": [ "We are going to use the SageMaker Python SDK to deploy Falcon-40b-Instruct model to Amazon SageMaker. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "d7cab4ed-1380-4ae5-a0c5-af2cc4e86490", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install --upgrade boto3 sagemaker" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e36960f4-07cf-4294-9fcc-8d320180f357", "metadata": {}, "source": [ "\n", "\n", "Before we begin with the actual work for packaging and deploying the model to Amazon SageMaker, we need to setup the notebook environment respectively. This includes:\n", "\n", "- retrieval of the execution role our SageMaker Studio domain is associated with for later usage\n", "- retrieval of our bucket for later usage\n", "- retrieval of the chosen region for later usage" ] }, { "cell_type": "code", "execution_count": null, "id": "60977155-7ea3-489b-ab11-3c06c7385c08", "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "import boto3\n", "sess = sagemaker.Session()\n", "# sagemaker session bucket -> used for uploading data, models and logs\n", "# sagemaker will automatically create this bucket if it not exists\n", "sagemaker_session_bucket=None\n", "if sagemaker_session_bucket is None and sess is not None:\n", " # set to default bucket if a bucket name is not given\n", " sagemaker_session_bucket = sess.default_bucket()\n", "\n", "try:\n", " role = sagemaker.get_execution_role()\n", "except ValueError:\n", " iam = boto3.client('iam')\n", " role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']\n", "\n", "sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)\n", "\n", "print(f\"sagemaker role arn: {role}\")\n", "print(f\"sagemaker session region: {sess.boto_region_name}\")\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "af10a9fe-2b82-474b-aaa9-88c62f72e952", "metadata": {}, "source": [ "Compared to deploying regular Hugging Face models, we first need to retrieve the container uri and provide it to our HuggingFaceModel model class with a **image_uri** pointing to the image. To retrieve the new Hugging Face LLM Deep Learning Container in Amazon SageMaker, we can use the **get_huggingface_llm_image_uri** method provided by the SageMaker SDK. This method allows us to retrieve the URI for the desired Hugging Face LLM DLC based on the specified backend, session, region, and version. " ] }, { "cell_type": "code", "execution_count": null, "id": "e461638a-6dc8-40e4-9601-c88edcb2885e", "metadata": {}, "outputs": [], "source": [ "from sagemaker.huggingface import get_huggingface_llm_image_uri\n", "\n", "# retrieve the llm image uri\n", "llm_image = get_huggingface_llm_image_uri(\n", " \"huggingface\",\n", " version=\"0.8.2\"\n", ")\n", "\n", "# print ecr image uri\n", "print(f\"llm image uri: {llm_image}\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "752b5437-abda-4e6f-ad85-7163a673ad5d", "metadata": {}, "source": [ "To deploy Falcon-40B-Instruct model to Amazon SageMaker, we create a HuggingFaceModel model class and define our endpoint configuration including the **hf_model_id**, and **instance_type**. 
We will use a **g5.12xlarge** instance type with 4 NVIDIA A10G GPUs and 96GB of total GPU memory.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e50d34ef-0215-418e-9685-57a6bd13e49a", "metadata": {}, "outputs": [], "source": [ "import json\n", "from sagemaker.huggingface import HuggingFaceModel\n", "\n", "# sagemaker config\n", "instance_type = \"ml.g5.12xlarge\"\n", "number_of_gpu = 4\n", "\n", "# TGI config\n", "config = {\n", "    'HF_MODEL_ID': \"tiiuae/falcon-40b-instruct\",  # model id from hf.co/models\n", "    'SM_NUM_GPUS': json.dumps(number_of_gpu),  # number of GPUs used per replica\n", "    'MAX_INPUT_LENGTH': json.dumps(1024),  # max length of input text\n", "    'MAX_TOTAL_TOKENS': json.dumps(2048),  # max length of the generation (including input text)\n", "    # 'HF_MODEL_QUANTIZE': \"bitsandbytes\",  # uncomment to quantize the model\n", "}\n", "\n", "# create HuggingFaceModel\n", "llm_model = HuggingFaceModel(\n", "    role=role,\n", "    image_uri=llm_image,\n", "    env=config\n", ")\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6896199d-350a-40f9-8104-fa1462a21037", "metadata": {}, "source": [ "After we have created the HuggingFaceModel, we can deploy it to Amazon SageMaker using the deploy method. We will deploy the model with the ml.g5.12xlarge instance type. The Hugging Face LLM Deep Learning Container is powered by [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference), an open-source, purpose-built solution for deploying and serving Large Language Models. TGI will automatically distribute and shard the model across all GPUs." ] }, { "cell_type": "code", "execution_count": null, "id": "5a470809-96e6-4ced-bdb3-048b5b2c1efc", "metadata": {}, "outputs": [], "source": [ "# Deploy model to an endpoint\n", "\n", "llm = llm_model.deploy(\n", "    initial_instance_count=1,\n", "    instance_type=instance_type,\n", "    # volume_size=400,  # volume_size must be None when using an instance with local SSD storage, e.g. p4 but not p3\n", ")\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "55c9138e-3e67-4b4a-9949-0e759522f453", "metadata": {}, "source": [ "After our endpoint is deployed, we can run inference on it using the predict method of the predictor. We can use different parameters to control the generation, defining them in the parameters attribute of the payload. As of today, TGI supports the following parameters:\n", "\n", "- temperature: Controls randomness in the model. Lower values will make the model more deterministic and higher values will make the model more random. Default value is 1.0.\n", "- max_new_tokens: The maximum number of tokens to generate. Default value is 20, max value is 512.\n", "- repetition_penalty: Controls the likelihood of repetition. Default value is null.\n", "- seed: The seed to use for random generation. Default value is null.\n", "- stop: A list of tokens at which to stop the generation. The generation will stop when one of these tokens is generated.\n", "- top_k: The number of highest-probability vocabulary tokens to keep for top-k filtering. Default value is null, which disables top-k filtering.\n", "- top_p: The cumulative probability mass of the highest-probability vocabulary tokens to keep for nucleus sampling. Default value is null.\n", "- do_sample: Whether or not to use sampling; greedy decoding is used otherwise. Default value is false.\n", "- best_of: Generate best_of sequences and return the one with the highest token log probabilities. Default value is null.\n", "- details: Whether or not to return details about the generation. 
Default value is false.\n", "- return_full_text: Whether or not to return the full text or only the generated part. Default value is false.\n", "- truncate: Whether or not to truncate the input to the maximum length of the model. Default value is true.\n", "- typical_p: The typical probability of a token. Default value is null.\n", "- watermark: Whether or not to apply watermarking to the generation. Default value is false.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "d50bf38a-82a8-4706-a213-269992825ccd", "metadata": {}, "outputs": [], "source": [ "# define payload\n", "prompt = \"\"\"You are a helpful Assistant, called Falcon. You know everything about AWS.\n", "\n", "User: Can you tell me something about Amazon SageMaker?\n", "Falcon:\"\"\"\n", "\n", "# hyperparameters for llm\n", "payload = {\n", "    \"inputs\": prompt,\n", "    \"parameters\": {\n", "        \"do_sample\": True,\n", "        \"top_p\": 0.9,\n", "        \"temperature\": 0.8,\n", "        \"max_new_tokens\": 1024,\n", "        \"repetition_penalty\": 1.03,\n", "        \"stop\": [\"\\nUser:\", \"<|endoftext|>\", \"\"]\n", "    }\n", "}\n", "\n", "# send request to endpoint\n", "response = llm.predict(payload)\n", "\n", "for seq in response:\n", "    print(f\"Result: {seq['generated_text']}\")\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6e6b7092-450b-4994-a8d0-b0937f82b394", "metadata": {}, "source": [ "## Prompt Engineering\n", "Prompt engineering is the practice of designing effective prompts for LLMs, with the following goals:\n", "\n", "- Control over the output: With prompt engineering, developers can control the output generated by LLMs. By designing prompts that specify the desired topic, style, tone, and level of formality, they can guide the LLM to produce text that meets the desired criteria.\n", "- Mitigating bias: LLMs have been shown to produce biased outputs when prompted with certain topics or language patterns. By engineering prompts that avoid biased language and encourage fairness, developers can help mitigate these issues.\n", "- Improving efficiency: Prompt engineering can help LLMs work more efficiently by guiding them to generate the desired output with fewer iterations. By providing clear, concise, and specific prompts, developers can help LLMs achieve the desired outcome faster and with fewer errors.\n", "\n", "In general, a prompt can contain any of the following components:\n", "\n", "- Instruction - a specific task or instruction you want the model to perform\n", "- Context - external information or additional context that can steer the model to better responses\n", "- Input Data - the input or question that we want a response for\n", "- Output Indicator - indicates the type or format of the output\n", "\n", "In general, the more relevant information we provide in the prompt, the better we achieve the goals mentioned above.\n", "\n", "Let's try it out!" ] }, { "cell_type": "code", "execution_count": null, "id": "473b3cfe-01ca-4d4a-a82d-b8b3f8c321a2", "metadata": {}, "outputs": [], "source": [ "# Simple unstructured prompt\n", "prompt = \"\"\"\n", "Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. 
In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.\n", "\n", "User: What was OKT3 originally sourced from?\n", "\n", "Falcon:\"\"\"\n", "\n", "\n", "# hyperparameters for llm\n", "payload = {\n", "    \"inputs\": prompt,\n", "    \"parameters\": {\n", "        \"do_sample\": True,\n", "        \"top_p\": 0.9,\n", "        \"temperature\": 0.8,\n", "        \"max_new_tokens\": 1024,\n", "        \"repetition_penalty\": 1.03,\n", "        \"stop\": [\"\\nUser:\", \"<|endoftext|>\", \"\"]\n", "    }\n", "}\n", "\n", "# send request to endpoint\n", "response = llm.predict(payload)\n", "\n", "for seq in response:\n", "    print(f\"Result: {seq['generated_text']}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "11b018ff-7ac3-4c1e-baca-66b373b5f36e", "metadata": {}, "outputs": [], "source": [ "# We now stick to the scheme proposed above\n", "prompt = \"\"\"\n", "Answer the question based on the context below. Keep the answer short and concise. Respond \"Unsure about answer\" if not sure about the answer.\n", "\n", "Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.\n", "\n", "Question: What was OKT3 originally sourced from?\n", "\n", "Answer:\"\"\"\n", "\n", "\n", "# hyperparameters for llm\n", "payload = {\n", "    \"inputs\": prompt,\n", "    \"parameters\": {\n", "        \"do_sample\": True,\n", "        \"top_p\": 0.9,\n", "        \"temperature\": 0.8,\n", "        \"max_new_tokens\": 1024,\n", "        \"repetition_penalty\": 1.03,\n", "        \"stop\": [\"\\nUser:\", \"<|endoftext|>\", \"\"]\n", "    }\n", "}\n", "\n", "# send request to endpoint\n", "response = llm.predict(payload)\n", "for seq in response:\n", "    print(f\"Result: {seq['generated_text']}\")\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "aa161db6-7a30-4186-abf2-36503d771826", "metadata": {}, "source": [ "In addition, [few-shot learning](https://www.analyticsvidhya.com/blog/2021/05/an-introduction-to-few-shot-learning/) is an interesting approach for the context element of a prompt. Few-shot learning is a prompt engineering technique that enables models to learn a new task or concept from only a few examples or samples (a single-digit number is usually enough). Even though the model has never seen the task during training, these in-context examples typically give a significant boost in performance.\n",
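"\n",
"As a small illustration (purely a sketch; the helper below is not part of the lab code), such prompts can also be assembled programmatically from a list of example pairs, which keeps the `Tweet:`/`Sentiment:` scheme consistent across experiments:\n",
"\n",
"```python\n",
"# Hypothetical helper: build a few-shot prompt in the Tweet/Sentiment format used below\n",
"def build_few_shot_prompt(examples, query, separator=\"###\"):\n",
"    # examples: list of (tweet_text, sentiment_label) pairs; query: the tweet to classify\n",
"    blocks = [f'Tweet: \"{text}\"\\nSentiment: {label}' for text, label in examples]\n",
"    blocks.append(f'Tweet: \"{query}\"\\nSentiment:')\n",
"    return \"\\n\" + f\"\\n{separator}\\n\".join(blocks)\n",
"```\n",
"\n",
"The next two cells write such prompts out by hand: first a single query without any examples, then a few-shot variant with three labelled examples. 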
" ] }, { "cell_type": "code", "execution_count": null, "id": "ff9354fa-35f4-44fd-be85-03e93b7fec3a", "metadata": {}, "outputs": [], "source": [ "# One-shot\n", "prompt = \"\"\"\n", "Tweet: \"This new music video was incredibile\"\n", "Sentiment:\"\"\"\n", "\n", "\n", "# hyperparameters for llm\n", "payload = {\n", " \"inputs\": prompt,\n", " \"parameters\": {\n", " \"do_sample\": True,\n", " \"top_p\": 0.9,\n", " \"temperature\": 0.8,\n", " \"max_new_tokens\": 1024,\n", " \"repetition_penalty\": 1.03,\n", " \"stop\": [\"\\nUser:\",\"<|endoftext|>\",\"\"]\n", " }\n", "}\n", "\n", "# send request to endpoint\n", "response = llm.predict(payload)\n", "\n", "for seq in response:\n", " print(f\"Result: {seq['generated_text']}\")\n" ] }, { "cell_type": "code", "execution_count": null, "id": "40bb5842-077d-4da3-9760-a04d9424699e", "metadata": {}, "outputs": [], "source": [ "# Few-shot\n", "prompt = \"\"\"\n", "Tweet: \"I hate it when my phone battery dies.\"\n", "Sentiment: Negative\n", "###\n", "Tweet: \"My day has been 👍\"\n", "Sentiment: Positive\n", "###\n", "Tweet: \"This is the link to the article\"\n", "Sentiment: Neutral\n", "###\n", "Tweet: \"This new music video was incredibile\"\n", "Sentiment:\"\"\"\n", "\n", "# hyperparameters for llm\n", "payload = {\n", " \"inputs\": prompt,\n", " \"parameters\": {\n", " \"do_sample\": True,\n", " \"top_p\": 0.9,\n", " \"temperature\": 0.8,\n", " \"max_new_tokens\": 1024,\n", " \"repetition_penalty\": 1.03,\n", " \"stop\": [\"\\nUser:\",\"<|endoftext|>\",\"\"]\n", " }\n", "}\n", "\n", "# send request to endpoint\n", "response = llm.predict(payload)\n", "for seq in response:\n", " print(f\"Result: {seq['generated_text']}\")\n" ] }, { "cell_type": "code", "execution_count": null, "id": "0dbbb9c4-7428-45fa-9b25-d0392b08f62e", "metadata": { "tags": [] }, "outputs": [], "source": [ "llm.delete_model()\n", "llm.delete_endpoint()\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, 
"_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, 
"hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", 
"vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }