{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Using AI21 Contextual Answers on SageMaker through Model Packages\n", "\n", "This sample notebook shows you how to deploy **AI21 Contextual Answers** using Amazon SageMaker.\n", "\n", "## Pre-requisites:\n", "1. Before running this notebook, please make sure you got this notebook from the model catalog on SageMaker AWS Management Console.\n", "1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.\n", "1. Ensure that IAM role used has **AmazonSageMakerFullAccess**.\n", "1. This notebook is intended to work with **boto3 v1.25.4** or higher.\n", "\n", "## Contents:\n", "1. [Select model package](#1.-Select-model-package)\n", "1. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference)\n", " 1. [Create an endpoint](#A.-Create-an-endpoint)\n", " 1. [Interact with the model](#B.-Interact-with-the-model)\n", " 1. [Ask about financial reports](#C.-Ask-about-financial-reports)\n", "1. [Clean-up](#3.-Clean-up)\n", " 1. [Delete the endpoint](#A.-Delete-the-endpoint)\n", " 1. [Delete the model](#B.-Delete-the-model)\n", "\n", "\n", "## Usage instructions\n", "You can run this notebook one cell at a time (By using Shift+Enter for running a cell)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Select model package\n", "Confirm that you received this notebook from the model catalog in SageMaker AWS Management Console." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "tags": [] }, "outputs": [], "source": [ "model_package_map = {\n", " \"us-east-1\": \"arn:aws:sagemaker:us-east-1:865070037744:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"us-east-2\": \"arn:aws:sagemaker:us-east-2:057799348421:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"us-west-1\": \"arn:aws:sagemaker:us-west-1:382657785993:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"us-west-2\": \"arn:aws:sagemaker:us-west-2:594846645681:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"ca-central-1\": \"arn:aws:sagemaker:ca-central-1:470592106596:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"eu-central-1\": \"arn:aws:sagemaker:eu-central-1:446921602837:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"eu-west-1\": \"arn:aws:sagemaker:eu-west-1:985815980388:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"eu-west-2\": \"arn:aws:sagemaker:eu-west-2:856760150666:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"eu-west-3\": \"arn:aws:sagemaker:eu-west-3:843114510376:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"eu-north-1\": \"arn:aws:sagemaker:eu-north-1:136758871317:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"ap-southeast-1\": \"arn:aws:sagemaker:ap-southeast-1:192199979996:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"ap-southeast-2\": \"arn:aws:sagemaker:ap-southeast-2:666831318237:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"ap-northeast-2\": 
\"arn:aws:sagemaker:ap-northeast-2:745090734665:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"ap-northeast-1\": \"arn:aws:sagemaker:ap-northeast-1:977537786026:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"ap-south-1\": \"arn:aws:sagemaker:ap-south-1:077584701553:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\",\n", " \"sa-east-1\": \"arn:aws:sagemaker:sa-east-1:270155090741:model-package/contextual-answers-1-0-001-a85d7d493b3a39e3a8a8ec734f2befae\"\n", "}" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [] }, "outputs": [], "source": [ "import json\n", "from sagemaker import ModelPackage\n", "from sagemaker import get_execution_role\n", "import sagemaker as sage\n", "import boto3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check the version of boto3 - must be v1.25.4 or higher\n", "If you see a lower version number, pick another kernel to run the notebook, with Python 3.8 or above" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'1.26.151'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "boto3.__version__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Install ai21 python SDK" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Collecting ai21[SM]\n", " Downloading ai21-1.1.3.tar.gz (13 kB)\n", " Preparing metadata (setup.py) ... \u001b[?25ldone\n", "\u001b[33mWARNING: ai21 1.1.3 does not provide the extra 'sm'\u001b[0m\u001b[33m\n", "\u001b[0m\u001b[?25hRequirement already satisfied: requests in /opt/conda/lib/python3.7/site-packages (from ai21[SM]) (2.28.2)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/lib/python3.7/site-packages (from requests->ai21[SM]) (2.0.4)\n", "Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests->ai21[SM]) (2022.12.7)\n", "Requirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests->ai21[SM]) (2.8)\n", "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests->ai21[SM]) (1.26.15)\n", "Building wheels for collected packages: ai21\n", " Building wheel for ai21 (setup.py) ... \u001b[?25ldone\n", "\u001b[?25h Created wheel for ai21: filename=ai21-1.1.3-py3-none-any.whl size=21712 sha256=1a3bab1a70ecb6139e38bf83fc401ec8e56f3105a1968a789baf44de5bb33f21\n", " Stored in directory: /root/.cache/pip/wheels/6f/88/9f/9e9850d2707a665d4a2bb239644ae164c48f9f8ffcad65c2c8\n", "Successfully built ai21\n", "Installing collected packages: ai21\n", "Successfully installed ai21-1.1.3\n", "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n", "\u001b[0m\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.1.2\u001b[0m\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n" ] } ], "source": [ "! pip install -U \"ai21[SM]\"\n", "import ai21" ] }, 
{ "cell_type": "code", "execution_count": 6, "metadata": { "tags": [] }, "outputs": [], "source": [ "region = boto3.Session().region_name\n", "if region not in model_package_map:\n", " raise Exception(f\"Unsupported region: {region}\")\n", "\n", "model_package_arn = model_package_map[region]" ] }, 
{ "cell_type": "code", "execution_count": 7, "metadata": { "tags": [] }, "outputs": [], "source": [ "role = get_execution_role()\n", "sagemaker_session = sage.Session()\n", "\n", "runtime_sm_client = boto3.client(\"runtime.sagemaker\")" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Create an endpoint and perform real-time inference" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "If you want to understand how real-time inference with Amazon SageMaker works, see the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html)." ] }, 
{ "cell_type": "code", "execution_count": 8, "metadata": { "tags": [] }, "outputs": [], "source": [ "endpoint_name = \"contextual-answers\"\n", "\n", "content_type = \"application/json\"\n", "\n", "real_time_inference_instance_type = (\n", " \"ml.g5.12xlarge\"\n", ")" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### A. Create an endpoint" ] }, 
{ "cell_type": "code", "execution_count": 9, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "----------------!" ] } ], "source": [ "# Create a deployable model from the model package\n", "model = ModelPackage(\n", " role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session\n", ")\n", "\n", "# Deploy the model\n", "predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=endpoint_name, \n", " model_data_download_timeout=3600,\n", " container_startup_health_check_timeout=600,\n", " )" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Once the endpoint has been created, you can perform real-time inference." ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### B. Interact with the model" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "The **AI21 Studio Contextual Answers model** allows you to access our high-quality question answering technology. It is designed to answer questions based on a specific document context provided by the customer. This avoids factual issues that language models may have and ensures the answers it provides are grounded in that context document.\n", "\n", "The model receives document text, serving as a context, and a question, and returns an answer based entirely on that context. This means that if the answer to your question is not in the document, the model will say so (instead of providing a false answer)." ] }
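, { "cell_type": "markdown", "metadata": {}, "source": [ "Under the hood, each query is a standard SageMaker endpoint invocation that sends the document context and the question in the request body. The `ai21` SDK used in the rest of this notebook handles this for you. For reference only, a minimal sketch of a direct call with the `runtime_sm_client` created earlier might look like the cell below; the `context`, `question` and response field names shown are assumptions for illustration rather than an official schema, so prefer the SDK calls that follow." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Sketch only: invoke the endpoint directly with boto3 instead of the ai21 SDK.\n", "# The payload and response field names below are assumptions for illustration.\n", "payload = {\n", "    \"context\": \"The tower is 330 metres tall and is the tallest structure in Paris.\",\n", "    \"question\": \"How tall is the tower?\"\n", "}\n", "\n", "raw_response = runtime_sm_client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    ContentType=content_type,\n", "    Accept=\"application/json\",\n", "    Body=json.dumps(payload),\n", ")\n", "\n", "print(json.loads(raw_response[\"Body\"].read()))" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "To get a sense of the model's behavior, let's ask a toy question: how tall is the Eiffel Tower? 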
Most language models will simply answer according to their training data.\n", "\n", "This model, however, bases its answer solely on the context you provide. Let's use the following [Wikipedia paragraph](https://en.wikipedia.org/wiki/Eiffel_Tower#:~:text=The%20Eiffel%20Tower%20(%2F%CB%88a%C9%AA,from%20the%20Champ%20de%20Mars) as context, with small modifications:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Actual paragraph\n", "context = \"The tower is 330 metres (1,083 ft) tall,[6] about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest human-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure in the world to surpass both the 200-metre and 300-metre mark in height. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.\"\n", "\n", "# The paragraph with manual changes of the height\n", "false_context = \"The tower is 3 metres (10 ft) tall,[6] about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest human-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure in the world to surpass both the 200-metre and 300-metre mark in height. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.\"\n", "\n", "# The paragraph with the height omitted\n", "partial_context = \"Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest human-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure in the world to surpass both the 200-metre and 300-metre mark in height. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is what the model will say when asked the same question in each context." 
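, "\n", "\n", "Since the same call is repeated for each context, you may also find a small helper convenient. The sketch below is purely optional and only wraps the same `ai21.Answer.execute` call used throughout this notebook; the cells that follow keep the explicit calls." ] }, 
{ "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Optional convenience wrapper around the same ai21.Answer.execute call used below\n", "def ask(question, ctx, endpoint=endpoint_name):\n", "    response = ai21.Answer.execute(\n", "        context=ctx,\n", "        question=question,\n", "        sm_endpoint=endpoint\n", "    )\n", "    return response.answer\n", "\n", "# Example usage (equivalent to the cells below):\n", "# print(ask(\"What is the height of the Eiffel tower?\", context))"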
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The Eiffel Tower is 330 metres tall.\n" ] } ], "source": [ "# True context\n", "response = ai21.Answer.execute(\n", " context=context,\n", " question=\"What is the height of the Eiffel tower?\",\n", " sm_endpoint=endpoint_name\n", ")\n", "\n", "print(response.answer)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The Eiffel Tower is 3 metres (10 ft) tall\n" ] } ], "source": [ "# False context\n", "response = ai21.Answer.execute(\n", " context=false_context,\n", " question=\"What is the height of the Eiffel tower?\",\n", " sm_endpoint=endpoint_name\n", ")\n", "\n", "print(response.answer)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Answer not in document\n" ] } ], "source": [ "# Irrelevant context\n", "response = ai21.Answer.execute(\n", " context=partial_context,\n", " question=\"What is the height of the Eiffel tower?\",\n", " sm_endpoint=endpoint_name\n", ")\n", "\n", "print(response.answer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### C. Ask about financial reports" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The document context should be **no more than 10,000 characters**, and the question can be up to 160 characters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Imagine you are performing research and rely on financial reports to base your findings. Let's take the following part from [JPMorgan Chase & Co. 2021 annual report](https://www.jpmorganchase.com/content/dam/jpmc/jpmorgan-chase-and-co/investor-relations/documents/annualreport-2021.pdf):" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "tags": [] }, "outputs": [], "source": [ "financial_context = \"\"\"In 2020 and 2021, enormous QE — approximately $4.4 trillion, or 18%, of 2021 gross domestic product (GDP) — and enormous fiscal stimulus (which has been and always will be inflationary) — approximately $5 trillion, or 21%, of 2021 GDP — stabilized markets and allowed companies to raise enormous amounts of capital. In addition, this infusion of capital saved many small businesses and put more than $2.5 trillion in the hands of consumers and almost $1 trillion into state and local coffers. These actions led to a rapid decline in unemployment, dropping from 15% to under 4% in 20 months — the magnitude and speed of which were both unprecedented. Additionally, the economy grew 7% in 2021 despite the arrival of the Delta and Omicron variants and the global supply chain shortages, which were largely fueled by the dramatic upswing in consumer spending and the shift in that spend from services to goods. Fortunately, during these two years, vaccines for COVID-19 were also rapidly developed and distributed.\n", "In today's economy, the consumer is in excellent financial shape (on average), with leverage among the lowest on record, excellent mortgage underwriting (even though we've had home price appreciation), plentiful jobs with wage increases and more than $2 trillion in excess savings, mostly due to government stimulus. Most consumers and companies (and states) are still flush with the money generated in 2020 and 2021, with consumer spending over the last several months 12% above pre-COVID-19 levels. 
(But we must recognize that the account balances in lower-income households, smaller to begin with, are going down faster and that income for those households is not keeping pace with rising inflation.)\n", "Today's economic landscape is completely different from the 2008 financial crisis when the consumer was extraordinarily overleveraged, as was the financial system as a whole — from banks and investment banks to shadow banks, hedge funds, private equity, Fannie Mae and many other entities. In addition, home price appreciation, fed by bad underwriting and leverage in the mortgage system, led to excessive speculation, which was missed by virtually everyone — eventually leading to nearly $1 trillion in actual losses.\n", "\"\"\"" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Rather than reading the entire report, just ask what you want to know:" ] }, 
{ "cell_type": "code", "execution_count": 15, "metadata": { "tags": [] }, "outputs": [], "source": [ "question = \"Did the economy shrink after the Omicron variant arrived?\"" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "The model will answer based on the provided report:" ] }, 
{ "cell_type": "code", "execution_count": 16, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The GDP growth in 2021 was 7%, despite the arrival of the Omicron variant and the global supply chain shortages\n" ] } ], "source": [ "response = ai21.Answer.execute(\n", " context=financial_context,\n", " question=question,\n", " sm_endpoint=endpoint_name\n", ")\n", "\n", "print(response.answer)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "In addition, you can ask more complex questions, where the answer requires deductions rather than just extracting the correct sentence from the document context. This will result in abstractive, rather than extractive, answers that draw on several different parts of the document. For example, look at the following question:" ] }, 
{ "cell_type": "code", "execution_count": 17, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The COVID-19 pandemic had a negative impact on the economy, as businesses closed their doors and consumers stayed home. However, as vaccines were developed and the pandemic subsided, the economy began to recover\n" ] } ], "source": [ "harder_question = \"Did COVID-19 eventually help the economy?\"\n", "\n", "response = ai21.Answer.execute(\n", " context=financial_context,\n", " question=harder_question,\n", " sm_endpoint=endpoint_name\n", ")\n", "\n", "print(response.answer)" ] }, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "We now present the model with the following question. At first glance, it might seem answerable from the last paragraph. However, if you read the provided context carefully, you will find that the answer does not appear there. 
The model handles this as expected:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Answer not in document\n" ] } ], "source": [ "irrelevant_question = \"How did COVID-19 affect the financial crisis of 2008?\"\n", "\n", "response = ai21.Answer.execute(\n", " context=financial_context,\n", " question=irrelevant_question,\n", " sm_endpoint=endpoint_name\n", ")\n", "\n", "print(response.answer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Interested in learning more?\n", "Take a look at our [guide](https://docs.ai21.com/docs/contextual-answers-api) to understand all the capabilities of AI21 Contextual Answers model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Clean-up" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A. Delete the endpoint" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.sagemaker_session.delete_endpoint(endpoint_name)\n", "model.sagemaker_session.delete_endpoint_config(endpoint_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### B. Delete the model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.delete_model()" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 
11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, 
"hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", 
"vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }