{ "cells": [ { "cell_type": "markdown", "id": "ed679ad9-25d5-49e3-96ea-bfbdbc3c8e02", "metadata": { "tags": [] }, "source": [ "# Public sector use case: Benefit application\n", "\n", "## Lab 2 - Comprehend" ] }, { "cell_type": "markdown", "id": "a16c1180-cb34-4024-950e-00ac66671357", "metadata": {}, "source": [ "---\n", "\n", "## Introduction\n", "Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text. \n", "\n", "In this lab we will demonstrate how to use the AWS SDK for Python (Boto3) to perform real-time analysis using the Amazon Comprehend Custom Classifier model we trained via the Amazon Managment Console. We'll also demonstrate performing inference against the endpoint and review the results.\n", "\n", "**Note:** This notebook requires that you completed the lab for training an Amazon Comprehend Custom Classifier model.\n", "\n", "
\n", "\n", "- 1. [Prerequisites](#section_1_0)\n", " - 1.1 [Install packages](#section_1_1)\n", " - 1.2 [Import packages and modules](#section_1_2)\n", " - 1.3 [Setup the notebook role and session](#section_1_3)\n", " - 1.4 [Setup the AWS service clients](#section_1_4)\n", "- 2. [Deploying an Amazon Comprehend Custom Classifier ](#section_2_0)\n", " - 2.1 [Check the model's training and deployment status](#section_2_1)\n", " - 2.2 [Deploy the Custom Classifier model](#section_2_2)\n", " - 2.3 [Custom Classifier model inference](#section_2_3)\n", "- 3. [Cleanup Resources ](#section_3_0)\n", "- 4. [Conclusion](#section_4_0)\n", "- 5. [Additional Resources](#section_5_0)\n", "\n", "##### **Let's get started!**" ] }, { "cell_type": "markdown", "id": "41084fb7-c255-44ea-821e-d2e8e91ecb76", "metadata": { "tags": [] }, "source": [ "---\n", "\n", "## 1. Prerequisites\n", "\n", "\n", "In this section, we'll install and import packages, establish the notebook execution role and session, and setup the AWS service clients." ] }, { "cell_type": "markdown", "id": "03936536-9880-48d1-988f-0b39d8d2a901", "metadata": { "tags": [] }, "source": [ "### 1.1 Install packages\n", "\n", "\n", "We use *pip* to install packages from the Python Package Index and other indexes. A package contains all the files you need for a module.\n", "Modules are Python code libraries you can include in your project. You can think of Python packages as the directories on a file system and modules as files within directories. \n", "\n", "**Note:** after executing code in this cell there will be lots of debug output, this is normal, and expected." ] }, { "cell_type": "code", "execution_count": null, "id": "18e3a7b4-2a23-46d0-af6c-bb1d92b4e82e", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install boto3 \n", "!pip install botocore \n", "!pip install s3fs" ] }, { "cell_type": "markdown", "id": "ea934bed-94a2-48d5-971b-66f561df7051", "metadata": { "tags": [] }, "source": [ "### 1.2 Import packages and modules\n", "\n", "\n", "Python code in one module gains access to the code in another module by the process of importing it. In this section, we import packages and modules needed to execute code cells in this notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "d9a539ed-95d6-4cbf-ab02-cac6c333b1af", "metadata": { "tags": [] }, "outputs": [], "source": [ "import boto3\n", "import sagemaker\n", "import s3fs\n", "import time\n", "import json\n", "\n", "from tabulate import tabulate\n", "import trp.trp2 as t2" ] }, { "cell_type": "markdown", "id": "be6d5ecf-7c90-47da-a898-9ef31f395f87", "metadata": {}, "source": [ "### 1.3 Setup the notebook role and session\n", "\n", "\n", "As a managed service, Amazon SageMaker performs operations on your behalf on the AWS hardware that is managed by SageMaker. SageMaker can perform only operations that the user permits. A SageMaker user can grant these permissions with an IAM role (referred to as an execution role).\n", "\n", "To create and use a locally available execution role, execute the code in the following cell" ] }, { "cell_type": "code", "execution_count": null, "id": "aa3af590-23d0-4434-ad18-b5a1a543e4ae", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Get the IAM role and Sagemaker session\n", "try:\n", " role = sagemaker.get_execution_role()\n", "except:\n", " role = get_execution_role()\n", "\n", "# Get the SakeMaker session\n", "session = sagemaker.Session()\n", "\n", "# Get the region name\n", "region = session.boto_region_name\n", "\n", "print('Using IAM role arn: {}'.format(role))\n", "print('Using region: {}'.format(region))" ] }, { "cell_type": "markdown", "id": "5b5a397f-cd8e-48da-b157-c24ae67be43d", "metadata": {}, "source": [ "### 1.4 Setup the AWS service clients\n", "\n", "\n", "AWS' Boto3 library is used commonly to integrate Python applications with various AWS services. Clients provide a low-level interface to the AWS service. In this section, we will create the Amazon Comprehend Boto3 client to help execute code cells in this notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "0f226847-84d1-4cb8-8f8f-aabddb7efc85", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Setup the Comprehend client\n", "comprehend_client = boto3.client('comprehend', region_name=region)" ] }, { "cell_type": "markdown", "id": "7067f769-0cfd-445c-8bc3-96b393c1d8fc", "metadata": { "tags": [] }, "source": [ "## 2. Deploying an Amazon Comprehend Custom Classifier \n", "\n", "\n", "In this section, we'll demonstrate how to programatically use the Amazon Comprehend SDK to deploy the custom classifier we previously setup in the AWS management console for real-time inference. **Important**: Make sure you have ran all the code cells in Section 1 of this notebook and have completed the instructor guided Custom Classifier lab." ] }, { "cell_type": "code", "execution_count": null, "id": "462d39be-7899-446a-acc5-2ae490b2f3b0", "metadata": { "tags": [] }, "outputs": [], "source": [ "# ENTER THE NAME OF YOUR CUSTOM CLASSIFIER HERE\n", "documentClassifierName = \"ENTER THE NAME OF YOUR CUSTOM CLASSIFIER MODEL HERE\"\n", "\n", "# Well append '-endpoint' to maintain consistency in our naming convention\n", "documentClassifierEndpointName = documentClassifierName + \"-endpoint\"" ] }, { "cell_type": "markdown", "id": "23829cc2-ebb0-4fd8-aeb7-4549f8d130e7", "metadata": { "tags": [] }, "source": [ "### 2.1 Check the model's training and deployment status\n", "\n", "\n", "Some Amazon Comprehend Custom Classifier models can take more than 30 minutes to finish training, so let's create a function to periodically check the training status of the model. We want the model to return a 'TRAINED' status value before deploying the model to an inference endpoint. Additionally, deploying the model inference endpoint for this lab will take around 8 - 10 minuntes, so we'll periodically check its deployment status. We want the model endpoint to return a 'IN_SERVICE' status value before peforming an inference against the endpoint." ] }, { "cell_type": "code", "execution_count": null, "id": "0a606926-5c26-482b-ae7d-6b5ba203722e", "metadata": { "tags": [] }, "outputs": [], "source": [ "def get_custom_classifier_status(documentClassifierName, documentClassifierStatus):\n", " \n", " response = comprehend_client.list_document_classifiers(\n", " Filter={\n", " 'DocumentClassifierName': documentClassifierName\n", " })\n", " \n", " status = response.get('DocumentClassifierPropertiesList')[0].get('Status')\n", " print(\"Job status: {}\".format(status))\n", "\n", " while(status != documentClassifierStatus):\n", " time.sleep(30)\n", " response = comprehend_client.list_document_classifiers(\n", " Filter={\n", " 'DocumentClassifierName': documentClassifierName\n", " })\n", " status = response.get('DocumentClassifierPropertiesList')[0].get('Status')\n", " print(\"Job status: {}\".format(status))\n", "\n", " return status\n", "\n", "def get_custom_classifier_endpoint_status(custom_classifier_endpoint_arn, documentClassifierEndpointStatus):\n", " \n", " response = comprehend_client.describe_endpoint(EndpointArn=custom_classifier_endpoint_arn)\n", " status = response.get('EndpointProperties').get('Status')\n", " print(\"Job status: {}\".format(status))\n", "\n", " while(status != documentClassifierEndpointStatus):\n", " time.sleep(15)\n", " response = comprehend_client.describe_endpoint(EndpointArn=custom_classifier_endpoint_arn)\n", " status = response.get('EndpointProperties').get('Status')\n", " print(\"Job status: {}\".format(status))\n", "\n", " return status" ] }, { "cell_type": "markdown", "id": "7657a153-dbdd-46ed-8ec8-68dba3db745f", "metadata": { "tags": [] }, "source": [ "### 2.2 Deploy the Custom Classifier model\n", "\n", "\n", "Once the model has finished training, we can deploy it to a real-time inference endpoint. A status message will be periodically displayed to indicate the model training and model endpoint status.\n", "\n", "**Note this process may take between 8-10 minutes to complete**" ] }, { "cell_type": "code", "execution_count": null, "id": "165cd94c-9c4b-4e05-80b1-23dea0c8b33d", "metadata": { "tags": [] }, "outputs": [], "source": [ "documentClassifierArn = \"\"\n", "custom_classifier_endpoint_arn = \"\"\n", "\n", "# Check if the model has finished training\n", "if (get_custom_classifier_status(documentClassifierName, 'TRAINED') == 'TRAINED'):\n", " \n", " # Gets a list of the document classifiers that you have created\n", " document_classifiers = comprehend_client.list_document_classifiers(\n", " Filter={\n", " 'DocumentClassifierName': documentClassifierName\n", " })\n", " \n", " # Since we only trained one, we'll get the first in the list\n", " documentClassifierArn = document_classifiers.get('DocumentClassifierPropertiesList')[0].get('DocumentClassifierArn')\n", " print(documentClassifierArn)\n", "\n", " # Deploy the custom classifer to a real-time endpoint\n", " response = comprehend_client.create_endpoint(\n", " EndpointName=documentClassifierEndpointName,\n", " ModelArn=documentClassifierArn,\n", " DesiredInferenceUnits=1 # Each inference unit represents of a throughput of 100 characters per second\n", " )\n", " \n", " # Get the ARN of the custom classifer real-time endpoint\n", " custom_classifier_endpoint_arn = response.get('EndpointArn')\n", " \n", " if(get_custom_classifier_endpoint_status(custom_classifier_endpoint_arn, 'IN_SERVICE') == 'IN_SERVICE'):\n", " print(\"The custom classification model has been deployed to a real-time inference endpoint\")\n", " print(\"The EndpointArn is: {}\".format(custom_classifier_endpoint_arn))" ] }, { "cell_type": "markdown", "id": "e1a8d26f-b692-4e50-b708-fadfcbc41fdb", "metadata": { "tags": [] }, "source": [ "### 2.3 Custom Classifier model inference\n", "\n", "\n", "Now that the custom model is deployed, let's see how it classifies some text from a sample utility bill.\n", "\n", "Note, for real-time analysis, for all input document types, the input file maximum is one page, with no more than 10,000 characters. \n", "\n", "For convenience, we print the results in a tabular format. In the left column is the predicted class names and the right column has the predicted class value. \n" ] }, { "cell_type": "code", "execution_count": null, "id": "ba3d14c1-2446-48e8-b316-73f28775c05f", "metadata": {}, "outputs": [], "source": [ "%%time\n", "sample_text_for_inference = \"JOHN SMITH NYSEG Account Number: 1234-5478-123 Statement Date: January 1, 2021 Amount Due: $123.45 Service Address: 123 MAIN ST, ANYTOWN CA 90210 Page 1 of 4 Next Scheduled Read Date: On or about January 24, 2022 EBPP Account Summary Previous invoice $101.03 Payments received as of 12/27/21 -101.03 Residential Balance forward 0.00 Energy charges 384.71 Residential consumer Miscellaneous charges 0.92 discount $ 2.44 See details beginning Payment due upon receipt. $385.63 on page 3 To avoid a 1.5% late payment charge, please ensure payment is See messages on page 2 received by the date displayed below. Think of the minutes, money and natural resources you'll save by doing business online or by phone 24/7. Visit nyseg.com to: View and pay your bill online Submit and view meter readings Enroll and manage budget billing Enroll in Autopay Call our self-service line at 1.800.600.2275 for billing information, provide a meter reading and to pay by phone. Add $1, $2, or $5 to your payment to make a tax-deductible donation to NYSEG and RG&E Project SHARE Heating Fund. Learn more at nyseg.com. Please return bottom portion with your payment. Make checks payable to NYSEG. Account Number NYSEG 10041076133 Late Fee After 01/20/22 NYSEG P.O. BOX 847812 Due Upon Receipt BOSTON, MA 02284-7812 $385.63 Amount Paid $ JESSE ROBERTS 193 EDGEWOOD DR AVERILL PARK NY 12018-2510 Please do not write below this line. 601004107613300000385630000038563\"\n", "\n", "# Let's test our custom classifier \n", "classify_document_response = comprehend_client.classify_document(\n", " EndpointArn=custom_classifier_endpoint_arn, \n", " Text=sample_text_for_inference\n", ")\n", "\n", "# Here are the results of the custom classifier's predictions\n", "print(tabulate(classify_document_response.get('Classes')))" ] }, { "cell_type": "markdown", "id": "c94a178d-e262-4192-9850-ad57dafc7b1e", "metadata": { "tags": [] }, "source": [ "## 3. Cleanup Resources\n", "\n", "\n", "Run the following cells to delete the customer classifier endpoint and model resources." ] }, { "cell_type": "code", "execution_count": null, "id": "4bc9ba6e-f15e-46b8-9066-8e247e4425fe", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "try:\n", " # Check if real-time endpoint still exists \n", " response = comprehend_client.list_endpoints(\n", " Filter={\n", " 'ModelArn': documentClassifierArn,\n", " }\n", " )\n", " \n", " # Get the endpoint Arn\n", " custom_classifier_endpoint_arn = response.get('EndpointPropertiesList')[0].get('EndpointArn')\n", " \n", " # If exists, delete the real-time endpoint\n", " if(response.get('EndpointPropertiesList')):\n", "\n", " response = comprehend_client.delete_endpoint(\n", " EndpointArn=custom_classifier_endpoint_arn\n", " )\n", " print(\"Deleting the custom document classifier model endpoint.\")\n", " time.sleep(350)\n", " \n", "except Exception:\n", " print(\"The custom document classifier model endpoint not found\")" ] }, { "cell_type": "markdown", "id": "03e4ca52-b689-4f12-8c7b-6e840b52c240", "metadata": {}, "source": [ "We need to wait (5 - 7 mins) for the model endpoint to be deleted before deleting the custom classifier model itself. " ] }, { "cell_type": "code", "execution_count": null, "id": "f67f40c0-604d-402a-93d3-6c14d732bb52", "metadata": {}, "outputs": [], "source": [ "%%time\n", "try:\n", " \n", " # Check if real-time endpoint still exists\n", " response = comprehend_client.list_endpoints(\n", " Filter={\n", " 'ModelArn': documentClassifierArn,\n", " }\n", " )\n", " \n", " # Delete the custom classifier model if the endpoint no longer exists\n", " if(not response.get('EndpointPropertiesList')):\n", " response = comprehend_client.delete_document_classifier(\n", " DocumentClassifierArn=documentClassifierArn\n", " )\n", " print(\"Deleting the custom document classifier model.\") \n", " else:\n", " print(\"The custom document classifier model {} endpoint still exists\".format(documentClassifierArn))\n", " \n", "except Exception:\n", " print(\"The custom document classifier model is not found\") " ] }, { "cell_type": "markdown", "id": "5947b8d2-97b6-401d-9ae6-2fb1db496da5", "metadata": { "tags": [] }, "source": [ "## 4. Conclusion\n", "\n", "\n", "In this lab we demonsrated how to programatically deploy a trained Amazon Comprehend Custom Classifier to a real-time endpoint using the AWS SDK for Python (Boto3).\n" ] }, { "cell_type": "markdown", "id": "4160d465-9da7-41dd-a8d2-ffe783bf1944", "metadata": { "tags": [] }, "source": [ "## 5. Additional Resources\n", "\n", "\n", "- [Amazon Comprehend Custom classification Developer Guide](https://docs.aws.amazon.com/comprehend/latest/dg/how-document-classification.html)\n", "- [Amazon Comprehend Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/comprehend.html)" ] }, { "cell_type": "code", "execution_count": null, "id": "0d98564a-b3b7-4729-b602-2c672f64f685", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-2:429704687514:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }