{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "a73bd45f-9f55-4c7b-93ae-9db1135f2f0f", "metadata": { "tags": [] }, "source": [ "# Amazon SageMaker Ground Truth Demonstration for Video Frame Object Tracking Labeling Job\n", "\n", "1. [Introduction](#1-introduction)\n", " 1. [Cost and Runtime](#11-cost-and-runtime)\n", " 2. [Prerequisites](#12-prerequisites)\n", "2. [Launch the Notebook Instance and Setup the Environment](#2-launch-the-notebook-instance-and-setup-the-environment)\n", "3. [Run a Ground Truth Labeling Job](#3-run-a-ground-truth-labeling-job)\n", " 1. [Prepare the Data](#31-prepare-the-data)\n", " 2. [Create a Video Frame Input Manifest File](#32-create-a-video-frame-input-manifest-file)\n", " 3. [Create the Instruction Template](#33-create-the-instruction-template)\n", " 4. [Use a Private Team to Test Your Task](#34-use-a-private-team-to-test-your-task)\n", " 5. [Define Pre-built Lambda Functions for Use In the Labeling Job](#35-define-pre-built-lambda-functions-for-use-in-the-labeling-job)\n", " 6. [Submit the Ground Truth Job Request](#36-submit-the-ground-truth-job-request)\n", " 7. [Monitor the Job Progress](#37-monitor-the-job-progress)\n", " 8. [Preview the Worker UI Task](#38-preview-the-worker-ui-task)\n", " 9. [View the Task Results](#39-view-the-task-results)\n", "4. [Clean Up - Optional](#4-clean-up---optional)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0abbbc7c-a515-4475-934f-c48cf2c66b48", "metadata": {}, "source": [ "## 1. Introduction\n", "\n", "This sample notebook takes you through an end-to-end workflow to demonstrate the functionality of SageMaker Ground Truth Video Frame Object Tracking. 
You can use the video frame object tracking task type to have workers track the movement of objects in a sequence of video frames (images extracted from a video) using bounding box, polyline, polygon, or keypoint annotation tools.\n", "\n", "Before you begin, we highly recommend you start a Ground Truth labeling job through the AWS Console first to familiarize yourself with the workflow. The AWS Console offers less flexibility than the API, but it is simple to use.\n", "\n", "For more information, refer to the Amazon SageMaker Developer Guide: [Video Frame Object Tracking](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-video-object-tracking.html).\n", "\n", "#### 1.1 Cost and Runtime\n", "\n", "1. For pricing, please refer to the [Ground Truth pricing scheme](https://aws.amazon.com/sagemaker/groundtruth/pricing/). Amazon SageMaker Ground Truth can use active learning to automate the labeling of your input data for certain built-in task types. Active learning is a machine learning technique that identifies data that should be labeled by your workers; in Ground Truth, this functionality is called automated data labeling, and it helps reduce the cost and time it takes to label your dataset compared to using only humans. This demo, however, labels a small set of frames with a private human workforce.\n", "\n", "#### 1.2 Prerequisites\n", "To run this notebook, you can simply execute each cell one-by-one. To understand what's happening, you'll need:\n", "* An S3 bucket you can write to -- please provide its name in the following cell. The bucket must be in the same region as this SageMaker Notebook instance. You can also change the `EXP_NAME` to any valid S3 prefix. 
All the files related to this experiment will be stored under that prefix in your bucket.\n", "* Basic familiarity with [AWS S3](https://docs.aws.amazon.com/s3/index.html)\n", "* Basic understanding of [Amazon SageMaker](https://aws.amazon.com/sagemaker/)\n", "* Basic familiarity with the [AWS Command Line Interface (CLI)](https://aws.amazon.com/cli/). Set it up with credentials to access the AWS account you're running this notebook from. This should work out-of-the-box on SageMaker Jupyter Notebook instances." ] }, { "attachments": {}, "cell_type": "markdown", "id": "b560a438-db66-40f0-8381-98ad893d5337", "metadata": {}, "source": [ "## 2. Launch the Notebook Instance and Setup the Environment\n", "In this step, you will use an Amazon SageMaker Studio notebook to call Amazon SageMaker APIs to create a video frame object tracking labeling job. In SageMaker Studio, click on the \"File Browser\" pane on the left side, navigate to the \"amazon-sagemaker-groundtruth-workshop/02-module-label-videos-videoframes\" directory, and then double-click the `video-frame-object-tracking-labeling.ipynb` notebook.\n", "If you are prompted to choose a Kernel, choose the “Python 3 (Data Science)” kernel and click “Select”.\n", "\n", "This notebook has been tested only on SageMaker Studio notebooks and SageMaker Notebook Instances. The runtimes given are approximate; we used an `ml.t3.medium` instance with the `Data Science` image. However, you can also run it on a local instance by first executing the cell below on SageMaker, and then copying the `role` string to your local copy of the notebook.\n", "\n", "NOTES: \n", "- This notebook will create/remove subdirectories in its working directory. We recommend placing this notebook in its own directory before running it. \n", "\n", "- Ground Truth requires that all S3 buckets containing labeling job input image data have a CORS policy attached. 
To learn more about this change, see [CORS Permission Requirement](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-video-overview.html) for Video Frame Object Tracking." ] }, { "cell_type": "code", "execution_count": null, "id": "7973cd90-c64a-4df1-88e8-b82be01a2edc", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 01\n", "\n", "%load_ext autoreload\n", "%autoreload 2\n", "\n", "import os\n", "import json\n", "import time\n", "\n", "import boto3\n", "import sagemaker\n", "\n", "sess = sagemaker.Session()\n", "BUCKET = sess.default_bucket()\n", "\n", "EXP_NAME = \"label-video/video-frame-object-tracking\"  # Any valid S3 prefix." ] }, { "cell_type": "code", "execution_count": null, "id": "7210a79d-ca05-419d-b2a3-09cb12c62b03", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 02\n", "\n", "# Make sure the bucket is in the same region as this notebook.\n", "role = sagemaker.get_execution_role()\n", "region = boto3.session.Session().region_name\n", "\n", "s3 = boto3.client(\"s3\")\n", "bucket_region = s3.head_bucket(Bucket=BUCKET)[\"ResponseMetadata\"][\"HTTPHeaders\"][\n", "    \"x-amz-bucket-region\"\n", "]\n", "\n", "assert (\n", "    bucket_region == region\n", "), f\"Your S3 bucket {BUCKET} and this notebook need to be in the same region.\"" ] }, { "attachments": {}, "cell_type": "markdown", "id": "8e627465-266f-4c6d-bd2c-a64e57391731", "metadata": { "tags": [] }, "source": [ "## 3. Run a Ground Truth Labeling Job\n", "\n", "\n", "**This section should take about 30 min to complete.**\n", "\n", "We will first run a labeling job. 
This involves several steps: collecting the video frames for labeling, specifying the possible label categories, creating instructions, and writing a labeling job specification.\n", "\n", "### 3.1 Prepare the Data\n", "\n", "For the purpose of this demo, we use a 9-frame dataset created by the author, Michael Daniels; it can be found in the `object_tracking_data` directory.\n", "\n", "We will copy these frames from the `object_tracking_data` directory to our S3 `BUCKET`, and will create the corresponding *input manifest*. The input manifest is a formatted list of the S3 locations of the images we want Ground Truth to annotate. We will upload this manifest to our S3 `BUCKET`.\n", "\n", "### 3.2 Create a Video Frame Input Manifest File\n", "Ground Truth uses the input manifest file to identify the location of your input dataset when creating labeling tasks. For video frame object tracking labeling jobs, each line in the input manifest file identifies the location of a video frame sequence file. Each sequence file identifies the images included in a single sequence of video frames. For more information, click [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-video-manual-data-setup.html#sms-video-create-manifest). Run the next cell to create the `input.manifest` and `input.manifest.json` files."
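, "
To make the format concrete, here is a minimal sketch (plain Python, standard library only) of the two artifacts the next cell produces; the bucket name and frame file names below are placeholders:

```python
import json

# Hypothetical frame file names; in this notebook they come from ./object_tracking_data/.
frame_files = ['frame_0001.png', 'frame_0002.png', 'frame_0003.png']

# The frame sequence file lists every frame in one video sequence.
sequence = {
    'seq-no': 1,
    'prefix': 's3://example-bucket/label-video/video-frame-object-tracking/',
    'number-of-frames': len(frame_files),
    'frames': [{'frame-no': i, 'frame': name} for i, name in enumerate(frame_files)],
}

# Each line of the input manifest identifies one sequence file.
manifest_line = {'source-ref': 's3://example-bucket/label-video/video-frame-object-tracking/input.manifest.json'}

print(json.dumps(sequence))
print(json.dumps(manifest_line))
```

Ground Truth reads each manifest line, fetches the sequence file it points at, and then loads every frame listed in that sequence.
"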
] }, { "cell_type": "code", "execution_count": null, "id": "b7a8dc88-7274-4523-a039-a92253bdf7bf", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 03\n", "\n", "manifest_name = 'input.manifest'\n", "\n", "# Upload each frame and record it in the sequence's frame list.\n", "frames = []\n", "fr_no = 0\n", "for filename in sorted(os.listdir('./object_tracking_data/')):\n", "    if filename.endswith(('.jpg', '.jpeg', '.png')):\n", "        frames.append({\"frame-no\": fr_no, \"frame\": filename})\n", "        s3.upload_file(f\"./object_tracking_data/{filename}\", BUCKET, EXP_NAME + f\"/{filename}\")\n", "        fr_no += 1\n", "\n", "total_frames = len(frames)\n", "\n", "json_body = {\n", "    \"seq-no\": 1,\n", "    \"prefix\": f\"s3://{BUCKET}/{EXP_NAME}/\",\n", "    \"number-of-frames\": total_frames,\n", "    \"frames\": frames\n", "}\n", "\n", "# Create the input.manifest.json (frame sequence) file\n", "with open(\"./input.manifest.json\", \"w\") as f:\n", "    json.dump(json_body, f, separators=(',', ':'))\n", "\n", "# Create the input.manifest file, whose single line points at the sequence file\n", "manifest = {\"source-ref\": f\"s3://{BUCKET}/{EXP_NAME}/{manifest_name}.json\"}\n", "\n", "with open(f\"./{manifest_name}\", \"w\") as outfile:\n", "    json.dump(manifest, outfile, separators=(',', ':'))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "1ffc5bde-65f5-4114-a6bb-b3ff55f15343", "metadata": {}, "source": [ "Run the next cell to upload the `input.manifest` and `input.manifest.json` files to S3."
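, "
Note how the two files fit together: the manifest's `source-ref` points at the sequence file, and each frame's full S3 URI is the sequence's `prefix` joined with the frame file name. A small illustration (the bucket and file names are placeholders):

```python
# A frame's full S3 URI is the sequence file's prefix joined with the frame name.
# The bucket and file names here are placeholders.
sequence = {
    'prefix': 's3://example-bucket/label-video/video-frame-object-tracking/',
    'frames': [{'frame-no': 0, 'frame': 'frame_0001.png'}],
}

frame_uris = [sequence['prefix'] + fr['frame'] for fr in sequence['frames']]
print(frame_uris[0])
```
"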
] }, { "cell_type": "code", "execution_count": null, "id": "20031da2-a4e6-4c49-8ac1-47511049dc11", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 04\n", "\n", "s3.upload_file(\"input.manifest\", BUCKET, f\"{EXP_NAME.split('/')[0]}\" + \"/input.manifest\")\n", "s3.upload_file(\"input.manifest.json\", BUCKET, EXP_NAME + \"/input.manifest.json\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "1cfdadc9-2104-41c7-bf7f-4e68649ef998", "metadata": {}, "source": [ "After running the cell above, you should be able to see the following files in the [S3 console](https://console.aws.amazon.com/s3/):\n", " \n", "- `s3://BUCKET/label-video/video-frame-object-tracking/input.manifest.json`\n", "- `s3://BUCKET/label-video/input.manifest`\n", "\n", "We recommend you inspect the contents of these files! You can download them all to a local machine using the AWS CLI." ] }, { "attachments": {}, "cell_type": "markdown", "id": "b39fd076-d195-423c-a165-61eb6fe75342", "metadata": {}, "source": [ "### 3.3 Create the Instruction Template \n", "Specify labels and provide instructions for the workers." ] }, { "cell_type": "code", "execution_count": null, "id": "ae761eb9-00cf-475a-99fc-78e8855e1fb0", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 05\n", "\n", "# define the classes\n", "json_body = {\n", "    \"labels\": [\n", "        {\n", "            \"label\": \"cat\"\n", "        }\n", "    ],\n", "    \"instructions\": {\n", "        \"shortInstruction\": \"
Please draw a bounding box for each object in each frame
\",\n", "        \"fullInstruction\": \"\"\n", "    }\n", "}\n", "\n", "# upload the json to s3\n", "with open(\"class_labels.json\", \"w\") as f:\n", "    json.dump(json_body, f)\n", "\n", "s3.upload_file(\"class_labels.json\", BUCKET, EXP_NAME + \"/class_labels.json\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f929d95e-6784-46f9-90f2-24972a07b557", "metadata": { "tags": [] }, "source": [ "### 3.4 Use a private team to test your task\n", "\n", "Refer to the Prerequisites section to set up a private workforce team." ] }, { "cell_type": "code", "execution_count": null, "id": "f3354c92-7f87-454e-a726-a314dc622058", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 06\n", "\n", "# private workforce team\n", "private_workteam_arn = \"\"\n", "\n", "assert (\n", "    private_workteam_arn\n", "), \"Please enter your private workforce team ARN as private_workteam_arn. You can find it on the Amazon SageMaker console > Ground Truth > Labeling workforces > Private Teams\"\n", "\n", "WORKTEAM_ARN = private_workteam_arn\n", "print(\"WORKTEAM_ARN : {}\".format(WORKTEAM_ARN))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "1cbebdfb-dedf-4e88-8b8b-d1ee965a218e", "metadata": {}, "source": [ "### 3.5 Define Pre-built Lambda Functions for Use In the Labeling Job\n", "Before we submit the request, we need to define the ARNs for the following key components of the labeling job: 1) the pre-labeling task Lambda function, 2) the annotation consolidation Lambda function, and 3) the human task UI template. These are identified by strings that include region names and AWS service account numbers, so we define a mapping below that enables you to run this notebook in any of the supported regions. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "61d59393-8c73-4bb4-9a10-97781885315f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 07\n", "\n", "ac_arn_map = {\n", "    \"us-west-2\": \"081040173940\",\n", "    \"us-east-1\": \"432418664414\",\n", "    \"us-east-2\": \"266458841044\",\n", "    \"eu-west-1\": \"568282634449\",\n", "    \"ap-northeast-1\": \"477331159723\",\n", "}\n", "\n", "# PreHumanTaskLambdaArn for VideoObjectTracking\n", "prehuman_arn = f\"arn:aws:lambda:{region}:{ac_arn_map[region]}:function:PRE-VideoObjectTracking\"\n", "\n", "# AnnotationConsolidationConfig for VideoObjectTracking\n", "acs_arn = f\"arn:aws:lambda:{region}:{ac_arn_map[region]}:function:ACS-VideoObjectTracking\"" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4a7d33a4-804e-4022-be8f-feeb0488d810", "metadata": { "tags": [] }, "source": [ "### 3.6 Submit the Ground Truth Job Request\n", "\n", "The API starts a Ground Truth job by submitting a request. The request contains the \n", "full configuration of the annotation task, and allows you to modify the fine details of\n", "the job that are fixed to default values when you use the AWS Console. The parameters that make up the request are described in more detail in the [SageMaker Ground Truth documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateLabelingJob.html).\n", "\n", "After you submit the request, you should be able to see the job in your AWS Console, at `Amazon SageMaker > Labeling Jobs`.\n", "You can track the progress of the job there. A small job like this one finishes as soon as the work is done; larger jobs (say, 10,000 frames) take correspondingly longer.\n", "\n", "Run the next two cells. They will define the task and submit it to the private workforce (to you).\n", "After a few minutes, you should be able to see your task in your private workforce interface.\n", "Please verify that the task appears as you want it to appear."
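, "
Before running them, it may help to see the overall shape of the request the next two cells assemble. The skeleton below is a sketch only; every ARN, bucket, and name in it is a placeholder, and the real values come from the cells that follow:

```python
# Skeleton of a CreateLabelingJob request for video frame object tracking.
# All ARNs, bucket names, and job names here are placeholders.
request_skeleton = {
    'LabelingJobName': 'video-frame-object-tracking-0000000000',
    'LabelAttributeName': 'category-ref',
    'RoleArn': 'arn:aws:iam::111122223333:role/example-sagemaker-role',
    'InputConfig': {
        'DataSource': {'S3DataSource': {'ManifestS3Uri': 's3://example-bucket/label-video/input.manifest'}},
    },
    'OutputConfig': {'S3OutputPath': 's3://example-bucket/label-video/output/'},
    'LabelCategoryConfigS3Uri': 's3://example-bucket/label-video/class_labels.json',
    'HumanTaskConfig': {
        'WorkteamArn': 'arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/example-team',
        'UiConfig': {'HumanTaskUiArn': 'arn:aws:sagemaker:us-east-1:394669845002:human-task-ui/VideoObjectTracking'},
        'PreHumanTaskLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:PRE-VideoObjectTracking',
        'AnnotationConsolidationConfig': {
            'AnnotationConsolidationLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:ACS-VideoObjectTracking',
        },
        'TaskTitle': 'Video object tracking',
        'TaskDescription': 'Track each object across the video frames.',
        'NumberOfHumanWorkersPerDataObject': 1,
        'TaskTimeLimitInSeconds': 28800,
    },
}

# The top-level keys map one-to-one onto the ground_truth_request built below.
print(sorted(request_skeleton))
```
"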
] }, { "cell_type": "code", "execution_count": null, "id": "ef5de0fa-c5bb-4b97-b5b0-a894f885c68e", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 08\n", "\n", "# task definitions\n", "task_description = 'Track the location of the cat across video frames. Please draw a box around each object. Thank you!'\n", "task_keywords = ['Video Frame Object Tracking']\n", "task_title = 'Video object tracking'\n", "job_name = \"video-frame-object-tracking-\" + str(int(time.time()))\n", "no_human_per_object = 1  # number of workers required to label each data object\n", "task_time_limit = 28800  # a worker has to complete a task within 8 hours\n", "task_availability_lifetime = 21600  # 6 hours to complete all pending tasks by human worker(s)\n", "max_concurrent_task_count = 100  # maximum number of data objects that can be labeled by human workers at the same time" ] }, { "cell_type": "code", "execution_count": null, "id": "2d4525dc-40cd-4a04-94c7-cc774b1e925f", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 09\n", "\n", "human_task_config = {\n", "    'PreHumanTaskLambdaArn': prehuman_arn,\n", "    'TaskKeywords': task_keywords,\n", "    'TaskTitle': task_title,\n", "    'TaskDescription': task_description,\n", "    'NumberOfHumanWorkersPerDataObject': no_human_per_object,\n", "    'TaskTimeLimitInSeconds': task_time_limit,\n", "    'TaskAvailabilityLifetimeInSeconds': task_availability_lifetime,\n", "    'MaxConcurrentTaskCount': max_concurrent_task_count,\n", "    'AnnotationConsolidationConfig': {\n", "        'AnnotationConsolidationLambdaArn': acs_arn,\n", "    },\n", "    'UiConfig': {\n", "        'HumanTaskUiArn': f\"arn:aws:sagemaker:{region}:394669845002:human-task-ui/VideoObjectTracking\",\n", "    },\n", "}\n", "\n", "human_task_config[\"WorkteamArn\"] = private_workteam_arn\n", "\n", "ground_truth_request = {\n", "    \"InputConfig\": {\n", "        'DataSource': {\n", "            'S3DataSource': {\n", "                'ManifestS3Uri': f\"s3://{BUCKET}/{EXP_NAME.split('/')[0]}/{manifest_name}\",\n", "            }\n", "        },\n",
"        'DataAttributes': {\n", "            'ContentClassifiers': [\n", "                'FreeOfPersonallyIdentifiableInformation', 'FreeOfAdultContent',\n", "            ]\n", "        }\n", "    },\n", "    \"OutputConfig\": {\n", "        'S3OutputPath': f\"s3://{BUCKET}/{EXP_NAME}/output/\",\n", "    },\n", "    \"HumanTaskConfig\": human_task_config,\n", "    \"LabelingJobName\": job_name,\n", "    \"RoleArn\": role,\n", "    \"LabelAttributeName\": \"category-ref\",\n", "    \"LabelCategoryConfigS3Uri\": f\"s3://{BUCKET}/{EXP_NAME}/class_labels.json\",\n", "}\n", "\n", "sagemaker_client = boto3.client(\"sagemaker\")\n", "sagemaker_client.create_labeling_job(**ground_truth_request)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4ecd70d0-12b8-4abc-9d20-ea261fb3c7e2", "metadata": {}, "source": [ "### 3.7 Monitor the Job Progress\n", "You can monitor the job's progress through the AWS Console. In this notebook, we will use Ground Truth output files and CloudWatch logs to monitor the progress. You can re-run the next cell repeatedly. It sends a `describe_labeling_job` request which should tell you whether the job is completed or not. If it is, then 'LabelingJobStatus' will be 'Completed'." ] }, { "cell_type": "code", "execution_count": null, "id": "118823c9-dcd9-43a9-884c-f7852cb8eeec", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 10\n", "\n", "# Re-run repeatedly. It sends a `describe_labeling_job` request which should tell you whether the job is completed or not. 
If it is, then 'LabelingJobStatus' will be 'Completed'.\n", "while sagemaker_client.describe_labeling_job(LabelingJobName=job_name)['LabelingJobStatus'] == 'InProgress':\n", "    job_status = sagemaker_client.describe_labeling_job(LabelingJobName=job_name)['LabelingJobStatus']\n", "    print('Labeling job : {}, status : {}'.format(job_name, job_status))\n", "    time.sleep(45)\n", "print('Labeling job : {}, status : {}'.format(job_name, sagemaker_client.describe_labeling_job(LabelingJobName=job_name)['LabelingJobStatus']))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "cdf4cfa5-c046-41ed-ae7f-683dfd511ab9", "metadata": {}, "source": [ "### 3.8 Preview the Worker UI Task\n", "Ground Truth provides workers with a web user interface (UI) to complete the video frame object tracking annotation tasks. You can preview and interact with the worker UI when you create a labeling job in the console.\n", "The UI provides workers with the following assistive labeling tools to complete your object tracking tasks:\n", "- For all tasks, workers can use the Copy to next and Copy to all features to copy an annotation with the same unique ID to the next frame or to all subsequent frames, respectively.\n", "- For tasks that include the bounding box tools, workers can use a Predict next feature to draw a bounding box in a single frame, and then have Ground Truth predict the location of boxes with the same unique ID in all other frames. Workers can then make adjustments to correct predicted box locations.\n", "The Amazon SageMaker Developer Guide shows how a worker might use the worker UI with the bounding box tool to complete your object tracking tasks." ] }, { "attachments": {}, "cell_type": "markdown", "id": "8a11017e-31f9-48bb-bb19-6a3b5f2c055b", "metadata": { "tags": [] }, "source": [ "### 3.9 View the Task Results\n", "Once work is completed, Amazon SageMaker Ground Truth stores results in your S3 bucket and sends a CloudWatch event. 
Your results should be available in the S3 OUTPUT_PATH when all work is completed." ] }, { "cell_type": "code", "execution_count": null, "id": "13facda7-4a6a-41fd-9892-9f38e6991fad", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 11\n", "\n", "# output path\n", "S3_OUTPUT = boto3.client('sagemaker').describe_labeling_job(LabelingJobName=job_name)['OutputConfig']['S3OutputPath'] + job_name\n", "print('S3 OUTPUT_PATH : {}'.format(S3_OUTPUT))\n", "\n", "# Download human annotation data.\n", "!aws s3 cp {S3_OUTPUT + '/manifests/output/output.manifest'} \"./output/\"" ] }, { "cell_type": "code", "execution_count": null, "id": "2e353151-4609-4b08-8cc7-322b843c9c02", "metadata": {}, "outputs": [], "source": [ "# cell 12\n", "\n", "# Each line of the output manifest is a JSON object describing one labeled data object.\n", "data = []\n", "with open('./output/output.manifest') as f:\n", "    for line in f:\n", "        data.append(json.loads(line))\n", "\n", "print(data)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "91af3912-0cfb-4240-80f6-ba85eaf8e147", "metadata": {}, "source": [ "## 4. Clean Up - Optional\n", "Finally, let's clean up: stop the labeling job if it is still in progress." ] }, { "cell_type": "code", "execution_count": null, "id": "f6327ec2-1d54-47ef-88b7-acc78001b71c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# cell 13\n", "\n", "if sagemaker_client.describe_labeling_job(LabelingJobName=job_name)['LabelingJobStatus'] == 'InProgress':\n", "    sagemaker_client.stop_labeling_job(LabelingJobName=job_name)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c3466bfe-a942-45fb-8083-e686879e7090", "metadata": { "jp-MarkdownHeadingCollapsed": true, "tags": [] }, "source": [ "## The End!" 
] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 
11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", 
"vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": 
"ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 
256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": 
false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "toc-autonumbering": false, "toc-showcode": false, "toc-showmarkdowntxt": false, "toc-showtags": false }, "nbformat": 4, "nbformat_minor": 5 }