{ "cells": [ { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "# Explaining Object Detection model with Amazon SageMaker Clarify\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "In this notebook, we deploy a pre-trained image detection model to showcase how you can use Amazon SagemaMaker Clarify explainability features for Computer Vision, specifically for object detection models including your own ones.\n", "\n", " 1. We first import a model from the Gluon model zoo locally on the notebook, that we then compress and send to S3\n", " 1. We then use the SageMaker MXNet Serving feature to deploy the model to a managed SageMaker endpoint. It uses the model artifact that we previously loaded to S3.\n", " 1. We query the endpoint and visualize detection results\n", " 1. We explain the predictions of the model using Amazon SageMaker Clarify.\n", " \n", "This notebook can be run with the `conda_python3` Kernel.\n", "\n", "\n", "## More on Amazon SageMaker Clarify:\n", "\n", "Amazon SageMaker Clarify helps improve your machine learning models by detecting potential bias and helping explain how these models make predictions. The fairness and explainability functionality provided by SageMaker Clarify takes a step towards enabling AWS customers to build trustworthy and understandable machine learning models. The product comes with the tools to help you with the following tasks. \n", "\n", "Measure biases that can occur during each stage of the ML lifecycle (data collection, model training and tuning, and monitoring of ML models deployed for inference).\n", "Generate model governance reports targeting risk and compliance teams and external regulators.\n", "Provide explanations of the data, models, and monitoring used to assess predictions for input containing data of various modalities like numerical data, categorical data, text, and images.\n", "Learn more about SageMaker Clarify here: [https://aws.amazon.com/sagemaker/clarify/](https://aws.amazon.com/sagemaker/clarify/).\n", "\n", "\n", "## More on `Gluon` and `Gluon CV`:\n", " * [Gluon](https://mxnet.incubator.apache.org/api/python/docs/api/gluon/index.html) is the imperative python front-end of the Apache MXNet deep learning framework. Gluon notably features specialized toolkits helping reproducing state-of-the-art architectures: [Gluon-CV](https://gluon-cv.mxnet.io/), [Gluon-NLP](https://gluon-nlp.mxnet.io/), [Gluon-TS](https://gluon-ts.mxnet.io/). Gluon also features a number of excellent end-to-end tutorials mixing science with code such as [D2L.ai](https://classic.d2l.ai/) and [The Straight Dope](https://gluon.mxnet.io/)\n", " * [Gluon-CV](https://gluon-cv.mxnet.io/contents.html) is an efficient computer vision toolkit written on top of `Gluon` and MXNet aiming to make state-of-the-art vision research reproducible. 
\n", "\n", "**This sample is provided for demonstration purposes, make sure to conduct appropriate testing if derivating this code for your own use-cases!**\n", "\n", "\n", "## Index:\n", "1. Test a pre-trained detection model, locally\n", "1. Instantiate model\n", "1. Create endpoint and get predictions (optional)\n", "1. Run Clarify and interpret predictions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! pip install -r requirements.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start by installing the latest version of the SageMaker Python SDK, boto, and AWS CLI." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! pip install sagemaker botocore boto3 awscli --upgrade" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import datetime\n", "import json\n", "import math\n", "import os\n", "import shutil\n", "from subprocess import check_call\n", "import tarfile\n", "\n", "from PIL import Image\n", "import numpy as np\n", "from matplotlib import pyplot as plt\n", "\n", "import boto3\n", "import botocore\n", "\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "from sagemaker.mxnet.model import MXNetModel\n", "\n", "import gluoncv\n", "from gluoncv import model_zoo, data, utils\n", "import mxnet as mx\n", "from mxnet import gluon, image, nd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sm_sess = sagemaker.Session()\n", "sm_client = boto3.client(\"sagemaker\")\n", "\n", "s3_bucket = (\n", " sm_sess.default_bucket()\n", ") # We use this bucket to store model weights - don't hesitate to change.\n", "print(f\"using bucket {s3_bucket}\")\n", "\n", "# For a sagemaker notebook\n", "sm_role = sagemaker.get_execution_role()\n", "# Override the role if you are executing locally:\n", "# sm_role = \"arn:aws:iam:::role/service-role/AmazonSageMaker-ExecutionRole\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Constants" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "TEST_IMAGE_DIR = \"caltech\" # directory with test images\n", "MODEL_NAME = \"yolo3_darknet53_coco\"\n", "S3_KEY_PREFIX = \"clarify_object_detection\" # S3 Key to store model artifacts\n", "ENDPOINT_INSTANCE_TYPE = \"ml.g4dn.xlarge\"\n", "ANALYZER_INSTANCE_TYPE = \"ml.c5.xlarge\"\n", "ANALYZER_INSTANCE_COUNT = 1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def gen_unique_name(model_name: str):\n", " # Generate a unique name for this user / host combination\n", " import hashlib\n", " import socket\n", " import getpass\n", "\n", " user = getpass.getuser()\n", " host = socket.gethostname()\n", " h = hashlib.sha256()\n", " h.update(user.encode())\n", " h.update(host.encode())\n", " res = model_name + \"-\" + h.hexdigest()[:8]\n", " res = res.replace(\"_\", \"-\").replace(\".\", \"\")\n", " return res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test a pre-trained detection model, locally\n", "[Gluon model zoo](https://cv.gluon.ai/model_zoo/index.html) contains a variety of models.\n", "In this demo we use a YoloV3 detection model (Redmon et Farhadi). 
More about YoloV3:\n", "* Paper https://pjreddie.com/media/files/papers/YOLOv3.pdf\n", "* Website https://pjreddie.com/darknet/yolo/\n", "\n", "The Gluon CV model zoo contains a number of architectures with different trade-offs in terms of speed and accuracy. If you need more of either, don't hesitate to change the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "net = model_zoo.get_model(MODEL_NAME, pretrained=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model we downloaded above is trained on the COCO dataset and can detect 80 classes. In this demo, we restrict the model to detect only specific classes of interest.\n", "This idea is derived from the official Gluon CV tutorial: https://gluon-cv.mxnet.io/build/examples_detection/skip_fintune.html\n", "\n", "\n", "COCO contains the following classes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"coco classes: \", sorted(net.classes))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# in this demo we reset the detector to the following classes\n", "classes = [\"dog\", \"elephant\", \"zebra\", \"bear\"]\n", "net.reset_class(classes=classes, reuse_weights=classes)\n", "print(\"new classes: \", net.classes)\n", "net.hybridize() # hybridize to optimize computation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Get RGB images from the Caltech 256 dataset `[Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.]`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import urllib.request\n", "import os\n", "\n", "list_of_images = [\n", " \"009.bear/009_0001.jpg\",\n", " \"009.bear/009_0002.jpg\",\n", " \"056.dog/056_0023.jpg\",\n", " \"056.dog/056_0001.jpg\",\n", " \"064.elephant-101/064_0003.jpg\",\n", " \"064.elephant-101/064_0004.jpg\",\n", " \"064.elephant-101/064_0006.jpg\",\n", " \"250.zebra/250_0001.jpg\",\n", " \"250.zebra/250_0002.jpg\",\n", "]\n", "\n", "source_url = f\"https://sagemaker-example-files-prod-{sm_sess.boto_region_name}.s3.amazonaws.com/datasets/image/caltech-256/256_ObjectCategories/\"\n", "\n", "if not os.path.exists(TEST_IMAGE_DIR):\n", " os.makedirs(TEST_IMAGE_DIR)\n", "\n", "for image_name in list_of_images:\n", " url = source_url + image_name\n", " file_name = TEST_IMAGE_DIR + \"/\" + image_name.replace(\"/\", \"_\")\n", " urllib.request.urlretrieve(url, file_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Test locally" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import glob\n", "\n", "test_images = glob.glob(f\"{TEST_IMAGE_DIR}/*.jpg\")\n", "test_images" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "`gluoncv` comes with built-in pre-processing logic for popular detectors, including YoloV3:\n", "\n", "https://gluon-cv.mxnet.io/_modules/gluoncv/data/transforms/presets/yolo.html\n", "\n", "https://gluon-cv.mxnet.io/build/examples_detection/demo_yolo.html" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how the network computes detections on a single image. We first have to resize and reshape it, since the original image is loaded with channels in the last dimension, while MXNet expects a shape of (num_batches, channels, height, width)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, 
"outputs": [], "source": [ "transformed_image, _ = data.transforms.presets.yolo.transform_test(image.imread(test_images[-1]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The network returns 3 tensors: class_ids, scores and bounding boxes. The default is up to 100 detections, so we get tensor with shape (num batches, detections, ...) where the last dimension is 4 for the bounding boxes as we have upper right corner, and lower right corner coordinates." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(cids, scores, bboxs) = net(transformed_image)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cids.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scores.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bboxs.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bboxs[:, 0, :]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_pics = len(test_images)\n", "n_cols = 3\n", "n_rows = max(math.ceil(n_pics / n_cols), 2)\n", "fig, axes = plt.subplots(n_rows, n_cols, figsize=(15, 15))\n", "[ax.axis(\"off\") for ax_dim in axes for ax in ax_dim]\n", "for i, pic in enumerate(test_images):\n", " curr_col = i % n_cols\n", " curr_row = i // n_cols\n", " # download and pre-process image\n", " print(pic)\n", " im_array = image.imread(pic)\n", " x, orig_img = data.transforms.presets.yolo.transform_test(im_array)\n", " # forward pass and display\n", " box_ids, scores, bboxes = net(x)\n", " ax = utils.viz.plot_bbox(\n", " orig_img,\n", " bboxes[0],\n", " scores[0],\n", " box_ids[0],\n", " class_names=classes,\n", " thresh=0.9,\n", " ax=axes[curr_row, curr_col],\n", " )\n", " ax.axis(\"off\")\n", " ax.set_title(pic, pad=15)\n", "fig.tight_layout()\n", "fig.show();" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy the detection server\n", " 1. We first need to **send the model to S3**, as we will provide the S3 model path to Amazon SageMaker endpoint creation API\n", " 1. We create a **serving script** containing model deserialization code and inference logic. This logic is in the `repo` folder.\n", " 1. We **deploy the endpoint** with a SageMaker SDK call" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Save local model, compress and send to S3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Clarify needs a model since it will spin up its own inference endpoint to get explanations. We will now export the local model, archieve it and then create a **SageMaker model** from this archieve which allows to create other resources that depend on this model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# save the full local model (both weights and graph)\n", "net.export(MODEL_NAME, epoch=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# compress into a tar file\n", "model_file = \"model.tar.gz\"\n", "tar = tarfile.open(model_file, \"w:gz\")\n", "tar.add(\"{}-symbol.json\".format(MODEL_NAME))\n", "tar.add(\"{}-0000.params\".format(MODEL_NAME))\n", "tar.close()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# upload to s3\n", "model_data_s3_uri = sm_sess.upload_data(model_file, key_prefix=S3_KEY_PREFIX)\n", "model_data_s3_uri" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Instantiate model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use batching of images on the predictor entry_point in order to achieve higher performance as utilization of resources is better than one image at a time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = MXNetModel(\n", " model_data=model_data_s3_uri,\n", " role=sm_role,\n", " py_version=\"py37\",\n", " entry_point=\"detection_server_batch.py\",\n", " source_dir=\"repo\",\n", " framework_version=\"1.8.0\",\n", " sagemaker_session=sm_sess,\n", ")\n", "\n", "container_def = model.prepare_container_def(instance_type=ENDPOINT_INSTANCE_TYPE)\n", "model_name = gen_unique_name(MODEL_NAME)\n", "sm_sess.create_model(role=sm_role, name=model_name, container_defs=[container_def])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## (Optional) Create endpoint and get predictions, model IO in depth" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this optional section we deploy an endpoint to get predictions and dive deep into details that can be helpful to troubleshot issues related to expected model IO format of predictions, serialization and tensor shapes. 
\n", "\n", "Common pitfalls are usually solved by making sure we are using the right serializer and deserializer and that the model output conforms to the expectations of Clarify in terms of shapes and semantics of the output tensors.\n", "\n", "In general, Clarify expectes that our model receieves a batch of images and outputs a batch of image detections with a tensor having the following elements: **class id, prediction score and normalized bounding box of the detection.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "endpoint_name = gen_unique_name(MODEL_NAME)\n", "endpoint_name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Delete any previous enpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " sm_sess.delete_endpoint(endpoint_name)\n", "except:\n", " pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Delete any stale endpoint config" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " sm_sess.delete_endpoint_config(endpoint_name)\n", "except botocore.exceptions.ClientError as e:\n", " print(e)\n", " pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deploy the model in a SageMaker endpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker.serializers\n", "import sagemaker.deserializers\n", "\n", "print(model.name)\n", "predictor = model.deploy(\n", " initial_instance_count=1,\n", " instance_type=ENDPOINT_INSTANCE_TYPE,\n", " endpoint_name=endpoint_name,\n", " serializer=sagemaker.serializers.NumpySerializer(),\n", " deserializer=sagemaker.deserializers.JSONDeserializer(),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.deserializer" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.serializer" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.accept" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's go in detail on how the detection server works, let's take the following test image as an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "im = Image.open(test_images[0])\n", "im" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since we overrode the `transform_fn` making it support batches and normalizing the detection boxes, we feed a tensor with a single batch, H, W and the 3 color channels as input" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "im_np = np.array([np.asarray(im)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "im_np.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(H, W) = im_np.shape[1:3]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "(H, W)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Send the image to the predictor and get detections" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tensor = np.array(predictor.predict(im_np))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tensor" ] }, { "cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": [ "tensor.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our prediction has one batch, 3 detections and 6 elements containing class_id, score and normalized box with upper left corner, and lower left corner." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "box_scale = np.array([W, H, W, H])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To display the detections we undo the normalization and split the detection format that clarify uses so we use the gluon plot_bbox function with the non-normalized boxes and separate scores and class ids from detections" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "box_scale" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "numdet = tensor.shape[1]\n", "cids = np.zeros(numdet)\n", "scores = np.zeros(numdet)\n", "bboxes = np.zeros((numdet, 4))\n", "for i, det in enumerate(tensor[0]):\n", " cids[i] = det[0]\n", " scores[i] = det[1]\n", " bboxes[i] = det[2:]\n", " bboxes[i] *= box_scale\n", " bboxes[i] = bboxes[i].astype(\"int\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bboxes" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scores" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "utils.viz.plot_bbox(np.asarray(im), bboxes, scores, cids, class_names=classes, thresh=0.8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can group the logic above in a function to make it more convenient to use" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def detect(pic, predictor):\n", " \"\"\"elementary function to send a picture to a predictor\"\"\"\n", " im = Image.open(pic)\n", " im = im.convert(\"RGB\")\n", " im_np = np.array([np.asarray(im)])\n", " (h, w) = im_np.shape[1:3]\n", " prediction = np.array(predictor.predict(im_np))\n", " box_scale = np.array([w, h, w, h])\n", " numdet = prediction.shape[1]\n", " cids = np.zeros(numdet)\n", " scores = np.zeros(numdet)\n", " bboxes = np.zeros((numdet, 4))\n", " for i, det in enumerate(prediction[0]):\n", " cids[i] = det[0]\n", " scores[i] = det[1]\n", " bboxes[i] = det[2:]\n", " bboxes[i] *= box_scale\n", " bboxes[i] = bboxes[i].astype(\"int\")\n", " return (cids, scores, bboxes)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "pic = test_images[0]\n", "cids, scores, bboxes = detect(pic, predictor)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cids" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bboxes" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# for local viz we need to resize local pic to the server-side resize\n", "_, orig_img = data.transforms.presets.yolo.load_test(pic)\n", "utils.viz.plot_bbox(orig_img, bboxes, scores, cids, class_names=classes, thresh=0.9)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cids" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There's a single detection of a dog which is class index 0 as in the beginning of the notebook where we called `reset_class`" ] 
}, { "cell_type": "markdown", "metadata": {}, "source": [ "## Amazon Sagemaker Clarify\n", "\n", "We will now showcase how to use SageMaker Clarify to explain detections by the model, for that we have already done some work in `detection_server_batch.py` to filter out missing detections with index `-1` and we have normalized the boxes to the image dimensions. We only need to upload the data to s3, provide the configuration for Clarify in the `analysis_config.json` describing the explainability job parameters and execute the processing job with the data and configuration as inputs. As a result, we will get in S3 the explanation for the detections of the model.\n", "\n", "Clarify expects detections to be in the format explored in the cells above. Detections should come in a tensor of shape `(num_images, batch, detections, 6)`. The first number of each detection is the predicted class label. The second number is the associated confidence score for the detection. The last four numbers represent the bounding box coordinates `[xmin / w, ymin / h, xmax / w, ymax / h]`. These output bounding box corner indices are normalized by the overall image size dimensions, where `w` is the width of the image, and `h` is the height." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Upload some test images to get explanations" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3_test_images = f\"{S3_KEY_PREFIX}/test_images\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!mkdir -p test_images\n", "!cp {TEST_IMAGE_DIR}/009.bear_009_0002.jpg test_images\n", "!cp {TEST_IMAGE_DIR}/064.elephant-101_064_0003.jpg test_images" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataset_uri = sm_sess.upload_data(\"test_images\", key_prefix=s3_test_images)\n", "dataset_uri" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use this noise image as a baseline to mask different segments of the image during the explainability process" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "baseline_uri = sm_sess.upload_data(\"noise_rgb.png\", key_prefix=S3_KEY_PREFIX)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's very important that `predictor.content_type` and `predictor.accept_type` in the json fields below match the sagemaker python sdk `predictor.serializer` and `predictor.deserializer` class instances above such as `sagemaker.serializers.NumpySerializer` so Clarify job can use the right (de)serializer." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Clarify job configuration for object detection type of models\n", "We will configure important parameters of the Clarify job for object detection under `image_config`:\n", "\n", " * **num_samples**: This number determines the size of the generated synthetic dataset to compute the SHAP values. 
More samples will produce more accurate explanations but will consume more computational resources.\n", " * **baseline**: image that will be used to mask segments during Kernel SHAP\n", " * **num_segments**: number of segments to partition the detection image into\n", " * **max_objects**: maximum number of detected objects to consider, taken in decreasing order of predicted score\n", " * **iou_threshold**: minimum intersection over union (IOU) for matching predictions against the original detections, as detection boxes will shift during masking\n", " * **context**: whether to mask the image background when running SHAP; takes values 0 or 1\n", "\n", "\n", "
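\n", "\n", "For reference, the SDK classes used below generate and upload an analysis configuration file for us; a rough, illustrative sketch of what such a configuration could contain is shown in the next cell (field names are assumptions based on the analysis configuration documentation linked below, not output captured from a real job):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch of an object-detection analysis configuration;\n", "# the authoritative schema is in the Clarify analysis configuration docs.\n", "example_analysis_config = {\n", "    \"dataset_type\": \"application/x-image\",\n", "    \"predictor\": {\n", "        \"model_name\": \"<sagemaker-model-name>\",\n", "        \"instance_type\": \"ml.g4dn.xlarge\",\n", "        \"initial_instance_count\": 1,\n", "        \"content_type\": \"application/x-npy\",\n", "    },\n", "    \"methods\": {\n", "        \"shap\": {\n", "            \"baseline\": \"<s3-uri-of-the-noise-baseline-image>\",\n", "            \"num_samples\": 500,\n", "            \"image_config\": {\n", "                \"model_type\": \"OBJECT_DETECTION\",\n", "                \"num_segments\": 20,\n", "                \"segment_compactness\": 5,\n", "                \"max_objects\": 5,\n", "                \"iou_threshold\": 0.5,\n", "                \"context\": 1.0,\n", "            },\n", "        }\n", "    },\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "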
\n", "\n", "Below we use the [Sagemaker Python SDK](https://sagemaker.readthedocs.io/en/stable/api/training/processing.html?highlight=clarify#module-sagemaker.clarify) which helps create an [Analysis configuration](https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-configure-processing-jobs.html) but using higher level Python classes.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from sagemaker.clarify import (\n", " SageMakerClarifyProcessor,\n", " ModelConfig,\n", " DataConfig,\n", " SHAPConfig,\n", " ImageConfig,\n", " ModelPredictedLabelConfig,\n", ")\n", "from sagemaker.utils import unique_name_from_base" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Configure parameters of the Clarify Processing job. The job has one input, the config file and one output, the resulting analysis of the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "analyzer_instance_count = 1\n", "analyzer_instance_type = \"ml.c5.xlarge\"\n", "output_bucket = sm_sess.default_bucket()\n", "# Here we specify where to store the results.\n", "analysis_result_path = \"s3://{}/{}/{}\".format(output_bucket, S3_KEY_PREFIX, \"cv_analysis_result\")\n", "\n", "clarify_processor: SageMakerClarifyProcessor = SageMakerClarifyProcessor(\n", " role=sm_role,\n", " instance_count=analyzer_instance_count,\n", " instance_type=analyzer_instance_type,\n", " max_runtime_in_seconds=3600,\n", " sagemaker_session=sm_sess,\n", ")\n", "\n", "model_config: ModelConfig = ModelConfig(\n", " model_name=model_name,\n", " instance_count=1,\n", " instance_type=ENDPOINT_INSTANCE_TYPE,\n", " content_type=\"application/x-npy\",\n", ")\n", "\n", "\n", "data_config: DataConfig = DataConfig(\n", " s3_data_input_path=dataset_uri,\n", " s3_output_path=analysis_result_path,\n", " dataset_type=\"application/x-image\",\n", ")\n", "\n", "image_config: ImageConfig = ImageConfig(\n", " model_type=\"OBJECT_DETECTION\",\n", " feature_extraction_method=\"segmentation\",\n", " num_segments=20,\n", " segment_compactness=5,\n", " max_objects=5,\n", " iou_threshold=0.5,\n", " context=1.0,\n", ")\n", "\n", "shap_config: SHAPConfig = SHAPConfig(\n", " baseline=baseline_uri,\n", " num_samples=500,\n", " image_config=image_config,\n", ")\n", "\n", "\n", "predictions_config = ModelPredictedLabelConfig(probability_threshold=0.8, label_headers=net.classes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now run the processing job, it will take approximately 6 minutes." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "clarify_processor.run_explainability(\n", " data_config=data_config,\n", " model_config=model_config,\n", " model_scores=predictions_config,\n", " explainability_config=shap_config,\n", " job_name=unique_name_from_base(\"clarify-cv-object-detection\"),\n", " wait=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We download the results of the Clarify job and inspect the attributions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "s3_client = boto3.client(\"s3\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!mkdir cv_analysis_result\n", "s3_client = boto3.client(\"s3\")\n", "for obj in s3_client.list_objects(\n", " Bucket=output_bucket, Prefix=S3_KEY_PREFIX + \"/cv_analysis_result\"\n", ")[\"Contents\"]:\n", " s3_client.download_file(\n", " output_bucket, obj[\"Key\"], \"cv_analysis_result/\" + obj[\"Key\"].split(\"/\")[-1]\n", " )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "im = Image.open(\"cv_analysis_result/shap_064.elephant-101_064_0003_box1_object.jpeg\")\n", "im" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "im = Image.open(\"cv_analysis_result/shap_064.elephant-101_064_0003_box2_object.jpeg\")\n", "im" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "im = Image.open(\"cv_analysis_result/064.elephant-101_064_0003_objects.jpeg\")\n", "im" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Cleanup of resources" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We delete the previous endpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sm_sess.delete_endpoint(endpoint_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/sagemaker-clarify|computer_vision|object_detection|object_detection_clarify.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General 
purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, 
"hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", 
"vcpuNum": 96 } ], "kernelspec": { "display_name": "Python 3 (MXNet 1.9 Python 3.8 CPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/mxnet-1.9-cpu-py38-ubuntu20.04-sagemaker-v1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 4 }