{ "cells": [ { "cell_type": "markdown", "id": "9fb44f6d", "metadata": {}, "source": [ "# Evaluate Falcon 40B for summarization with CNN Daily Mail and ROUGE\n", "**Please note, this is a modified version of the JumpStart notebook for Falcon 40B. Specifically we add the section on evaluation with ROUGE, in addition to the boto3 invocation**\n", "\n", "---\n", "In this demo notebook, we demonstrate how to use the SageMaker Python SDK to deploy Falcon models for text generation. It is a permissively licensed ([Apache-2.0](https://jumpstart-cache-prod-us-east-2.s3.us-east-2.amazonaws.com/licenses/Apache-License/LICENSE-2.0.txt)) open source model trained on the [RefinedWeb dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb). We show several example use cases including code generation, question answering, translation etc.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "9b05b931-992e-4526-978d-f03196874a3b", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install sagemaker --quiet --upgrade --force-reinstall\n", "!pip install ipywidgets==7.0.0 --quiet" ] }, { "cell_type": "code", "execution_count": null, "id": "c8dd6de9-0bc2-4d2c-b428-7203bb31fa3c", "metadata": { "jumpStartAlterations": [ "modelIdVersion" ], "tags": [] }, "outputs": [], "source": [ "model_id, model_version, = (\n", " \"huggingface-llm-falcon-40b-instruct-bf16\",\n", " \"*\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "70215fdd", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "tags": [] }, "outputs": [], "source": [ "from ipywidgets import Dropdown\n", "\n", "model_ids = ['huggingface-llm-falcon-40b-bf16',\n", " 'huggingface-llm-falcon-40b-instruct-bf16',\n", " 'huggingface-llm-falcon-7b-bf16',\n", " 'huggingface-llm-falcon-7b-instruct-bf16']\n", "\n", "# display the model-ids in a dropdown to select a model for inference.\n", "model_dropdown = Dropdown(\n", " options=model_ids,\n", " value=model_id,\n", " description=\"Select a model\",\n", " style={\"description_width\": \"initial\"},\n", " layout={\"width\": \"max-content\"},\n", ")\n", "display(model_dropdown)" ] }, { "cell_type": "code", "execution_count": null, "id": "5970aa71", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "tags": [] }, "outputs": [], "source": [ "model_id = model_dropdown.value" ] }, { "cell_type": "code", "execution_count": null, "id": "85a2a8e5-789f-4041-9927-221257126653", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "from sagemaker.jumpstart.model import JumpStartModel\n", "\n", "my_model = JumpStartModel(model_id=model_id)\n", "predictor = my_model.deploy()" ] }, { "cell_type": "markdown", "id": "67abf8ea-16c7-4d55-8500-bfd2a16d1294", "metadata": {}, "source": [ "### Changing instance type\n", "---\n", "\n", "\n", "Models have been tested on the following instance types:\n", "\n", " - Falcon 7B and 7B instruct: `ml.g5.2xlarge`, `ml.g5.2xlarge`, `ml.g5.4xlarge`, `ml.g5.8xlarge`, `ml.g5.16xlarge`, `ml.g5.12xlarge`, `ml.g5.24xlarge`, `ml.g5.48xlarge`, `ml.p4d.24xlarge`\n", " - Falcon 40B and 40B instruct: `ml.g5.12xlarge`, `ml.g5.48xlarge`, `ml.p4d.24xlarge`\n", "\n", "If an instance type is not available in you region, please try a different instance. 
{ "cell_type": "markdown", "id": "67abf8ea-16c7-4d55-8500-bfd2a16d1294", "metadata": {}, "source": [ "### Changing instance type\n", "---\n", "\n", "Models have been tested on the following instance types:\n", "\n", " - Falcon 7B and 7B instruct: `ml.g5.2xlarge`, `ml.g5.4xlarge`, `ml.g5.8xlarge`, `ml.g5.16xlarge`, `ml.g5.12xlarge`, `ml.g5.24xlarge`, `ml.g5.48xlarge`, `ml.p4d.24xlarge`\n", " - Falcon 40B and 40B instruct: `ml.g5.12xlarge`, `ml.g5.48xlarge`, `ml.p4d.24xlarge`\n", "\n", "If an instance type is not available in your region, please try a different instance. You can do so by specifying the instance type in the `JumpStartModel` class.\n", "\n", "`my_model = JumpStartModel(model_id=\"huggingface-llm-falcon-40b-instruct-bf16\", instance_type=\"ml.g5.12xlarge\")`\n", "\n", "---" ] },
{ "cell_type": "markdown", "id": "23b42484-1770-4084-887c-c48ffccc02cc", "metadata": {}, "source": [ "### Changing number of GPUs\n", "---\n", "Falcon models are served with the HuggingFace (HF) LLM DLC, which requires specifying the number of GPUs during model deployment.\n", "\n", "**Falcon 7B and 7B instruct:** The HF LLM DLC currently does not support sharding for the 7B models. Thus, even if more than one GPU is available on the instance, please do not increase the number of GPUs.\n", "\n", "**Falcon 40B and 40B instruct:** By default, the number of GPUs is set to 4. However, if you are using `ml.g5.48xlarge` or `ml.p4d.24xlarge`, you can increase the number of GPUs to 8 as follows:\n", "\n", "`my_model = JumpStartModel(model_id=\"huggingface-llm-falcon-40b-instruct-bf16\", instance_type=\"ml.g5.48xlarge\")`\n", "\n", "`my_model.env['SM_NUM_GPUS'] = '8'`\n", "\n", "`predictor = my_model.deploy()`\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "406b6155-d9e7-46e9-a6ac-70c7f7905c17", "metadata": { "tags": [] }, "outputs": [], "source": [ "import boto3\n", "\n", "client = boto3.client('sagemaker-runtime')" ] },
{ "cell_type": "code", "execution_count": null, "id": "3cbb73ce-60ab-4f72-af9d-e97a1e900ac2", "metadata": { "tags": [] }, "outputs": [], "source": [ "# if you are in the same Jupyter session with a defined predictor object, you can use\n", "# predictor.predict(payload) from the SageMaker Python SDK;\n", "# otherwise, if you are using a pre-created endpoint, reference it by name with boto3\n", "endpoint_name = 'hf-llm-falcon-40b-instruct-bf16-2023-07-10-15-19-42-754'" ] },
{ "cell_type": "code", "execution_count": null, "id": "2f2dd621-2c65-4827-a7af-1e13b0872677", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "import json\n", "\n", "# generation settings go under \"parameters\" for the HF LLM DLC\n", "data = {\n", "    \"inputs\": \"What is the purpose of life?\",\n", "    \"parameters\": {\n", "        \"max_new_tokens\": 150,\n", "        \"do_sample\": True,\n", "    },\n", "}\n", "\n", "response_model = client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    Body=json.dumps(data),\n", "    ContentType=\"application/json\",\n", ")\n", "\n", "response_model[\"Body\"].read().decode(\"utf8\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "c1a55aa5-f2ad-4db8-9718-b76f969cffbe", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "prompt = \"Tell me about Amazon SageMaker.\"\n", "\n", "payload = {\n", "    \"inputs\": prompt,\n", "    \"parameters\": {\n", "        \"do_sample\": True,\n", "        \"top_p\": 0.9,\n", "        \"temperature\": 0.8,\n", "        \"max_new_tokens\": 1024,\n", "        \"stop\": [\"<|endoftext|>\", \"</s>\"]\n", "    }\n", "}\n", "\n", "response_model = client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    Body=json.dumps(payload),\n", "    ContentType=\"application/json\",\n", ")\n", "\n", "response_model[\"Body\"].read().decode(\"utf8\")" ] },
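{ "cell_type": "markdown", "id": "3c4d5e6f", "metadata": {}, "source": [ "The endpoint returns a JSON-encoded list with one element per input, each holding its output under the `generated_text` key; this is the same structure the `query_endpoint` helper below relies on. A minimal sketch of decoding it, re-invoking the endpoint with the payload above:" ] },
{ "cell_type": "code", "execution_count": null, "id": "4d5e6f7a", "metadata": { "tags": [] }, "outputs": [], "source": [ "response_model = client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    Body=json.dumps(payload),\n", "    ContentType=\"application/json\",\n", ")\n", "\n", "# the body is a JSON list of dicts, one per input, each with a 'generated_text' key\n", "generated = json.loads(response_model[\"Body\"].read().decode(\"utf8\"))[0][\"generated_text\"]\n", "print(generated)" ] },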
{ "cell_type": "markdown", "id": "15da465e-b855-4249-93b8-f80a3627a62b", "metadata": { "jumpStartAlterations": [], "tags": [] }, "source": [ "### About the model\n", "\n", "---\n", "Falcon is a causal decoder-only model built by [Technology Innovation Institute](https://www.tii.ae/) (TII) and trained on more than 1 trillion tokens of RefinedWeb enhanced with curated corpora. It was built with custom tooling for data pre-processing and model training, running on Amazon SageMaker. As of June 6, 2023, it is the best open-source model available: Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, and others. For a comparison, see the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). It features an architecture optimized for inference, with FlashAttention and multiquery attention.\n", "\n", "\n", "[Refined Web Dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb): Falcon RefinedWeb is a massive English web dataset built by TII and released under an Apache 2.0 license. It is a highly filtered dataset with large-scale de-duplication of CommonCrawl. Models trained on RefinedWeb have been observed to achieve performance equal to or better than models trained on curated datasets, while relying only on web data.\n", "\n", "**Model Sizes:**\n", "- **Falcon-7B**: A 7 billion parameter model trained on 1.5 trillion tokens. It outperforms comparable open-source models (e.g., MPT-7B, StableLM, RedPajama). For a comparison, see the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). To use this model, please set `model_id` in the cell above to \"huggingface-llm-falcon-7b-bf16\".\n", "- **Falcon-40B**: A 40 billion parameter model trained on 1 trillion tokens. It has surpassed renowned models like LLaMA-65B, StableLM, RedPajama, and MPT on the public leaderboard maintained by Hugging Face, demonstrating exceptional performance without specialized fine-tuning. For a comparison, see the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n", "\n", "**Instruct models (Falcon-7B-instruct/Falcon-40B-instruct):** Instruct models are base Falcon models fine-tuned on a mixture of chat and instruction datasets. They are ready-to-use chat/instruct models. To use these models, please set `model_id` in the cell above to \"huggingface-llm-falcon-7b-instruct-bf16\" or \"huggingface-llm-falcon-40b-instruct-bf16\".\n", "\n", "It is [recommended](https://huggingface.co/tiiuae/falcon-7b) that instruct models be used as-is, without fine-tuning, and that base models be fine-tuned further on the specific task.\n", "\n", "**Limitations:**\n", "\n", "- Falcon models are mostly trained on English data and may not generalize to other languages.\n", "- Falcon carries the stereotypes and biases commonly encountered online and in its training data. Hence, it is recommended to develop guardrails and to take appropriate precautions for any production use. 
This is a raw, pretrained model, which should be further fine-tuned for most use cases.\n", "\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "719560fd-c9b2-4d4c-a7de-914b8aa72557", "metadata": { "tags": [] }, "outputs": [], "source": [ "def query_endpoint(payload, endpoint_name='hf-llm-falcon-40b-instruct-bf16-2023-07-10-15-19-42-754'):\n", "    # invoke the endpoint with a JSON payload and return the generated text\n", "    response_model = client.invoke_endpoint(\n", "        EndpointName=endpoint_name,\n", "        Body=json.dumps(payload),\n", "        ContentType=\"application/json\",\n", "    )\n", "\n", "    res = response_model[\"Body\"].read().decode(\"utf8\")\n", "\n", "    out = json.loads(res)\n", "\n", "    return out[0]['generated_text']" ] },
{ "cell_type": "code", "execution_count": null, "id": "7816dd22-fd8f-4374-ba9d-a62b941ebb16", "metadata": { "tags": [] }, "outputs": [], "source": [ "payload = {\n", "    \"inputs\": \"Building a website can be done in 10 simple steps:\",\n", "    \"parameters\": {\n", "        \"max_new_tokens\": 300,\n", "        \"no_repeat_ngram_size\": 3\n", "    }\n", "}\n", "print(query_endpoint(payload))" ] },
{ "cell_type": "code", "execution_count": null, "id": "c4aac6cc-d282-428a-b334-5dcdd505e480", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Translation\n", "payload = {\n", "    \"inputs\": \"\"\"Translate English to French:\n", "\n", "    sea otter => loutre de mer\n", "\n", "    peppermint => menthe poivrée\n", "\n", "    plush girafe => girafe peluche\n", "\n", "    cheese =>\"\"\",\n", "    \"parameters\": {\n", "        \"max_new_tokens\": 3\n", "    }\n", "}\n", "\n", "print(query_endpoint(payload))" ] },
{ "cell_type": "code", "execution_count": null, "id": "117b6645-56c5-4f0d-bbab-75bae8955943", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Sentiment analysis\n", "payload = {\n", "    \"inputs\": \"\"\"Tweet: \"I hate it when my phone battery dies.\"\n", "    Sentiment: Negative\n", "    ###\n", "    Tweet: \"My day has been :+1:\"\n", "    Sentiment: Positive\n", "    ###\n", "    Tweet: \"This is the link to the article\"\n", "    Sentiment: Neutral\n", "    ###\n", "    Tweet: \"This new music video was incredible\"\n", "    Sentiment:\"\"\",\n", "    \"parameters\": {\n", "        \"max_new_tokens\": 2\n", "    }\n", "}\n", "print(query_endpoint(payload))" ] },
{ "cell_type": "code", "execution_count": null, "id": "4594ede3-1272-4e56-8926-ccaf9e66f314", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Question answering\n", "payload = {\n", "    \"inputs\": \"Could you remind me when was the C programming language invented?\",\n", "    \"parameters\": {\n", "        \"max_new_tokens\": 50\n", "    }\n", "}\n", "print(query_endpoint(payload))" ] },
{ "cell_type": "code", "execution_count": null, "id": "fa8d82be-3c33-47c8-b0d7-d8675484b1d7", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Recipe generation\n", "payload = {\"inputs\": \"What is the recipe for a delicious lemon cheesecake?\", \"parameters\": {\"max_new_tokens\": 400}}\n", "print(query_endpoint(payload))" ] },
{ "cell_type": "code", "execution_count": null, "id": "d4f125c1-eae1-45ad-a065-651d9e13dfa8", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Summarization\n", "\n", "payload = {\n", "    \"inputs\": \"\"\"Starting today, the state-of-the-art Falcon 40B foundation model from Technology\n", "    Innovation Institute (TII) is available on Amazon SageMaker JumpStart, SageMaker's machine learning (ML) hub\n", "    that offers pre-trained models, built-in algorithms, and pre-built solution templates to help you quickly get\n", "    started with ML. 
You can deploy and use this Falcon LLM with a few clicks in SageMaker Studio or\n", "    programmatically through the SageMaker Python SDK.\n", "    Falcon 40B is a 40-billion-parameter large language model (LLM) available under the Apache 2.0 license that\n", "    ranked #1 on the Hugging Face Open LLM leaderboard, which tracks, ranks, and evaluates LLMs across multiple\n", "    benchmarks to identify top performing models. Since its release in May 2023, Falcon 40B has demonstrated\n", "    exceptional performance without specialized fine-tuning. To make it easier for customers to access this\n", "    state-of-the-art model, AWS has made Falcon 40B available to customers via Amazon SageMaker JumpStart.\n", "    Now customers can quickly and easily deploy their own Falcon 40B model and customize it to fit their specific\n", "    needs for applications such as translation, question answering, and summarizing information.\n", "    Falcon 40B is generally available today through Amazon SageMaker JumpStart in US East (Ohio),\n", "    US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai),\n", "    Europe (London), Europe (Frankfurt), Europe (Ireland), and Canada (Central),\n", "    with availability in additional AWS Regions coming soon. To learn how to use this new feature,\n", "    please see SageMaker JumpStart documentation, the Introduction to SageMaker JumpStart –\n", "    Text Generation with Falcon LLMs example notebook, and the blog Technology Innovation Institute trains the\n", "    state-of-the-art Falcon LLM 40B foundation model on Amazon SageMaker. Summarize the article above:\"\"\",\n", "    \"parameters\": {\n", "        \"max_new_tokens\": 60\n", "    }\n", "}\n", "print(query_endpoint(payload))" ] },
{ "cell_type": "markdown", "id": "ca31f9d6-735f-44ea-af7e-2aad43621903", "metadata": {}, "source": [ "# Evaluate the LLM for summarization performance with CNN DailyMail and ROUGE" ] },
{ "cell_type": "code", "execution_count": null, "id": "b0eaf502-5a8a-4eb5-aa54-a3d3ef8d9f44", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile requirements.txt\n", "torch\n", "transformers\n", "datasets\n", "evaluate\n", "rouge_score\n", "absl-py" ] },
{ "cell_type": "code", "execution_count": null, "id": "ce89eafa-ba7c-4676-be02-4f58480ae80d", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install -r requirements.txt" ] },
{ "cell_type": "code", "execution_count": null, "id": "61ce4e53-d397-4ff7-a8c5-1a3d9980a167", "metadata": { "tags": [] }, "outputs": [], "source": [ "from datasets import load_dataset\n", "\n", "dataset = load_dataset(\"cnn_dailymail\", '3.0.0')" ] },
{ "cell_type": "code", "execution_count": null, "id": "032ff2b5-bd5b-4aa4-b093-f7349a743c17", "metadata": { "tags": [] }, "outputs": [], "source": [ "dataset" ] },
{ "cell_type": "code", "execution_count": null, "id": "2c044394-f44c-4970-b75e-e8eab73a55cb", "metadata": { "tags": [] }, "outputs": [], "source": [ "article_id = 1\n", "article = dataset['train'][article_id]['article']" ] },
{ "cell_type": "code", "execution_count": null, "id": "1da76832-7891-4326-8b1c-f9073de08b85", "metadata": { "tags": [] }, "outputs": [], "source": [ "article" ] },
{ "cell_type": "code", "execution_count": null, "id": "351307ec-a53c-4572-aa42-8c96b90278f3", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Summarization\n", "\n", "char_cutoff = 2000\n", "\n", "payload = {\n", "    # truncate articles to the character cutoff, here and in the eval loop below\n", "    \"inputs\": f'{article[:char_cutoff]} Highlights:',\n", "
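    # the ' Highlights:' suffix cues the model to continue with CNN/DailyMail-style highlights\n", "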
\"parameters\":{\n", " \"max_new_tokens\":66\n", " }\n", " }\n", "predictions = query_endpoint(payload)" ] }, { "cell_type": "code", "execution_count": null, "id": "77bc87d0-d29e-4788-9df2-49f8f32bdb46", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(predictions)" ] }, { "cell_type": "code", "execution_count": null, "id": "2cfb521a-a077-4ac0-8d66-faf945ff95cf", "metadata": { "tags": [] }, "outputs": [], "source": [ "# these are not objective summaries. they are news highlights - picked specifically to optimize for clicks and making a story\n", "dataset['train'][article_id]['highlights']" ] }, { "cell_type": "code", "execution_count": null, "id": "8c7850ee-6a0c-48fc-aa18-e3773d8dc7c3", "metadata": { "tags": [] }, "outputs": [], "source": [ "import evaluate\n", "\n", "rouge = evaluate.load('rouge')" ] }, { "cell_type": "code", "execution_count": null, "id": "b5f36731-6abb-46de-943b-e0e35c74071a", "metadata": { "tags": [] }, "outputs": [], "source": [ "def equalize_rouge_inputs(predictions, label):\n", " if len(predictions) < len(label):\n", " label = label[:len(predictions)]\n", "\n", " elif len(label) < len(predictions):\n", " predictions = predictions[:len(label)]\n", "\n", " assert len(label) == len(predictions)\n", " \n", " return predictions, label" ] }, { "cell_type": "code", "execution_count": null, "id": "8a105296-c368-415b-92c1-2480b3ec5cfd", "metadata": { "tags": [] }, "outputs": [], "source": [ "predictions, label = equalize_rouge_inputs(predictions, label)\n", "\n", "results = rouge.compute(predictions=predictions,\n", " references=label)\n", "\n", "rouge1 = results['rouge1']" ] }, { "cell_type": "code", "execution_count": null, "id": "d7afef72-0f56-4a66-923f-16b14760ed67", "metadata": { "tags": [] }, "outputs": [], "source": [ "# 1 for rouge1 means the result is perfect\n", "results['rouge1']" ] }, { "cell_type": "code", "execution_count": null, "id": "57004c52-60ab-481c-a34a-0618d5aa4daa", "metadata": { "tags": [] }, "outputs": [], "source": [ "import time\n", "\n", "num_samples = 10\n", "\n", "for idx in range(num_samples):\n", " \n", " article = dataset['train'][idx]['article'][:char_cutoff]\n", " \n", " label = dataset['train'][idx]['highlights']\n", " \n", " payload = { \"inputs\":f'{article} Highlights:',\n", " \"parameters\":{\n", " # may want to parameterize this by the highlight length as well\n", " \"max_new_tokens\":66\n", " }\n", " }\n", " \n", " predictions = query_endpoint(payload)\n", " \n", " predictions, label = equalize_rouge_inputs(predictions, label)\n", "\n", " results = rouge.compute(predictions=predictions,\n", " references=label)\n", "\n", " rouge1 = results['rouge1']\n", "\n", " with open('summarization_results.txt', 'a') as f:\n", " f.write(f'=========== Article {idx}====== \\n')\n", " f.write(predictions)\n", " f.write(f' \\n ROUGE1: {rouge1} \\n')\n", " \n", " time.sleep(1)" ] }, { "cell_type": "markdown", "id": "132d3fee-5ce2-4ff2-93fe-5779762ce2cf", "metadata": {}, "source": [ "### Supported parameters\n", "\n", "***\n", "Some of the supported parameters while performing inference are the following:\n", "\n", "* **max_length:** Model generates text until the output length (which includes the input context length) reaches `max_length`. If specified, it must be a positive integer.\n", "* **max_new_tokens:** Model generates text until the output length (excluding the input context length) reaches `max_new_tokens`. If specified, it must be a positive integer.\n", "* **num_beams:** Number of beams used in the greedy search. 
{ "cell_type": "markdown", "id": "132d3fee-5ce2-4ff2-93fe-5779762ce2cf", "metadata": {}, "source": [ "### Supported parameters\n", "\n", "***\n", "Some of the parameters supported during inference are the following:\n", "\n", "* **max_length:** The model generates text until the output length (which includes the input context length) reaches `max_length`. If specified, it must be a positive integer.\n", "* **max_new_tokens:** The model generates text until the output length (excluding the input context length) reaches `max_new_tokens`. If specified, it must be a positive integer.\n", "* **num_beams:** Number of beams used in beam search. If specified, it must be an integer greater than or equal to `num_return_sequences`.\n", "* **no_repeat_ngram_size:** The model ensures that no word sequence of length `no_repeat_ngram_size` is repeated in the output sequence. If specified, it must be a positive integer greater than 1.\n", "* **temperature:** Controls the randomness of the output. A higher temperature yields output sequences with more low-probability words; a lower temperature yields output sequences with more high-probability words. As `temperature` approaches 0, generation becomes greedy decoding. If specified, it must be a positive float.\n", "* **early_stopping:** If True, text generation finishes when all beam hypotheses reach the end-of-sentence token. If specified, it must be a boolean.\n", "* **do_sample:** If True, the next word is sampled according to its likelihood. If specified, it must be a boolean.\n", "* **top_k:** In each step of text generation, sample from only the `top_k` most likely words. If specified, it must be a positive integer.\n", "* **top_p:** In each step of text generation, sample from the smallest possible set of words whose cumulative probability is `top_p`. If specified, it must be a float between 0 and 1.\n", "* **return_full_text:** If True, the input text is included in the generated output. If specified, it must be a boolean. Its default value is False.\n", "* **stop:** If specified, it must be a list of strings. Text generation stops if any one of the specified strings is generated.\n", "\n", "You may specify any subset of the parameters above when invoking an endpoint.\n", "\n", "For more parameters and information on the HF LLM DLC, please see [this article](https://huggingface.co/blog/sagemaker-huggingface-llm#4-run-inference-and-chat-with-our-model).\n", "***" ] },
{ "cell_type": "markdown", "id": "b04d99e2", "metadata": {}, "source": [ "### 
Clean up the endpoint" ] }, { "cell_type": "code", "execution_count": null, "id": "6c3b60c2", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Delete the SageMaker endpoint\n", "predictor.delete_model()\n", "predictor.delete_endpoint()" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", 
"gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, 
"hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", 
"vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }