{ "cells": [ { "cell_type": "markdown", "id": "4a1a52b6", "metadata": {}, "source": [ "# SageMaker JumpStart Foundation Models - HuggingFace Text2Text Generation" ] }, { "attachments": {}, "cell_type": "markdown", "id": "a5fe53da", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "5acea92d", "metadata": {}, "source": [ "---\n", "Welcome to Amazon [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)! You can use SageMaker JumpStart to solve many Machine Learning tasks through one-click in SageMaker Studio, or through [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html#use-prebuilt-models-with-sagemaker-jumpstart).\n", "\n", "\n", "In this demo notebook, we demonstrate how to use the SageMaker Python SDK for deploying Foundation Models as an endpoint and use them for various NLP tasks. The Foundation models perform **Text2Text Generation**. It takes a prompting text as an input, and returns the text generated by the model according to the prompt.\n", "\n", "Here, we show how to use the state-of-the-art pre-trained **[FLAN T5 models](https://huggingface.co/docs/transformers/model_doc/flan-t5)** and **[FLAN UL2](https://huggingface.co/google/flan-ul2)** for Text2Text Generation in the following tasks. You can directly use FLAN-T5 model for many NLP tasks, without fine-tuning the model.\n", "\n", "\n", "* Text summarization\n", "* Common sense reasoning / natural language inference\n", "* Question and answering\n", "* Sentence / sentiment classification\n", "* Translation\n", "* Pronoun resolution\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "815c0bc7", "metadata": {}, "source": [ "1. [Set Up](#1.-Set-Up)\n", "2. [Select a model](#2.-Select-a-model)\n", "3. [Retrieve Artifacts & Deploy an Endpoint](#3.-Retrieve-Artifacts-&-Deploy-an-Endpoint)\n", "4. [Query endpoint and parse response](#4.-Query-endpoint-and-parse-response)\n", "5. [Advanced features: How to use various parameters to control the generated text](#5.-Advanced-features:-How-to-use-various-advanced-parameters-to-control-the-generated-text)\n", "6. [Advanced features: How to use prompts engineering to solve different tasks](#6.-Advacned-features:-How-to-use-prompts-engineering-to-solve-different-tasks)\n", "5. [Clean up the endpoint](#5.-Clean-up-the-endpoint)" ] }, { "cell_type": "markdown", "id": "a7e35194", "metadata": {}, "source": [ "Note: This notebook was tested on ml.t3.medium instance in Amazon SageMaker Studio with Python 3 (Data Science) kernel and in Amazon SageMaker Notebook instance with conda_python3 kernel." ] }, { "cell_type": "markdown", "id": "d2f8dfad", "metadata": {}, "source": [ "### 1. Set Up" ] }, { "cell_type": "markdown", "id": "32f31be0", "metadata": {}, "source": [ "---\n", "Before executing the notebook, there are some initial steps required for set up. 
This notebook requires ipywidgets.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "eb67d497", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install ipywidgets==7.0.0 --quiet\n", "!pip install --upgrade sagemaker --quiet" ] }, { "cell_type": "markdown", "id": "769f5d81", "metadata": {}, "source": [ "#### Permissions and environment variables\n", "\n", "---\n", "To host on Amazon SageMaker, we need to set up and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook as the AWS account role with SageMaker access. \n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "67131eee", "metadata": { "tags": [] }, "outputs": [], "source": [ "import sagemaker, boto3, json\n", "from sagemaker.session import Session\n", "\n", "sagemaker_session = Session()\n", "aws_role = sagemaker_session.get_caller_identity_arn()\n", "aws_region = boto3.Session().region_name\n", "sess = sagemaker.Session()" ] }, { "cell_type": "markdown", "id": "69849d02", "metadata": {}, "source": [ "## 2. Select a pre-trained model\n", "***\n", "You can continue with the default model, or can choose a different model from the dropdown generated upon running the next cell. A complete list of SageMaker pre-trained models can also be accessed at [SageMaker pre-trained Models](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html#).\n", "***" ] }, { "cell_type": "code", "execution_count": null, "id": "652b2d4f", "metadata": { "jumpStartAlterations": [ "modelIdVersion" ], "tags": [] }, "outputs": [], "source": [ "model_id, model_version = (\n", " \"huggingface-text2text-flan-t5-xl\",\n", " \"*\",\n", ")" ] }, { "cell_type": "markdown", "id": "170e1228", "metadata": {}, "source": [ "***\n", "[Optional] Select a different SageMaker pre-trained model. Here, we download the model_manifest file from the Built-In Algorithms s3 bucket, filter-out all the Text Generation models and select a model for inference.\n", "***" ] }, { "cell_type": "code", "execution_count": null, "id": "0d8a1f7e", "metadata": { "tags": [] }, "outputs": [], "source": [ "from ipywidgets import Dropdown\n", "from sagemaker.jumpstart.notebook_utils import list_jumpstart_models\n", "\n", "# Retrieves all Text Generation models available by SageMaker Built-In Algorithms.\n", "filter_value = \"task == text2text\"\n", "text_generation_models = list_jumpstart_models(filter=filter_value)\n", "\n", "# display the model-ids in a dropdown to select a model for inference.\n", "model_dropdown = Dropdown(\n", " options=text_generation_models,\n", " value=model_id,\n", " description=\"Select a model\",\n", " style={\"description_width\": \"initial\"},\n", " layout={\"width\": \"max-content\"},\n", ")" ] }, { "cell_type": "markdown", "id": "a28d45e5", "metadata": {}, "source": [ "#### Choose a model for Inference" ] }, { "cell_type": "code", "execution_count": null, "id": "52b7a67a", "metadata": { "tags": [] }, "outputs": [], "source": [ "display(model_dropdown)" ] }, { "cell_type": "code", "execution_count": null, "id": "271642dc", "metadata": { "tags": [] }, "outputs": [], "source": [ "# model_version=\"*\" fetches the latest version of the model\n", "model_id, model_version = model_dropdown.value, \"*\"" ] }, { "cell_type": "markdown", "id": "0b08aa4a", "metadata": {}, "source": [ "### 3. 
Retrieve Artifacts & Deploy an Endpoint\n", "\n", "***\n", "\n", "Using SageMaker, we can perform inference on the pre-trained model, even without fine-tuning it first on a new dataset. We start by retrieving the `deploy_image_uri`, `deploy_source_uri`, and `model_uri` for the pre-trained model. To host the pre-trained model, we create an instance of [`sagemaker.model.Model`](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html) and deploy it. This may take a few minutes.\n", "\n", "***" ] }, { "cell_type": "code", "execution_count": null, "id": "c122ee7e-4180-4f4b-8cc9-cf60d3e2b6f2", "metadata": { "tags": [] }, "outputs": [], "source": [ "def get_sagemaker_session(local_download_dir) -> sagemaker.Session:\n", " \"\"\"Return the SageMaker session.\"\"\"\n", "\n", " sagemaker_client = boto3.client(\n", " service_name=\"sagemaker\", region_name=boto3.Session().region_name\n", " )\n", "\n", " session_settings = sagemaker.session_settings.SessionSettings(\n", " local_download_dir=local_download_dir\n", " )\n", "\n", " # the unit test will ensure you do not commit this change\n", " session = sagemaker.session.Session(\n", " sagemaker_client=sagemaker_client, settings=session_settings\n", " )\n", "\n", " return session" ] }, { "cell_type": "markdown", "id": "44051043-7405-457e-809f-a7c004646943", "metadata": {}, "source": [ "We need to create a directory to host the downloaded model. " ] }, { "cell_type": "code", "execution_count": null, "id": "5fcfc73e-00b1-4672-b327-99a477abfbc8", "metadata": { "tags": [] }, "outputs": [], "source": [ "!mkdir -p download_dir" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e06d1b55", "metadata": {}, "source": [ "---\n", "This text-to-text generation task supports a wide variety of model sizes that have different compute requirements. Here, we specify the instance type for several large models along with an environment variable to set the multi-model endpoint number of workers to 1. 
This ensures we can support the largest possible token lengths since additional models are not consuming GPU memory resources.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "63611d47", "metadata": {}, "outputs": [], "source": [ "_large_model_env = {\"SAGEMAKER_MODEL_SERVER_WORKERS\": \"1\", \"TS_DEFAULT_WORKERS_PER_MODEL\": \"1\"}\n", "\n", "_model_config_map = {\n", " \"huggingface-text2text-flan-t5-xxl\": {\n", " \"instance_type\": \"ml.g5.12xlarge\",\n", " \"env\": _large_model_env,\n", " },\n", " \"huggingface-text2text-flan-t5-xxl-fp16\": {\n", " \"instance_type\": \"ml.g5.12xlarge\",\n", " \"env\": _large_model_env,\n", " },\n", " \"huggingface-text2text-flan-t5-xxl-bnb-int8\": {\n", " \"instance_type\": \"ml.g5.xlarge\",\n", " \"env\": _large_model_env,\n", " },\n", " \"huggingface-text2text-flan-t5-xl\": {\n", " \"instance_type\": \"ml.g5.2xlarge\",\n", " \"env\": {\"MMS_DEFAULT_WORKERS_PER_MODEL\": \"1\"},\n", " },\n", " \"huggingface-text2text-flan-t5-large\": {\n", " \"instance_type\": \"ml.g5.2xlarge\",\n", " \"env\": {\"MMS_DEFAULT_WORKERS_PER_MODEL\": \"1\"},\n", " },\n", " \"huggingface-text2text-flan-ul2-bf16\": {\n", " \"instance_type\": \"ml.g5.12xlarge\",\n", " \"env\": _large_model_env,\n", " },\n", "}" ] }, { "cell_type": "code", "execution_count": null, "id": "631ae768", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker import image_uris, model_uris, script_uris, hyperparameters\n", "from sagemaker.model import Model\n", "from sagemaker.predictor import Predictor\n", "from sagemaker.utils import name_from_base\n", "\n", "\n", "endpoint_name = name_from_base(f\"jumpstart-example-{model_id}\")\n", "\n", "if model_id in _model_config_map:\n", " inference_instance_type = _model_config_map[model_id][\"instance_type\"]\n", "else:\n", " inference_instance_type = \"ml.g5.xlarge\"\n", "\n", "# Retrieve the inference docker container uri. This is the base HuggingFace container image for the default model above.\n", "deploy_image_uri = image_uris.retrieve(\n", " region=None,\n", " framework=None, # automatically inferred from model_id\n", " image_scope=\"inference\",\n", " model_id=model_id,\n", " model_version=model_version,\n", " instance_type=inference_instance_type,\n", ")\n", "\n", "# Retrieve the inference script uri. 
This includes all dependencies and scripts for model loading, inference handling etc.\n", "deploy_source_uri = script_uris.retrieve(\n", " model_id=model_id, model_version=model_version, script_scope=\"inference\"\n", ")\n", "\n", "# Retrieve the model uri.\n", "model_uri = model_uris.retrieve(\n", " model_id=model_id, model_version=model_version, model_scope=\"inference\"\n", ")\n", "\n", "# Create the SageMaker model instance\n", "if model_id in _model_config_map:\n", " # For those large models, we already repack the inference script and model\n", " # artifacts for you, so the `source_dir` argument to Model is not required.\n", " model = Model(\n", " image_uri=deploy_image_uri,\n", " model_data=model_uri,\n", " role=aws_role,\n", " predictor_cls=Predictor,\n", " name=endpoint_name,\n", " env=_model_config_map[model_id][\"env\"],\n", " )\n", "else:\n", " model = Model(\n", " image_uri=deploy_image_uri,\n", " source_dir=deploy_source_uri,\n", " model_data=model_uri,\n", " entry_point=\"inference.py\", # entry point file in source_dir and present in deploy_source_uri\n", " role=aws_role,\n", " predictor_cls=Predictor,\n", " name=endpoint_name,\n", " sagemaker_session=get_sagemaker_session(\"download_dir\"),\n", " )\n", "\n", "# deploy the Model. Note that we need to pass Predictor class when we deploy model through Model class,\n", "# for being able to run inference through the sagemaker API.\n", "model_predictor = model.deploy(\n", " initial_instance_count=1,\n", " instance_type=inference_instance_type,\n", " predictor_cls=Predictor,\n", " endpoint_name=endpoint_name,\n", ")" ] }, { "cell_type": "markdown", "id": "1f9c254b", "metadata": {}, "source": [ "### 4. Query endpoint and parse response\n", "\n", "---\n", "Input to the endpoint is any string of text formatted as json and encoded in `utf-8` format. Output of the endpoint is a `json` with generated text.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "439998c0", "metadata": { "tags": [] }, "outputs": [], "source": [ "newline, bold, unbold = \"\\n\", \"\\033[1m\", \"\\033[0m\"\n", "\n", "\n", "def query_endpoint(encoded_text, endpoint_name):\n", " client = boto3.client(\"runtime.sagemaker\")\n", " response = client.invoke_endpoint(\n", " EndpointName=endpoint_name, ContentType=\"application/x-text\", Body=encoded_text\n", " )\n", " return response\n", "\n", "\n", "def parse_response(query_response):\n", " model_predictions = json.loads(query_response[\"Body\"].read())\n", " generated_text = model_predictions[\"generated_text\"]\n", " return generated_text" ] }, { "cell_type": "markdown", "id": "fc5d644d", "metadata": {}, "source": [ "---\n", "Below, we put in some example input text. You can put in any text and the model predicts next words in the sequence. 
Longer sequences of text can be generated by calling the model repeatedly.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "8262dfcc", "metadata": { "tags": [] }, "outputs": [], "source": [ "newline, bold, unbold = \"\\n\", \"\\033[1m\", \"\\033[0m\"\n", "\n", "text1 = \"Translate to German: My name is Arthur\"\n", "text2 = \"A step by step recipe to make bolognese pasta:\"\n", "\n", "\n", "for text in [text1, text2]:\n", " query_response = query_endpoint(text.encode(\"utf-8\"), endpoint_name=endpoint_name)\n", " generated_text = parse_response(query_response)\n", " print(\n", " f\"Inference:{newline}\"\n", " f\"input text: {text}{newline}\"\n", " f\"generated text: {bold}{generated_text}{unbold}{newline}\"\n", " )" ] }, { "cell_type": "markdown", "id": "a2554851-cbcc-4ef9-864e-776a3550ceca", "metadata": { "tags": [] }, "source": [ "### 5. Advanced features: How to use various advanced parameters to control the generated text\n", "\n", "***\n", "This model also supports many advanced parameters while performing inference. They include:\n", "\n", "* **max_length:** Model generates text until the output length (which includes the input context length) reaches `max_length`. If specified, it must be a positive integer.\n", "* **num_return_sequences:** Number of output sequences returned. If specified, it must be a positive integer.\n", "* **num_beams:** Number of beams used in the greedy search. If specified, it must be integer greater than or equal to `num_return_sequences`.\n", "* **no_repeat_ngram_size:** Model ensures that a sequence of words of `no_repeat_ngram_size` is not repeated in the output sequence. If specified, it must be a positive integer greater than 1.\n", "* **temperature:** Controls the randomness in the output. Higher temperature results in output sequence with low-probability words and lower temperature results in output sequence with high-probability words. If `temperature` -> 0, it results in greedy decoding. If specified, it must be a positive float.\n", "* **early_stopping:** If True, text generation is finished when all beam hypotheses reach the end of sentence token. If specified, it must be boolean.\n", "* **do_sample:** If True, sample the next word as per the likelihood. If specified, it must be boolean.\n", "* **top_k:** In each step of text generation, sample from only the `top_k` most likely words. If specified, it must be a positive integer.\n", "* **top_p:** In each step of text generation, sample from the smallest possible set of words with cumulative probability `top_p`. If specified, it must be a float between 0 and 1.\n", "* **seed:** Fix the randomized state for reproducibility. If specified, it must be an integer.\n", "\n", "We may specify any subset of the parameters mentioned above while invoking an endpoint. 
Next, we show an example of how to invoke the endpoint with these arguments.\n", "\n", "***" ] }, { "cell_type": "code", "execution_count": null, "id": "1af6f7b0-4093-48c9-acdb-54b05886b2dc", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Input must be a JSON payload\n", "payload = {\n", " \"text_inputs\": \"Tell me the steps to make a pizza\",\n", " \"max_length\": 50,\n", " \"num_return_sequences\": 3,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "def query_endpoint_with_json_payload(encoded_json, endpoint_name):\n", " client = boto3.client(\"runtime.sagemaker\")\n", " response = client.invoke_endpoint(\n", " EndpointName=endpoint_name, ContentType=\"application/json\", Body=encoded_json\n", " )\n", " return response\n", "\n", "\n", "query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", ")\n", "\n", "\n", "def parse_response_multiple_texts(query_response):\n", " model_predictions = json.loads(query_response[\"Body\"].read())\n", " generated_text = model_predictions[\"generated_texts\"]\n", " return generated_text\n", "\n", "\n", "generated_texts = parse_response_multiple_texts(query_response)\n", "print(generated_texts)" ] }, { "cell_type": "markdown", "id": "c54977c4-d91a-4489-b137-26fa1d5f1f2d", "metadata": {}, "source": [ "### 6. Advanced features: How to use prompt engineering to solve different tasks\n", "\n", "Below, we demonstrate solving five key tasks with the Flan T5 model. The tasks are: **text summarization**, **common sense reasoning / question answering**, **sentence classification**, **translation**, **pronoun resolution**.\n", "\n", "\n", "Note: **The following sections are designed particularly for Flan T5 models (small, base, large, xl). Other models, such as T5-one-line-summary, are designed specifically for text summarization and cannot perform all of the following tasks.**" ] }, { "cell_type": "markdown", "id": "6ec5b958-1f20-4899-a02f-e3854e804c27", "metadata": {}, "source": [ "### 6.1. Summarization" ] }, { "cell_type": "markdown", "id": "68a1b142-bf45-4e4e-9535-57184ad986df", "metadata": {}, "source": [ "Define the text article you want to summarize." ] }, { "cell_type": "code", "execution_count": null, "id": "84c4893d-5673-428d-90b8-09583987151a", "metadata": {}, "outputs": [], "source": [ "text = \"\"\"Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents. For example, using Amazon Comprehend you can search social networking feeds for mentions of products or scan an entire document repository for key phrases. \n", "You can access Amazon Comprehend document analysis capabilities using the Amazon Comprehend console or using the Amazon Comprehend APIs. You can run real-time analysis for small workloads or you can start asynchronous analysis jobs for large document sets. You can use the pre-trained models that Amazon Comprehend provides, or you can train your own custom models for classification and entity recognition. \n", "All of the Amazon Comprehend features accept UTF-8 text documents as the input. In addition, custom classification and custom entity recognition accept image files, PDF files, and Word files as input. 
\n", "Amazon Comprehend can examine and analyze documents in a variety of languages, depending on the specific feature. For more information, see Languages supported in Amazon Comprehend. Amazon Comprehend's Dominant language capability can examine documents and determine the dominant language for a far wider selection of languages.\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "9e746d6f-317b-4cb4-a334-11b9b3eb55d5", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompts = [\n", " \"Briefly summarize this sentence: {text}\",\n", " \"Write a short summary for this text: {text}\",\n", " \"Generate a short summary this sentence:\\n{text}\",\n", " \"{text}\\n\\nWrite a brief summary in a sentence or less\",\n", " \"{text}\\nSummarize the aforementioned text in a single phrase.\",\n", " \"{text}\\nCan you generate a short summary of the above paragraph?\",\n", " \"Write a sentence based on this summary: {text}\",\n", " \"Write a sentence based on '{text}'\",\n", " \"Summarize this article:\\n\\n{text}\",\n", "]\n", "\n", "num_return_sequences = 3\n", "parameters = {\n", " \"max_length\": 50,\n", " \"num_return_sequences\": num_return_sequences,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "print(f\"{bold}Number of return sequences are set as {num_return_sequences}{unbold}{newline}\")\n", "for each_prompt in prompts:\n", " payload = {\"text_inputs\": each_prompt.replace(\"{text}\", text), **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} For prompt: '{each_prompt}'{unbold}{newline}\")\n", " print(f\"{bold} The {num_return_sequences} summarized results are{unbold}:{newline}\")\n", " for idx, each_generated_text in enumerate(generated_texts):\n", " print(f\"{bold}Result {idx}{unbold}: {each_generated_text}{newline}\")" ] }, { "cell_type": "markdown", "id": "1925b70a-548f-4a11-8912-c69004380583", "metadata": {}, "source": [ "### 6.2. Common sense reasoning / natural language inference\n", "\n", "In the common sense reasoning, you can design a prompt and combine it with the premise, hypothesis, and options, send the combined text into the endpoint to get an answer. Examples are demonstrated as below." ] }, { "cell_type": "markdown", "id": "55757d1d-a9e9-4307-8c1c-ff919474979e", "metadata": {}, "source": [ "Define the premise, hypothesis, and options that you hope the model to reason." 
] }, { "cell_type": "code", "execution_count": null, "id": "51d4f714-2484-4901-ada3-7c9cc6360881", "metadata": {}, "outputs": [], "source": [ "premise = \"The world cup has kicked off in Los Angeles, United States.\"\n", "hypothesis = \"The world cup takes place in United States.\"\n", "options = \"\"\"[\"yes\", \"no\"]\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "1322b063-7c56-45c3-9ab0-5cd69e3aa50a", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompts = [\n", " \"\"\"{premise}\\n\\nBased on the paragraph above can we conclude that \"\\\"{hypothesis}\\\"?\\n\\n{options_}\"\"\",\n", " \"\"\"{premise}\\n\\nBased on that paragraph can we conclude that this sentence is true?\\n{hypothesis}\\n\\n{options_}\"\"\",\n", " \"\"\"{premise}\\n\\nCan we draw the following conclusion?\\n{hypothesis}\\n\\n{options_}\"\"\",\n", " \"\"\"{premise}\\nDoes this next sentence follow, given the preceding text?\\n{hypothesis}\\n\\n{options_}\"\"\",\n", " \"\"\"{premise}\\nCan we infer the following?\\n{hypothesis}\\n\\n{options_}\"\"\",\n", " \"\"\"Read the following paragraph and determine if the hypothesis is true:\\n\\n{premise}\\n\\nHypothesis: {hypothesis}\\n\\n{options_}\"\"\",\n", " \"\"\"Read the text and determine if the sentence is true:\\n\\n{premise}\\n\\nSentence: {hypothesis}\\n\\n{options_}\"\"\",\n", " \"\"\"Can we draw the following hypothesis from the context? \\n\\nContext:\\n\\n{premise}\\n\\nHypothesis: {hypothesis}\\n\\n{options_}\"\"\",\n", " \"\"\"Determine if the sentence is true based on the text below:\\n{hypothesis}\\n\\n{premise}\\n{options_}\"\"\",\n", "]\n", "\n", "parameters = {\n", " \"max_length\": 50,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "for each_prompt in prompts:\n", " input_text = each_prompt.replace(\"{premise}\", premise)\n", " input_text = input_text.replace(\"{hypothesis}\", hypothesis)\n", " input_text = input_text.replace(\"{options_}\", options)\n", " print(f\"{bold} For prompt{unbold}: '{input_text}'{newline}\")\n", " payload = {\"text_inputs\": input_text, **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}\")" ] }, { "cell_type": "markdown", "id": "c00febfb-9f98-4f45-8204-a79f134ed17e", "metadata": {}, "source": [ "### 6.3. Question and Answering\n", "\n", "Now, let's try another reasoning task with a different type of prompt template. You can simply provide context and question as shown below." ] }, { "cell_type": "code", "execution_count": null, "id": "a0a08432-60ef-430f-b564-1c59c84bd2ee", "metadata": {}, "outputs": [], "source": [ "context = \"\"\"The newest and most innovative Kindle yet lets you take notes on millions of books and documents, write lists and journals, and more. \n", "\n", "For readers who have always wished they could write in their eBooks, Amazon’s new Kindle lets them do just that. 
The Kindle Scribe is the first Kindle for reading and writing and allows users to supplement their books and documents with notes, lists, and more.\n", "\n", "Here’s everything you need to know about the Kindle Scribe, including frequently asked questions.\n", "\n", "The Kindle Scribe makes it easy to read and write like you would on paper \n", "\n", "The Kindle Scribe features a 10.2-inch, glare-free screen (the largest of all Kindle devices), crisp 300 ppi resolution, and 35 LED front lights that automatically adjust to your environment. Further personalize your experience with the adjustable warm light, font sizes, line spacing, and more.\n", "\n", "It comes with your choice of the Basic Pen or the Premium Pen, which you use to write on the screen like you would on paper. They also attach magnetically to your Kindle and never need to be charged. The Premium Pen includes a dedicated eraser and a customizable shortcut button.\n", "\n", "The Kindle Scribe has the most storage options of all Kindle devices: choose from 8 GB, 16 GB, or 32 GB to suit your level of reading and writing.\n", "\"\"\"\n", "question = \"what are the key features of new Kindle?\"" ] }, { "cell_type": "code", "execution_count": null, "id": "2a70c89f-42d5-4e2c-b3c1-4cfaf4e3f655", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompts = [\n", " \"\"\"Answer based on context:\\n\\n{context}\\n\\n{question}\"\"\",\n", " \"\"\"{context}\\n\\nAnswer this question based on the article: {question}\"\"\",\n", " \"\"\"{context}\\n\\n{question}\"\"\",\n", " \"\"\"{context}\\nAnswer this question: {question}\"\"\",\n", " \"\"\"Read this article and answer this question {context}\\n{question}\"\"\",\n", " \"\"\"{context}\\n\\nBased on the above article, answer a question. {question}\"\"\",\n", " \"\"\"Write an article that answers the following question: {question} {context}\"\"\",\n", "]\n", "\n", "\n", "parameters = {\n", " \"max_length\": 50,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "for each_prompt in prompts:\n", " input_text = each_prompt.replace(\"{context}\", context)\n", " input_text = input_text.replace(\"{question}\", question)\n", " print(f\"{bold} For prompt{unbold}: '{each_prompt}'{newline}\")\n", " payload = {\"text_inputs\": input_text, **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}\")" ] }, { "cell_type": "markdown", "id": "13ab5c92-5210-4ffe-842a-78e8ea0a6521", "metadata": {}, "source": [ "### 6.4. Sentence / Sentiment Classification" ] }, { "cell_type": "markdown", "id": "2f140ae5-54c0-4738-b2c2-d715f8ae8050", "metadata": {}, "source": [ "Define the sentence you want to classify and the corresponded options." 
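, "\n", "\n", "Since the model returns free-form text, you may also want to map the generated answer back onto one of the expected labels. A minimal post-processing sketch (not used by the cells below; the function name `extract_label` is illustrative only):\n", "\n", "```python\n", "def extract_label(generated_text: str, labels=(\"positive\", \"negative\")) -> str:\n", "    \"\"\"Return the first expected label found in the model's answer, else 'unknown'.\"\"\"\n", "    answer = generated_text.lower()\n", "    for label in labels:\n", "        if label in answer:\n", "            return label\n", "    return \"unknown\"\n", "```"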
] }, { "cell_type": "code", "execution_count": null, "id": "dd29e5fb-d7f9-43bd-ad03-4887c1381169", "metadata": {}, "outputs": [], "source": [ "sentence = \"This moive is so great and once again dazzles and delights us\"\n", "options_ = \"\"\"OPTIONS:\\n-positive \\n-negative \"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "edb65e09-8c2f-4a8c-87bd-64ab49a067b0", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompts = [\n", " \"\"\"Review:\\n{sentence}\\nIs this movie review sentence negative or positive?\\n{options_}\"\"\",\n", " \"\"\"Short movie review: {sentence}\\nDid the critic think positively or negatively of the movie?\\n{options_}\"\"\",\n", " \"\"\"Sentence from a movie review: {sentence}\\nWas the movie seen positively or negatively based on the preceding review? \\n\\n{options_}\"\"\",\n", " \"\"\"\\\"{sentence}\\\"\\nHow would the sentiment of this sentence be perceived?\\n\\n{options_}\"\"\",\n", " \"\"\"Is the sentiment of the following sentence positive or negative?\\n{sentence}\\n{options_}\"\"\",\n", " \"\"\"What is the sentiment of the following movie review sentence?\\n{sentence}\\n{options_}\"\"\",\n", "]\n", "\n", "parameters = {\n", " \"max_length\": 50,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "for each_prompt in prompts:\n", " input_text = each_prompt.replace(\"{sentence}\", sentence)\n", " input_text = input_text.replace(\"{options_}\", options_)\n", " print(f\"{bold} For prompt{unbold}: '{input_text}'{newline}\")\n", " payload = {\"text_inputs\": input_text, **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}\")" ] }, { "cell_type": "markdown", "id": "3046a9ef-d64e-42ec-a1ca-107a15f7164f", "metadata": {}, "source": [ "### 6.5. Translation" ] }, { "cell_type": "markdown", "id": "6f151f47-a656-42aa-9b6b-8cc6e4ced876", "metadata": {}, "source": [ "Define the sentence and the language you want to translate the sentence to." 
] }, { "cell_type": "code", "execution_count": null, "id": "e5f8885f-76bd-4b42-92ac-cdeb905c97a6", "metadata": {}, "outputs": [], "source": [ "sent1 = \"My name is Arthur\"\n", "lang2 = \"\"\"German\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "b7259e6a-45c1-49af-a7c8-f176f3552ee8", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompts = [\n", " \"\"\"{sent1}\\n\\nTranslate to {lang2}\"\"\",\n", " \"\"\"{sent1}\\n\\nCould you please translate this to {lang2}?\"\"\",\n", " \"\"\"Translate to {lang2}:\\n\\n{sent1}\"\"\",\n", " \"\"\"Translate the following sentence to {lang2}:\\n{sent1}\"\"\",\n", " \"\"\"How is \\\"{sent1}\\\" said in {lang2}?\"\"\",\n", " \"\"\"Translate \\\"{sent1}\\\" to {lang2}?\"\"\",\n", "]\n", "\n", "parameters = {\n", " \"max_length\": 50,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "for each_prompt in prompts:\n", " input_text = each_prompt.replace(\"{sent1}\", sent1)\n", " input_text = input_text.replace(\"{lang2}\", lang2)\n", " print(f\"{bold} For prompt{unbold}: '{input_text}'{newline}\")\n", " payload = {\"text_inputs\": input_text, **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} The translated result is{unbold}: '{generated_texts}'{newline}\")" ] }, { "cell_type": "markdown", "id": "ca7602e2-27f0-4fe7-86dd-806081810265", "metadata": {}, "source": [ "### 6.6. Pronoun resolution" ] }, { "cell_type": "markdown", "id": "74c0b210-1c03-4af9-a2aa-a6a7225b738d", "metadata": {}, "source": [ "Define the sentence, pronoun, and options you want to reason." 
] }, { "cell_type": "code", "execution_count": null, "id": "d099455d-7275-46cb-a73b-75ffaf918796", "metadata": {}, "outputs": [], "source": [ "sentence = \"George talked to Mike because he had experiences in many aspects.\"\n", "pronoun = \"he\"\n", "options_ = \"\"\"\\n(A)George \\n(B)Mike \"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "974aca77-2154-4162-af03-49a4f62cda9b", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompts = [\n", " \"\"\"sentence}\\n\\nWho is {pronoun} referring to?\\n{options_}\"\"\",\n", " \"\"\"{sentence}\\n\\nWho is \\\"{pronoun}\\\" in this prior sentence?\\n{options_}\"\"\",\n", " \"\"\"{sentence}\\n\\nWho is {pronoun} referring to in this sentence?\\n{options_}\"\"\",\n", " \"\"\"{sentence}\\nTell me who {pronoun} is.\\n{options_}\"\"\",\n", " \"\"\"{sentence}\\nBased on this sentence, who is {pronoun}?\\n\\n{options_}\"\"\",\n", " \"\"\"Who is {pronoun} in the following sentence?\\n\\n{sentence}\\n\\n{options_}\"\"\",\n", " \"\"\"Which entity is {pronoun} this sentence?\\n\\n{sentence}\\n\\n{options_}\"\"\",\n", "]\n", "\n", "parameters = {\n", " \"max_length\": 50,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "for each_prompt in prompts:\n", " input_text = each_prompt.replace(\"{sentence}\", sentence)\n", " input_text = input_text.replace(\"{pronoun}\", pronoun)\n", " input_text = input_text.replace(\"{options_}\", options_)\n", " print(f\"{bold} For prompt{unbold}: '{input_text}'{newline}\")\n", " payload = {\"text_inputs\": input_text, **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}\")" ] }, { "cell_type": "markdown", "id": "6b84befc-8e8f-4bd9-9f70-ae11baf329a3", "metadata": {}, "source": [ "## 6.7. 
Imaginary article generation based on a title" ] }, { "cell_type": "code", "execution_count": null, "id": "044d4b31-132e-400f-bd31-a7d3e3a1673a", "metadata": {}, "outputs": [], "source": [ "title = \"University has new facility coming up\"" ] }, { "cell_type": "code", "execution_count": null, "id": "9d844a27-3922-4c87-b8cc-06e6c857ebb5", "metadata": {}, "outputs": [], "source": [ "prompts = [\n", " \"\"\"Title: \\\"{title}\\\"\\\\nGiven the above title of an imaginary article, imagine the article.\\\\n\"\"\"\n", "]\n", "\n", "\n", "parameters = {\n", " \"max_length\": 5000,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "for each_prompt in prompts:\n", " input_text = each_prompt.replace(\"{title}\", title)\n", " print(f\"{bold} For prompt{unbold}: '{input_text}'{newline}\")\n", " payload = {\"text_inputs\": input_text, **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}\")" ] }, { "cell_type": "markdown", "id": "2586eb9b-04bc-4316-b6a6-0d1552f0f5d6", "metadata": {}, "source": [ "## 6.8 Summarize a title based on the article" ] }, { "cell_type": "code", "execution_count": null, "id": "69ae9617-c92b-4a4a-8b17-761addf8ae5b", "metadata": {}, "outputs": [], "source": [ "article = \"\"\"The newest and most innovative Kindle yet lets you take notes on millions of books and documents, write lists and journals, and more. \n", "\n", "For readers who have always wished they could write in their eBooks, Amazon’s new Kindle lets them do just that. The Kindle Scribe is the first Kindle for reading and writing and allows users to supplement their books and documents with notes, lists, and more.\n", "\n", "Here’s everything you need to know about the Kindle Scribe, including frequently asked questions.\n", "\n", "The Kindle Scribe makes it easy to read and write like you would on paper \n", "\n", "The Kindle Scribe features a 10.2-inch, glare-free screen (the largest of all Kindle devices), crisp 300 ppi resolution, and 35 LED front lights that automatically adjust to your environment. Further personalize your experience with the adjustable warm light, font sizes, line spacing, and more.\n", "\n", "It comes with your choice of the Basic Pen or the Premium Pen, which you use to write on the screen like you would on paper. They also attach magnetically to your Kindle and never need to be charged. 
The Premium Pen includes a dedicated eraser and a customizable shortcut button.\n", "\n", "The Kindle Scribe has the most storage options of all Kindle devices: choose from 8 GB, 16 GB, or 32 GB to suit your level of reading and writing.\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "eeccb85c-b8b3-4b14-a8bf-23e5f59331b1", "metadata": {}, "outputs": [], "source": [ "prompts = [\"\"\"'\\'{article} \\n\\n \\\\n\\\\nGive me a good title for the article above.\"\"\"]\n", "\n", "parameters = {\n", " \"max_length\": 2000,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 50,\n", " \"top_p\": 0.95,\n", " \"do_sample\": True,\n", "}\n", "\n", "\n", "for each_prompt in prompts:\n", " input_text = each_prompt.replace(\"{article}\", article)\n", " print(f\"{bold} For prompt{unbold}: '{input_text}'{newline}\")\n", " payload = {\"text_inputs\": input_text, **parameters}\n", " query_response = query_endpoint_with_json_payload(\n", " json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", " )\n", " generated_texts = parse_response_multiple_texts(query_response)\n", " print(f\"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}\")" ] }, { "cell_type": "markdown", "id": "aa5de21f", "metadata": {}, "source": [ "### 7. Clean up the endpoint" ] }, { "cell_type": "code", "execution_count": null, "id": "69b588d1", "metadata": {}, "outputs": [], "source": [ "# Delete the SageMaker endpoint\n", "model_predictor.delete_model()\n", "model_predictor.delete_endpoint()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fef48bf8", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-flan-t5.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 21, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 
8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 28, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 29, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 512, 
"name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "conda_pytorch_p39", "language": "python", "name": "conda_pytorch_p39" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 5 }