{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# SageMaker JumpStart Foundation Models - HuggingFace Text2Text Generation Batch Transform and Real-Time Batch Inference\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.\n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "Welcome to Amazon [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)! You can use Sagemaker JumpStart to solve many Machine Learning tasks through one-click in SageMaker Studio, or through [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html#use-prebuilt-models-with-sagemaker-jumpstart).\n", "\n", "\n", "In this demo notebook, we demonstrate how to use the SageMaker Python SDK for doing the [Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html) and Real-Time Batch Inference for various NLP tasks. Batch transform allows you to perform inference on large datasets without the need for real-time interaction with the model. In batch transform, you start by creating a batch transform job, which takes as input a dataset and a pre-trained model, and outputs predictions for each data point in the dataset. Batch transform can be useful in situations where you have a large dataset that needs to be processed in a time-efficient manner. \n", "\n", "\n", "Real-time batch inference is a technique that combines the benefits of real-time and batch inference. In this approach, data is processed in batches, but the batches are small enough to enable real-time processing. Real-time batch inference is useful in situations where you need to process a continuous stream of data in near real-time, but it is not feasible to process each data point individually due to time or resource constraints. Instead, you process the data in small batches, which enables you to take advantage of parallel processing while still maintaining low latency. By using real-time batch inference, you can achieve a balance between low latency and high throughput, enabling you to process large volumes of data in a timely and efficient manner.\n", "\n", "Here, we show how to use the state-of-the-art pre-trained **text2text FLAN T5, FLAN UL2, T0pp models** from [Hugging Face](https://huggingface.co/docs/transformers/model_doc/flan-t5) for batch transform and real-Time batch inference on the HuggingFace [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) dataset for Text summarization task.\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: This notebook was tested on ml.t3.medium instance in Amazon SageMaker Studio with Python 3 (Data Science) kernel and in Amazon SageMaker Notebook instance with conda_python3 kernel with Python 3.10.10." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. [Set Up](#1.-Set-Up)\n", "2. [Select a pre-trained model](#2.-Select-a-pre-trained-model)\n", "3. [Retrieve Artifacts for Model](#3.-Retrieve-Artifacts-for-Model)\n", "4. 
"4. [Specify Batch Transform Job HyperParameters](#4.-Specify-Batch-Transform-Job-HyperParameters)\n", "5. [Prepare Data for Batch Transform](#5.-Prepare-Data-for-Batch-Transform)\n", "6. [Run Batch Transform Job](#6.-Run-Batch-Transform-Job)\n", "7. [Computing Rouge Score](#7.-Computing-Rouge-Score)\n", "8. [Real-Time Batch Inference](#8.-Real-Time-Batch-Inference)\n", "9. [Conclusion](#9.-Conclusion)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1. Set Up" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "Before executing the notebook, a few setup steps are required. This notebook depends on additional packages. If you are running the following cell for the first time, it is recommended to restart your kernel after installing them.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install datasets==2.12.0 --quiet\n", "!pip install evaluate==0.4.0 --quiet\n", "!pip install ipywidgets==8.0.6 --quiet\n", "!pip install rouge_score==0.1.2 --quiet\n", "!pip install sagemaker==2.165.0 --quiet" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "from platform import python_version\n", "\n", "tested_version = \"3.10.\"\n", "\n", "version = python_version()\n", "print(f\"You are using Python {version}\")\n", "\n", "if not version.startswith(tested_version):\n", "    print(f\"This notebook was tested with Python {tested_version}x\")\n", "    print(\"Some parts might behave unexpectedly with a different Python version\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Permissions and environment variables\n", "\n", "---\n", "To host on Amazon SageMaker, we need to set up and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook as the AWS account role with SageMaker access.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import json\n", "\n", "import boto3\n", "import sagemaker\n", "from sagemaker.session import Session\n", "\n", "sagemaker_session = Session()\n", "aws_role = sagemaker_session.get_caller_identity_arn()\n", "aws_region = boto3.Session().region_name\n", "sess = sagemaker.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2. Select a pre-trained model\n", "***\n", "You can continue with the default model, or choose a different model from the dropdown generated upon running the next cell. A complete list of SageMaker pre-trained models can also be accessed at [SageMaker pre-trained Models](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html#).\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "model_id = \"huggingface-text2text-flan-t5-large\"\n", "model_version = \"*\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n",
"[Optional] Select a different SageMaker pre-trained model. Here, we download the `model_manifest` file from the Built-In Algorithms S3 bucket, filter for the Text2Text Generation models, and select a model for inference.\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "from ipywidgets import widgets\n", "from sagemaker.jumpstart.notebook_utils import list_jumpstart_models\n", "\n", "# Retrieve all Text2Text Generation models made available by SageMaker Built-In Algorithms.\n", "filter_value = \"task == text2text\"\n", "text2text_generation_models = list_jumpstart_models(filter=filter_value)\n", "\n", "# Display the model IDs in a dropdown to select a model for inference.\n", "model_dropdown = widgets.Dropdown(\n", "    options=text2text_generation_models,\n", "    value=model_id,\n", "    description=\"Select a model\",\n", "    style={\"description_width\": \"initial\"},\n", "    layout={\"width\": \"max-content\"},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Choose a model for Inference" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "display(model_dropdown)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# model_version=\"*\" fetches the latest version of the model\n", "model_id, model_version = model_dropdown.value, \"*\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3. Retrieve Artifacts for Model\n", "\n", "***\n", "\n", "Using SageMaker, we can perform inference on the pre-trained model, even without fine-tuning it first on a new dataset. We start by retrieving the `deploy_image_uri` and `model_uri` for the pre-trained model.\n", "\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker import image_uris, model_uris\n", "from sagemaker.jumpstart.model import JumpStartModel\n", "from sagemaker.predictor import Predictor\n", "\n", "inference_instance_type = \"ml.p3.8xlarge\"\n", "\n", "# Note that larger instances, e.g., \"ml.g5.12xlarge\", might be required for larger models,\n", "# such as huggingface-text2text-flan-t5-xxl or huggingface-text2text-flan-ul2-bf16.\n", "# However, at present ml.g5.* instances are not supported in batch transforms.\n", "# Thus, if using such an instance, please skip Sections 6 and 7 of this notebook.\n", "\n", "# Retrieve the inference Docker container URI. This is the base HuggingFace container image for the default model above.\n", "deploy_image_uri = image_uris.retrieve(\n", "    region=None,\n", "    framework=None,  # automatically inferred from model_id\n", "    image_scope=\"inference\",\n", "    model_id=model_id,\n", "    model_version=model_version,\n", "    instance_type=inference_instance_type,\n", ")\n", "\n", "# Retrieve the model URI.\n", "model_uri = model_uris.retrieve(\n", "    model_id=model_id, model_version=model_version, model_scope=\"inference\"\n", ")\n", "\n", "model = JumpStartModel(\n", "    model_id=model_id,\n", "    image_uri=deploy_image_uri,\n", "    model_data=model_uri,\n", "    role=aws_role,\n", "    predictor_cls=Predictor,\n", ")" ] },
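{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick optional sanity check, you can print the retrieved URIs to confirm which container image and model artifacts will be used. The cell below only prints variables defined above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Confirm which container image and model artifacts were retrieved\n", "print(f\"Image URI: {deploy_image_uri}\")\n", "print(f\"Model URI: {model_uri}\")" ] },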
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 4. Specify Batch Transform Job HyperParameters\n", "\n", "***\n", "Batch transform jobs support many advanced parameters while performing inference. They include:\n", "\n", "* **max_length:** The model generates text until the output length (which includes the input context length) reaches `max_length`. If specified, it must be a positive integer.\n", "* **num_return_sequences:** Number of output sequences returned. If specified, it must be a positive integer.\n", "* **num_beams:** Number of beams used in the beam search. If specified, it must be an integer greater than or equal to `num_return_sequences`.\n", "* **no_repeat_ngram_size:** The model ensures that no word sequence of length `no_repeat_ngram_size` is repeated in the output sequence. If specified, it must be a positive integer greater than 1.\n", "* **temperature:** Controls the randomness in the output. A higher temperature yields an output sequence with more low-probability words, while a lower temperature yields an output sequence with more high-probability words. As `temperature` approaches 0, decoding becomes greedy. If specified, it must be a positive float.\n", "* **early_stopping:** If True, text generation is finished when all beam hypotheses reach the end-of-sentence token. If specified, it must be a boolean.\n", "* **do_sample:** If True, the next word is sampled according to its likelihood. If specified, it must be a boolean.\n", "* **top_k:** In each step of text generation, sample from only the `top_k` most likely words. If specified, it must be a positive integer.\n", "* **top_p:** In each step of text generation, sample from the smallest possible set of words whose cumulative probability is at least `top_p`. If specified, it must be a float between 0 and 1.\n", "* **seed:** Fixes the random state for reproducibility. If specified, it must be an integer.\n", "* **batch_size:** Batching can speed up processing, so it may be useful to tune this parameter; in some cases, however, batching can actually be slower, as explained [here](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching). If you get a CUDA out-of-memory error, decrease the batch size accordingly. If specified, it must be a positive integer.\n", "\n", "You may specify any subset of the hyperparameters above for a batch transform job. These hyperparameters are passed to the job as environment variables. If any of them are passed this way, the per-example hyperparameters in the JSON Lines payload are not used. If you instead set `hyper_params_dict` to None, the per-example hyperparameters from the JSON Lines payload are used, and inference is run on one example at a time. A sketch of the per-example format follows the next cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Specify the batch job hyperparameters here. To use different hyperparameters\n", "# for each example instead, set hyper_params_dict to None.\n", "hyper_params = {\"max_length\": 30, \"top_k\": 50, \"top_p\": 0.95, \"do_sample\": True}\n", "hyper_params_dict = {\"HYPER_PARAMS\": str(hyper_params)}" ] },
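{ "cell_type": "markdown", "metadata": {}, "source": [ "For illustration, the cell below prints what per-example hyperparameters look like inside the JSON Lines records described in the next section. This is a minimal sketch: the field values are arbitrary examples, and for them to take effect `hyper_params_dict` must be set to None so that the job-level environment variables do not override them." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Illustrative sketch only: each record carries its own hyperparameters.\n", "# These are honored only when hyper_params_dict is None; the values are arbitrary.\n", "example_records = [\n", "    {\"id\": 1, \"text_inputs\": \"Translate to German: My name is Arthur\", \"max_length\": 50, \"temperature\": 0.7},\n", "    {\"id\": 2, \"text_inputs\": \"Tell me the steps to make a pizza\", \"max_length\": 100, \"num_return_sequences\": 2},\n", "]\n", "for record in example_records:\n", "    print(json.dumps(record))" ] },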
\n", "- The input format for the batch transform job is a jsonl file with entries as -> \\\n", " ` {\"id\":1,\"text_inputs\":\"Translate to German: My name is Arthur\", \"max_length\":50,.....}` \\\n", " ` {\"id\":2,\"text_inputs\":\"Tell me the steps to make a pizza\", \"max_length\":50,.....}`\n", "- While the output format is ->\\\n", " ` {\"id\":1, generated_texts':['Ich bin Arthur']}` \\\n", " ` {\"id\":2, 'generated_texts':['Preheat oven to 400 degrees F. Spread pizza sauce on a pizza pan. Bake for']}`\n", "\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# We will use the cnn_dailymail dataset from HuggingFace over here\n", "from datasets import load_dataset\n", "\n", "cnn_test = load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"test\")\n", "# Choosing a smaller dataset for demo purposes. You can use the complete dataset as well.\n", "cnn_test = cnn_test.select(list(range(20)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# We will use a default s3 bucket for providing the input and output paths for batch transform\n", "output_bucket = sess.default_bucket()\n", "output_prefix = \"jumpstart-example-text2text-batch-transform\"\n", "\n", "s3_input_data_path = f\"s3://{output_bucket}/{output_prefix}/batch_input/\"\n", "s3_output_data_path = f\"s3://{output_bucket}/{output_prefix}/batch_output/\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# You can specify a prompt here\n", "prompt = \"Briefly summarize this text: \"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import json\n", "import boto3\n", "import os\n", "\n", "# Provide the test data and the ground truth file name\n", "test_data_file_name = \"articles.jsonl\"\n", "test_reference_file_name = \"highlights.jsonl\"\n", "\n", "test_articles = []\n", "test_highlights = []\n", "\n", "# We will go over each data entry and create the data in the input required format as described above\n", "for i, test_entry in enumerate(cnn_test):\n", " article = test_entry[\"article\"]\n", " highlights = test_entry[\"highlights\"]\n", " # Create a payload like this if you want to have different hyperparameters for each test input\n", " # payload = {\"id\": id,\"text_inputs\": f\"{prompt}{article}\", \"max_length\": 100, \"temperature\": 0.95}\n", " # Note that if you specify hyperparameter for each payload individually,\n", " # you may want to ensure that hyper_params_dict is set to None instead\n", " payload = {\"id\": i, \"text_inputs\": f\"{prompt}{article}\"}\n", " test_articles.append(payload)\n", " test_highlights.append({\"id\": i, \"highlights\": highlights})\n", "\n", "with open(test_data_file_name, \"w\") as outfile:\n", " for entry in test_articles:\n", " outfile.write(f\"{json.dumps(entry)}\\n\")\n", "\n", "with open(test_reference_file_name, \"w\") as outfile:\n", " for entry in test_highlights:\n", " outfile.write(f\"{json.dumps(entry)}\\n\")\n", "\n", "# Uploading the data\n", "s3 = boto3.client(\"s3\")\n", "s3.upload_file(test_data_file_name, output_bucket, f\"{output_prefix}/batch_input/articles.jsonl\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 6. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 6. Run Batch Transform Job\n", "When you start a batch transform job, Amazon SageMaker provisions and manages the compute resources required to process the data: CPU or GPU instances, depending on the selected instance type, along with the associated storage and networking resources. Once the batch transform job has completed, Amazon SageMaker automatically cleans these resources up. The instances and storage used during the job are terminated and removed, freeing up resources and minimizing costs." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Create the batch transformer object. If you have a large dataset, you can\n", "# divide it into smaller chunks and use more instances for faster inference\n", "batch_transformer = model.transformer(\n", "    instance_count=1,\n", "    instance_type=inference_instance_type,\n", "    output_path=s3_output_data_path,\n", "    assemble_with=\"Line\",\n", "    accept=\"text/csv\",\n", "    max_payload=1,\n", ")\n", "batch_transformer.env = hyper_params_dict\n", "\n", "# Run predictions on the input data\n", "batch_transformer.transform(\n", "    s3_input_data_path, content_type=\"application/jsonlines\", split_type=\"Line\"\n", ")\n", "\n", "batch_transformer.wait()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 7. Computing Rouge Score\n", "\n", "[ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge), or Recall-Oriented Understudy for Gisting Evaluation, is a set of metrics and a software package used for evaluating automatic summarization and machine translation in natural language processing. The metrics compare an automatically produced summary or translation against one or more human-produced reference summaries or translations." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import ast\n", "import evaluate\n", "import pandas as pd\n", "\n", "# Download the predictions\n", "s3.download_file(\n", "    output_bucket, output_prefix + \"/batch_output/\" + \"articles.jsonl.out\", \"predict.jsonl\"\n", ")\n", "\n", "with open(\"predict.jsonl\", \"r\") as json_file:\n", "    json_list = list(json_file)\n", "\n", "# Create the prediction list for the dataframe, skipping empty lines.\n", "# The output lines are Python dict literals, so parse them with ast.literal_eval.\n", "predict_dict_list = []\n", "for predict in json_list:\n", "    if len(predict) > 1:\n", "        predict_dict = ast.literal_eval(predict)\n", "        predict_dict_req = {\n", "            \"id\": predict_dict[\"id\"],\n", "            \"prediction\": predict_dict[\"generated_texts\"][0],\n", "        }\n", "        predict_dict_list.append(predict_dict_req)\n", "\n", "# Create the predictions dataframe\n", "predict_df = pd.DataFrame(predict_dict_list)\n", "\n", "test_highlights_df = pd.DataFrame(test_highlights)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Merge the predictions dataframe with the reference summaries on id to compute the rouge score\n", "df_merge = test_highlights_df.merge(predict_df, on=\"id\", how=\"left\")\n", "\n", "rouge = evaluate.load(\"rouge\")\n", "results = rouge.compute(\n", "    predictions=list(df_merge[\"prediction\"]), references=list(df_merge[\"highlights\"])\n", ")\n", "print(results)\n", "\n", "# Delete the SageMaker model\n", "batch_transformer.delete_model()" ] },
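{ "cell_type": "markdown", "metadata": {}, "source": [ "Beyond the aggregate scores, per-example scores can show where the model struggles. The sketch below assumes the `use_aggregator=False` option of the `evaluate` ROUGE metric, which returns one score per prediction/reference pair, and reuses the `df_merge` dataframe from above." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Per-example ROUGE scores (sketch): use_aggregator=False returns a list of\n", "# scores, one per prediction/reference pair, instead of a single aggregate\n", "per_example_results = rouge.compute(\n", "    predictions=list(df_merge[\"prediction\"]),\n", "    references=list(df_merge[\"highlights\"]),\n", "    use_aggregator=False,\n", ")\n", "df_merge[\"rougeL\"] = per_example_results[\"rougeL\"]\n", "\n", "# Inspect the lowest-scoring examples\n", "print(df_merge.sort_values(\"rougeL\").head(3)[[\"highlights\", \"prediction\", \"rougeL\"]])" ] },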
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 8. Real-Time Batch Inference\n", "\n", "***\n", "We can also run real-time batch inference on an endpoint by providing the inputs as a list. Real-time batch inference is useful in situations where you need to process a continuous stream of data in near real-time, but it is not feasible to process each data point individually due to time or resource constraints. Instead, you process the data in small batches, which enables you to take advantage of parallel processing while still maintaining low latency.\n", "\n", "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 8.1 Deploying the Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.utils import name_from_base\n", "\n", "endpoint_name = name_from_base(f\"jumpstart-example-{model_id}\")\n", "# Deploy the model. Note that we need to pass the Predictor class when deploying through the Model class\n", "# so that we can run inference through the SageMaker API.\n", "model_predictor = model.deploy(\n", "    initial_instance_count=1,\n", "    instance_type=inference_instance_type,\n", "    endpoint_name=endpoint_name,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 8.2 Running Inference on the Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Provide all the text inputs to the model as a list\n", "text_inputs = [entry[\"text_inputs\"] for entry in test_articles[:10]]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Information about the different parameters is provided in Section 4 above\n", "payload = {\n", "    \"text_inputs\": text_inputs,\n", "    \"max_length\": 30,\n", "    \"num_return_sequences\": 1,\n", "    \"top_k\": 50,\n", "    \"top_p\": 0.95,\n", "    \"do_sample\": True,\n", "}\n", "\n", "\n", "def query_endpoint_with_json_payload(encoded_json, endpoint_name):\n", "    client = boto3.client(\"runtime.sagemaker\")\n", "    response = client.invoke_endpoint(\n", "        EndpointName=endpoint_name, ContentType=\"application/json\", Body=encoded_json\n", "    )\n", "    return response\n", "\n", "\n", "def parse_response_multiple_texts(query_response):\n", "    model_predictions = json.loads(query_response[\"Body\"].read())\n", "    return model_predictions\n", "\n", "\n", "query_response = query_endpoint_with_json_payload(\n", "    json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", ")\n", "generated_text_list = parse_response_multiple_texts(query_response)\n", "print(*generated_text_list, sep=\"\\n\")" ] },
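{ "cell_type": "markdown", "metadata": {}, "source": [ "If the input list is too large for a single request, you can process it in fixed-size mini-batches, which is the essence of real-time batch inference. The sketch below reuses the helper functions defined above; the chunk size of 5 and the `max_length` value are arbitrary choices for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Sketch: query the endpoint in fixed-size chunks and collect the parsed\n", "# responses. The chunk size is an arbitrary illustrative choice.\n", "def query_in_batches(inputs, batch_size=5):\n", "    responses = []\n", "    for start in range(0, len(inputs), batch_size):\n", "        chunk_payload = {\"text_inputs\": inputs[start : start + batch_size], \"max_length\": 30}\n", "        chunk_response = query_endpoint_with_json_payload(\n", "            json.dumps(chunk_payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", "        )\n", "        responses.append(parse_response_multiple_texts(chunk_response))\n", "    return responses\n", "\n", "\n", "batched_responses = query_in_batches(text_inputs, batch_size=5)\n", "print(*batched_responses, sep=\"\\n\")" ] },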
{ "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Delete the SageMaker endpoint\n", "model_predictor.delete_model()\n", "model_predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 9. Conclusion\n", "\n", "***\n", "In this notebook, we ran batch transform and real-time batch inference. Additionally, we used the Rouge score to compare the reference summaries in the test data with the model-generated summaries. We found that batch transform is advantageous for obtaining inferences from large datasets without requiring a persistent endpoint, and we linked input records with inferences to aid in result interpretation. Real-time batch inference, on the other hand, is beneficial for achieving high throughput.\n", "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|text2text-generation-Batch-Transform.ipynb)\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.10" } }, "nbformat": 4, "nbformat_minor": 4 }