{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Retrieval-Augmented Generation: Question Answering based on Custom Dataset with Open-sourced [LangChain](https://python.langchain.com/en/latest/index.html) Library\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "Many use cases such as building a chatbot require text (text2text) generation models like **[BloomZ 7B1](https://huggingface.co/bigscience/bloomz-7b1)**, **[Flan T5 XXL](https://huggingface.co/google/flan-t5-xxl)**, and **[Flan T5 UL2](https://huggingface.co/google/flan-ul2)** to respond to user questions with insightful answers. The **BloomZ 7B1**, **Flan T5 XXL**, and **Flan T5 UL2** models have picked up a lot of general knowledge in training, but we often need to ingest and use a large library of more specific information.\n", "\n", "In this notebook we will demonstrate how to use **BloomZ 7B1**, **Flan T5 XXL**, and **Flan T5 UL2** to answer questions using a library of documents as a reference, by using document embeddings and retrieval. The embeddings are generated from **GPT-J-6B** embedding model. \n", "\n", "**This notebook serves a template such that you can easily replace the example dataset by your own to build a custom question and asnwering application.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1. Deploy large language model (LLM) and embedding model in SageMaker JumpStart\n", "\n", "To better illustrate the idea, let's first deploy all the models that are required to perform the demo. You can choose either deploying all three Flan T5 XL, BloomZ 7B1, and Flan UL2 models as the large language model (LLM) to compare their model performances, or select **subset** of the models based on your preference. To do that, you need modify the `_MODEL_CONFIG_` python dictionary defined as below." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "pycharm": { "name": "#%%\n" }, "tags": [] }, "outputs": [], "source": [ "# !pip install --upgrade sagemaker --quiet\n", "# !pip install ipywidgets==7.0.0 --quiet\n", "# !pip install langchain==0.0.148 --quiet\n", "# !pip install faiss-cpu --quiet" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "tags": [] }, "outputs": [], "source": [ "import time\n", "import sagemaker, boto3, json\n", "from sagemaker.session import Session\n", "from sagemaker.model import Model\n", "from sagemaker import image_uris, model_uris, script_uris, hyperparameters\n", "from sagemaker.predictor import Predictor\n", "from sagemaker.utils import name_from_base\n", "from typing import Any, Dict, List, Optional\n", "from langchain.embeddings import SagemakerEndpointEmbeddings\n", "from langchain.llms.sagemaker_endpoint import ContentHandlerBase\n", "\n", "sagemaker_session = Session()\n", "aws_role = sagemaker_session.get_caller_identity_arn()\n", "aws_region = boto3.Session().region_name\n", "sess = sagemaker.Session()\n", "model_version = \"*\"" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [] }, "outputs": [], "source": [ "def query_endpoint_with_json_payload(encoded_json, endpoint_name, content_type=\"application/json\"):\n", " client = boto3.client(\"runtime.sagemaker\")\n", " response = client.invoke_endpoint(\n", " EndpointName=endpoint_name, ContentType=content_type, Body=encoded_json\n", " )\n", " return response\n", "\n", "\n", "def parse_response_model_flan_t5(query_response):\n", " model_predictions = json.loads(query_response[\"Body\"].read())\n", " generated_text = model_predictions[\"generated_texts\"]\n", " return generated_text\n", "\n", "\n", "def parse_response_multiple_texts_bloomz(query_response):\n", " generated_text = []\n", " model_predictions = json.loads(query_response[\"Body\"].read())\n", " for x in model_predictions[0]:\n", " generated_text.append(x[\"generated_text\"])\n", " return generated_text" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Deploy SageMaker endpoint(s) for large language models and GPT-J 6B embedding model. Please uncomment the entries as below if you want to deploy multiple LLM models to compare their performance." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "tags": [] }, "outputs": [], "source": [ "_MODEL_CONFIG_ = {\n", " \"huggingface-text2text-flan-t5-xxl\": {\n", " \"instance type\": \"ml.g5.12xlarge\",\n", " \"env\": {\"TS_DEFAULT_WORKERS_PER_MODEL\": \"1\"},\n", " \"parse_function\": parse_response_model_flan_t5,\n", " \"prompt\": \"\"\"Answer based on context:\\n\\n{context}\\n\\n{question}\"\"\",\n", " },\n", " \"huggingface-textembedding-gpt-j-6b\": {\n", " \"instance type\": \"ml.g5.24xlarge\",\n", " \"env\": {\"TS_DEFAULT_WORKERS_PER_MODEL\": \"1\"},\n", " },\n", " # \"huggingface-textgeneration1-bloomz-7b1-fp16\": {\n", " # \"instance type\": \"ml.g5.12xlarge\",\n", " # \"env\": {},\n", " # \"parse_function\": parse_response_multiple_texts_bloomz,\n", " # \"prompt\": \"\"\"question: \\\"{question}\"\\\\n\\nContext: \\\"{context}\"\\\\n\\nAnswer:\"\"\",\n", " # },\n", " # \"huggingface-text2text-flan-ul2-bf16\": {\n", " # \"instance type\": \"ml.g5.24xlarge\",\n", " # \"env\": {\"TS_DEFAULT_WORKERS_PER_MODEL\": \"1\"},\n", " # \"parse_function\": parse_response_model_flan_t5,\n", " # \"prompt\": \"\"\"Answer based on context:\\n\\n{context}\\n\\n{question}\"\"\",\n", " # },\n", "}" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "{'huggingface-text2text-flan-t5-xxl': {'instance type': 'ml.g5.12xlarge',\n", " 'env': {'TS_DEFAULT_WORKERS_PER_MODEL': '1'},\n", " 'parse_function': ,\n", " 'prompt': 'Answer based on context:\\n\\n{context}\\n\\n{question}',\n", " 'endpoint_name': 'jumpstart-example-raglc-huggingface-tex-2023-05-29-13-26-05-347'},\n", " 'huggingface-textembedding-gpt-j-6b': {'instance type': 'ml.g5.24xlarge',\n", " 'env': {'TS_DEFAULT_WORKERS_PER_MODEL': '1'},\n", " 'endpoint_name': 'jumpstart-example-raglc-huggingface-tex-2023-05-29-13-33-09-311'}}" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "_MODEL_CONFIG_" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-------------!\u001b[1mModel huggingface-text2text-flan-t5-xxl has been deployed successfully.\u001b[0m\n", "\n", "---------!\u001b[1mModel huggingface-textembedding-gpt-j-6b has been deployed successfully.\u001b[0m\n", "\n" ] } ], "source": [ "newline, bold, unbold = \"\\n\", \"\\033[1m\", \"\\033[0m\"\n", "\n", "for model_id in _MODEL_CONFIG_:\n", " endpoint_name = name_from_base(f\"jumpstart-example-raglc-{model_id}\")\n", " inference_instance_type = _MODEL_CONFIG_[model_id][\"instance type\"]\n", "\n", " # Retrieve the inference container uri. 
, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-------------!\u001b[1mModel huggingface-text2text-flan-t5-xxl has been deployed successfully.\u001b[0m\n", "\n", "---------!\u001b[1mModel huggingface-textembedding-gpt-j-6b has been deployed successfully.\u001b[0m\n", "\n" ] } ], "source": [ "newline, bold, unbold = \"\\n\", \"\\033[1m\", \"\\033[0m\"\n", "\n", "for model_id in _MODEL_CONFIG_:\n", "    endpoint_name = name_from_base(f\"jumpstart-example-raglc-{model_id}\")\n", "    inference_instance_type = _MODEL_CONFIG_[model_id][\"instance type\"]\n", "\n", "    # Retrieve the inference container uri. This is the base HuggingFace container image for the default model above.\n", "    deploy_image_uri = image_uris.retrieve(\n", "        region=None,\n", "        framework=None,  # automatically inferred from model_id\n", "        image_scope=\"inference\",\n", "        model_id=model_id,\n", "        model_version=model_version,\n", "        instance_type=inference_instance_type,\n", "    )\n", "    # Retrieve the model uri.\n", "    model_uri = model_uris.retrieve(\n", "        model_id=model_id, model_version=model_version, model_scope=\"inference\"\n", "    )\n", "    model_inference = Model(\n", "        image_uri=deploy_image_uri,\n", "        model_data=model_uri,\n", "        role=aws_role,\n", "        predictor_cls=Predictor,\n", "        name=endpoint_name,\n", "        env=_MODEL_CONFIG_[model_id][\"env\"],\n", "    )\n", "    model_predictor_inference = model_inference.deploy(\n", "        initial_instance_count=1,\n", "        instance_type=inference_instance_type,\n", "        predictor_cls=Predictor,\n", "        endpoint_name=endpoint_name,\n", "    )\n", "    print(f\"{bold}Model {model_id} has been deployed successfully.{unbold}{newline}\")\n", "    _MODEL_CONFIG_[model_id][\"endpoint_name\"] = endpoint_name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2. Ask a question to the LLM without providing the context\n", "\n", "To better illustrate why we need a retrieval-augmented generation (RAG) based approach to solve the question answering problem, let's directly ask the model a question and see how it responds." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "question = \"Which instances can I use with Managed Spot Training in SageMaker?\"" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "For model: huggingface-text2text-flan-t5-xxl, the generated output is: Mac OS X 10.11 or later, iPhone 4, iPhone 4, iPhone 4S, and iPhone 3GS, iPad\n", "\n" ] } ], "source": [ "payload = {\n", "    \"text_inputs\": question,\n", "    \"max_length\": 100,\n", "    \"num_return_sequences\": 1,\n", "    \"top_k\": 50,\n", "    \"top_p\": 0.95,\n", "    \"do_sample\": True,\n", "}\n", "\n", "list_of_LLMs = list(_MODEL_CONFIG_.keys())\n", "list_of_LLMs.remove(\"huggingface-textembedding-gpt-j-6b\")  # remove the embedding model\n", "\n", "\n", "for model_id in list_of_LLMs:\n", "    endpoint_name = _MODEL_CONFIG_[model_id][\"endpoint_name\"]\n", "    query_response = query_endpoint_with_json_payload(\n", "        json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", "    )\n", "    generated_texts = _MODEL_CONFIG_[model_id][\"parse_function\"](query_response)\n", "    print(f\"For model: {model_id}, the generated output is: {generated_texts[0]}\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that the generated answer is wrong or doesn't make much sense. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3. Improve the answer to the same question using **prompt engineering** with insightful context\n", "\n", "\n", "To answer the question well, we provide extra contextual information, combine it with a prompt, and send it to the model together with the question. Below is an example." ] }
, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "context = \"\"\"Managed Spot Training can be used with all instances supported in Amazon SageMaker. Managed Spot Training is supported in all AWS Regions where Amazon SageMaker is currently available.\"\"\"" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1mFor model: huggingface-text2text-flan-t5-xxl, the generated output is: all instances supported in Amazon SageMaker\u001b[0m\n", "\n" ] } ], "source": [ "parameters = {\n", "    \"max_length\": 200,\n", "    \"num_return_sequences\": 1,\n", "    \"top_k\": 250,\n", "    \"top_p\": 0.95,\n", "    \"do_sample\": False,\n", "    \"temperature\": 1,\n", "}\n", "\n", "for model_id in list_of_LLMs:\n", "    endpoint_name = _MODEL_CONFIG_[model_id][\"endpoint_name\"]\n", "\n", "    prompt = _MODEL_CONFIG_[model_id][\"prompt\"]\n", "\n", "    text_input = prompt.replace(\"{context}\", context)\n", "    text_input = text_input.replace(\"{question}\", question)\n", "    payload = {\"text_inputs\": text_input, **parameters}\n", "\n", "    query_response = query_endpoint_with_json_payload(\n", "        json.dumps(payload).encode(\"utf-8\"), endpoint_name=endpoint_name\n", "    )\n", "    generated_texts = _MODEL_CONFIG_[model_id][\"parse_function\"](query_response)\n", "    print(\n", "        f\"{bold}For model: {model_id}, the generated output is: {generated_texts[0]}{unbold}{newline}\"\n", "    )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output from step 3 shows that the chance of getting the correct response is strongly correlated with how insightful the context sent into the LLM is. \n", "\n", "**Now the question becomes: where can we find insightful context based on the user query? The answer is to use a pre-stored knowledge base with retrieval-augmented generation, as shown in step 4 below.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 4. Use a RAG based approach with [LangChain](https://python.langchain.com/en/latest/index.html) and SageMaker endpoints to build a simplified question answering application\n", "\n", "\n", "We plan to use document embeddings to fetch the most relevant documents in our document knowledge library and combine them with the prompt that we provide to the LLM.\n", "\n", "To achieve that, we will do the following.\n", "\n", "1. **Generate embeddings for each document in the knowledge library with the SageMaker GPT-J-6B embedding model.**\n", "2. **Identify the top K most relevant documents based on the user query.**\n", "    - 2.1 **For a query of your interest, generate the embedding of the query using the same embedding model.**\n", "    - 2.2 **Search for the indexes of the top K most relevant documents in the embedding space using an in-memory FAISS search.**\n", "    - 2.3 **Use the indexes to retrieve the corresponding documents.**\n", "3. **Combine the retrieved documents with the prompt and question and send them into the SageMaker LLM.**\n", "\n", "\n", "\n", "Note: The retrieved document/text should be large enough to contain enough information to answer the question, but small enough to fit into the LLM prompt, whose maximum sequence length is 1024 tokens. \n", "\n", "---\n", "To build a simplified QA application with LangChain, we need to: \n", "1. Wrap our SageMaker endpoints for the embedding model and the LLM into `langchain.embeddings.SagemakerEndpointEmbeddings` and `langchain.llms.sagemaker_endpoint.SagemakerEndpoint`. This requires a small override of the `SagemakerEndpointEmbeddings` class to make it compatible with the SageMaker embedding model.\n", "2. Prepare the dataset to build the knowledge base. \n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, wrap the SageMaker endpoint for the embedding model into `langchain.embeddings.SagemakerEndpointEmbeddings`. This requires a small override of the `SagemakerEndpointEmbeddings` class to make it compatible with the SageMaker embedding model." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler\n", "\n", "\n", "class SagemakerEndpointEmbeddingsJumpStart(SagemakerEndpointEmbeddings):\n", "    def embed_documents(self, texts: List[str], chunk_size: int = 5) -> List[List[float]]:\n", "        \"\"\"Compute doc embeddings using a SageMaker Inference Endpoint.\n", "\n", "        Args:\n", "            texts: The list of texts to embed.\n", "            chunk_size: The chunk size defines how many input texts will\n", "                be grouped together as request. If None, will use the\n", "                chunk size specified by the class.\n", "\n", "        Returns:\n", "            List of embeddings, one for each text.\n", "        \"\"\"\n", "        results = []\n", "        _chunk_size = len(texts) if chunk_size > len(texts) else chunk_size\n", "\n", "        for i in range(0, len(texts), _chunk_size):\n", "            response = self._embedding_func(texts[i : i + _chunk_size])\n", "            results.extend(response)\n", "        return results\n", "\n", "\n", "class ContentHandler(EmbeddingsContentHandler):\n", "    content_type = \"application/json\"\n", "    accepts = \"application/json\"\n", "\n", "    def transform_input(self, prompt: str, model_kwargs={}) -> bytes:\n", "        input_str = json.dumps({\"text_inputs\": prompt, **model_kwargs})\n", "        return input_str.encode(\"utf-8\")\n", "\n", "    def transform_output(self, output: bytes) -> str:\n", "        response_json = json.loads(output.read().decode(\"utf-8\"))\n", "        embeddings = response_json[\"embedding\"]\n", "        return embeddings\n", "\n", "\n", "content_handler = ContentHandler()\n", "\n", "embeddings = SagemakerEndpointEmbeddingsJumpStart(\n", "    endpoint_name=_MODEL_CONFIG_[\"huggingface-textembedding-gpt-j-6b\"][\"endpoint_name\"],\n", "    region_name=aws_region,\n", "    content_handler=content_handler,\n", ")" ] }
" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "from langchain.llms.sagemaker_endpoint import LLMContentHandler, SagemakerEndpoint\n", "\n", "parameters = {\n", " \"max_length\": 200,\n", " \"num_return_sequences\": 1,\n", " \"top_k\": 250,\n", " \"top_p\": 0.95,\n", " \"do_sample\": False,\n", " \"temperature\": 1,\n", "}\n", "\n", "\n", "class ContentHandler(LLMContentHandler):\n", " content_type = \"application/json\"\n", " accepts = \"application/json\"\n", "\n", " def transform_input(self, prompt: str, model_kwargs={}) -> bytes:\n", " input_str = json.dumps({\"text_inputs\": prompt, **model_kwargs})\n", " return input_str.encode(\"utf-8\")\n", "\n", " def transform_output(self, output: bytes) -> str:\n", " response_json = json.loads(output.read().decode(\"utf-8\"))\n", " return response_json[\"generated_texts\"][0]\n", "\n", "\n", "content_handler = ContentHandler()\n", "\n", "sm_llm = SagemakerEndpoint(\n", " endpoint_name=_MODEL_CONFIG_[\"huggingface-text2text-flan-t5-xxl\"][\"endpoint_name\"],\n", " region_name=aws_region,\n", " model_kwargs=parameters,\n", " content_handler=content_handler,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's download the example data and prepare it for demonstration. We will use [Amazon SageMaker FAQs](https://aws.amazon.com/sagemaker/faqs/) as knowledge library. The data are formatted in a CSV file with two columns Question and Answer. We use the Answer column as the documents of knowledge library, from which relevant documents are retrieved based on a query. \n", "\n", "**For your purpose, you can replace the example dataset of your own to build a custom question and answering application.**" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "download: s3://jumpstart-cache-prod-us-east-2/training-datasets/Amazon_SageMaker_FAQs/Amazon_SageMaker_FAQs.csv to rag_data/Amazon_SageMaker_FAQs.csv\n" ] } ], "source": [ "original_data = \"s3://jumpstart-cache-prod-us-east-2/training-datasets/Amazon_SageMaker_FAQs/\"\n", "\n", "!mkdir -p rag_data\n", "!aws s3 cp --recursive $original_data rag_data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the case when you have data saved in multiple subsets. The following code will read all files that end with `.csv` and concatenate them together. Please ensure each `csv` file has the same format." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "tags": [] }, "outputs": [], "source": [ "import glob\n", "import os\n", "import pandas as pd\n", "\n", "all_files = glob.glob(os.path.join(\"rag_data/\", \"*.csv\"))\n", "\n", "df_knowledge = pd.concat(\n", " (pd.read_csv(f, header=None, names=[\"Question\", \"Answer\"]) for f in all_files),\n", " axis=0,\n", " ignore_index=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Drop the `Question` column as it is not used in this demonstration." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "tags": [] }, "outputs": [], "source": [ "df_knowledge.drop([\"Question\"], axis=1, inplace=True)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "tags": [] }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Answer
0Amazon SageMaker is a fully managed service to...
1For a list of the supported Amazon SageMaker A...
2Amazon SageMaker is designed for high availabi...
3Amazon SageMaker stores code in ML storage vol...
4Amazon SageMaker ensures that ML model artifac...
\n", "
" ], "text/plain": [ " Answer\n", "0 Amazon SageMaker is a fully managed service to...\n", "1 For a list of the supported Amazon SageMaker A...\n", "2 Amazon SageMaker is designed for high availabi...\n", "3 Amazon SageMaker stores code in ML storage vol...\n", "4 Amazon SageMaker ensures that ML model artifac..." ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_knowledge.head(5)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "tags": [] }, "outputs": [], "source": [ "df_knowledge.to_csv(\"rag_data/processed_data.csv\", header=False, index=False)" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain.chains import RetrievalQA\n", "from langchain.llms import OpenAI\n", "from langchain.document_loaders import TextLoader\n", "from langchain.indexes import VectorstoreIndexCreator\n", "from langchain.vectorstores import Chroma, AtlasDB, FAISS\n", "from langchain.text_splitter import CharacterTextSplitter\n", "from langchain import PromptTemplate\n", "from langchain.chains.question_answering import load_qa_chain\n", "from langchain.document_loaders.csv_loader import CSVLoader" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use langchain to read the `csv` data. There are multiple built-in functions in LangChain to read different format of files such as `txt`, `html`, and `pdf`. For details, see [LangChain document loaders](https://python.langchain.com/en/latest/modules/indexes/document_loaders.html)." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "tags": [] }, "outputs": [], "source": [ "loader = CSVLoader(file_path=\"rag_data/processed_data.csv\")" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "tags": [] }, "outputs": [], "source": [ "documents = loader.load()\n", "# text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0)\n", "# texts = text_splitter.split_documents(documents) ### if you use langchain.document_loaders.TextLoader to load text file. You can uncomment the code\n", "## to split the text." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Now, we can build an QA application. LangChain makes it extremly simple with following few lines of code.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Based on the question below, we can achieven the points in Step 4 with just a few lines of code as shown below." ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Which instances can I use with Managed Spot Training in SageMaker?'" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "question" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "index_creator = VectorstoreIndexCreator(\n", " vectorstore_cls=FAISS,\n", " embedding=embeddings,\n", " text_splitter=CharacterTextSplitter(chunk_size=300, chunk_overlap=0),\n", ")" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "index = index_creator.from_loaders([loader])" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Amazon EC2 Spot instances'" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index.query(question=question, llm=sm_llm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5. Customize the QA application above with a different prompt\n", "\n", "We have now seen how simple it is to use LangChain to build a question answering application with just a few lines of code. Let's break down the `VectorstoreIndexCreator` above and see what's happening under the hood. Furthermore, we will see how to incorporate a customized prompt rather than the default prompt used by `VectorstoreIndexCreator`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we **generate embeddings for each document in the knowledge library with the SageMaker GPT-J-6B embedding model.**" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "docsearch = FAISS.from_documents(documents, embeddings)" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Which instances can I use with Managed Spot Training in SageMaker?'" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "question" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Based on the question above, we then **identify the top K most relevant documents based on the user query, where K = 3 in this setup**." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "docs = docsearch.similarity_search(question, k=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Print out the top 3 most relevant documents as below." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Amazon SageMaker is a fully managed service to prepare data and build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.: Once a Managed Spot Training job is completed, you can see the savings in the AWS Management Console and also calculate the cost savings as the percentage difference between the duration for which the training job ran and the duration for which you were billed. Regardless of how many times your Managed Spot Training jobs are interrupted, you are charged only once for the duration for which the data was downloaded.', metadata={'source': 'rag_data/processed_data.csv', 'row': 88}),\n", " Document(page_content='Amazon SageMaker is a fully managed service to prepare data and build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.: Managed Spot Training uses Amazon EC2 Spot instances for training, and these instances can be pre-empted when AWS needs capacity. As a result, Managed Spot Training jobs can run in small increments as and when capacity becomes available. The training jobs need not be restarted from scratch when there is an interruption, as Amazon SageMaker can resume the training jobs using the latest model checkpoint. The built-in frameworks and the built-in computer vision algorithms with SageMaker enable periodic checkpoints, and you can enable checkpoints with custom models.', metadata={'source': 'rag_data/processed_data.csv', 'row': 86}),\n", " Document(page_content='Amazon SageMaker is a fully managed service to prepare data and build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.: If you have a consistent amount of Amazon SageMaker instance usage (measured in $/hour) and use multiple SageMaker components or expect your technology configuration (such as instance family, or Region) to change over time, SageMaker Savings Plans make it simpler to maximize your savings while providing flexibility to change the underlying technology configuration based on application needs or new innovation. The Savings Plans rate applies automatically to all eligible ML instance usage with no manual modifications required.', metadata={'source': 'rag_data/processed_data.csv', 'row': 149})]" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "docs" ] }
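, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall from Step 4 that the retrieved documents must fit into the LLM prompt (maximum sequence length of 1024 tokens). As a rough sanity check (a minimal sketch; a whitespace word count only approximates the model's actual tokenizer), we can estimate the size of the retrieved context." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Approximate the size of the retrieved context with a simple word count.\n", "# This is only a rough proxy for the number of tokens the model will see.\n", "total_words = sum(len(doc.page_content.split()) for doc in docs)\n", "print(f\"Approximate word count across the {len(docs)} retrieved documents: {total_words}\")" ] }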
, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we **combine the retrieved documents with the prompt and question and send them into the SageMaker LLM.** \n", "\n", "We define a customized prompt as below." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "prompt_template = \"\"\"Answer based on context:\\n\\n{context}\\n\\n{question}\"\"\"\n", "\n", "PROMPT = PromptTemplate(template=prompt_template, input_variables=[\"context\", \"question\"])" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "chain = load_qa_chain(llm=sm_llm, prompt=PROMPT)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Send the top 3 most relevant documents and the question into the LLM to get an answer." ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "result = chain({\"input_documents\": docs, \"question\": question}, return_only_outputs=True)[\n", "    \"output_text\"\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Print the final answer from the LLM as below, which is accurate." ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Amazon EC2 Spot instances'" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This us-east-2 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|jumpstart-foundation-models|question_answerIng_retrieval_augmented_generation_jumpstart|question_answerIng_langchain_jumpstart.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { 
"_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", 
"gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": 
"ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.m5.2xlarge", "kernelspec": { "display_name": "Python 3 (PyTorch 1.13 Python 3.9 CPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/pytorch-1.13-cpu-py39" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 4 }