{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Retrieval Augmented Question & Answering with Amazon Bedrock using LangChain\n", "\n", "### Context\n", "Previously we saw that the model told us how to to change the tire, however we had to manually provide it with the relevant data and provide the contex ourselves. We explored the approach to leverage the model availabe under Bedrock and ask questions based on it's knowledge learned during training as well as providing manual context. While that approach works with short documents or single-ton applications, it fails to scale to enterprise level question answering where there could be large enterprise documents which cannot all be fit into the prompt sent to the model. \n", "\n", "### Pattern\n", "We can improve upon this process by implementing an architecure called Retreival Augmented Generation (RAG). RAG retrieves data from outside the language model (non-parametric) and augments the prompts by adding the relevant retrieved data in context. \n", "\n", "In this notebook we explain how to approach the pattern of Question Answering to find and leverage the documents to provide answers to the user questions.\n", "\n", "### Challenges\n", "- How to manage large document(s) that exceed the token limit\n", "- How to find the document(s) relevant to the question being asked\n", "\n", "### Proposal\n", "To the above challenges, this notebook proposes the following strategy\n", "#### Prepare documents\n", "![Embeddings](./images/Embeddings_lang.png)\n", "\n", "Before being able to answer the questions, the documents must be processed and a stored in a document store index\n", "- Load the documents\n", "- Process and split them into smaller chunks\n", "- Create a numerical vector representation of each chunk using Amazon Bedrock Titan Embeddings model\n", "- Create an index using the chunks and the corresponding embeddings\n", "#### Ask question\n", "![Question](./images/Chatbot_lang.png)\n", "\n", "When the documents index is prepared, you are ready to ask the questions and relevant documents will be fetched based on the question being asked. Following steps will be executed.\n", "- Create an embedding of the input question\n", "- Compare the question embedding with the embeddings in the index\n", "- Fetch the (top N) relevant document chunks\n", "- Add those chunks as part of the context in the prompt\n", "- Send the prompt to the model under Amazon Bedrock\n", "- Get the contextual answer based on the documents retrieved" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Usecase\n", "#### Dataset\n", "To explain this architecture pattern we are using the documents from IRS. These documents explain topics such as:\n", "- Original Issue Discount (OID) Instruments\n", "- Reporting Cash Payments of Over $10,000 to IRS\n", "- Employer's Tax Guide\n", "\n", "#### Persona\n", "Let's assume a persona of a layman who doesn't have an understanding of how IRS works and if some actions have implications or not.\n", "\n", "The model will try to answer from the documents in easy language.\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Implementation\n", "In order to follow the RAG approach this notebook is using the LangChain framework where it has integrations with different services and tools that allow efficient building of patterns such as RAG. 
We will be using the following tools:\n", "\n", "- **LLM (Large Language Model)**: Anthropic Claude V1 available through Amazon Bedrock\n", "\n", " This model will be used to understand the document chunks and provide an answer in human friendly manner.\n", "- **Embeddings Model**: Amazon Titan Embeddings available through Amazon Bedrock\n", "\n", " This model will be used to generate a numerical representation of the textual documents\n", "- **Document Loader**: PDF Loader available through LangChain\n", "\n", " This is the loader that can load the documents from a source, for the sake of this notebook we are loading the sample files from a local path. This could easily be replaced with a loader to load documents from enterprise internal systems.\n", "\n", "- **Vector Store**: FAISS available through LangChain\n", "\n", " In this notebook we are using this in-memory vector-store to store both the embeddings and the documents. In an enterprise context this could be replaced with a persistent store such as AWS OpenSearch, RDS Postgres with pgVector, ChromaDB, Pinecone or Weaviate.\n", "- **Index**: VectorIndex\n", "\n", " The index helps to compare the input embedding and the document embeddings to find relevant document\n", "- **Wrapper**: wraps index, vector store, embeddings model and the LLM to abstract away the logic from the user.\n", "\n", "### Setup\n", "To run this notebook you would need to install 2 more dependencies, [PyPDF](https://pypi.org/project/pypdf/) and [FAISS vector store](https://github.com/facebookresearch/faiss).\n", "\n", "\n", "\n", "Then begin with instantiating the LLM and the Embeddings model. Here we are using Anthropic Claude to demonstrate the use case.\n", "\n", "Note: It is possible to choose other models available with Bedrock. You can replace the `model_id` as follows to change the model.\n", "\n", "`llm = Bedrock(model_id=\"amazon.titan-tg1-large\")`\n", "\n", "Available models under Bedrock have the following IDs:\n", "- `amazon.titan-tg1-large`\n", "- `ai21.j2-grande-instruct`\n", "- `ai21.j2-jumbo-instruct`\n", "- `anthropic.claude-instant-v1`\n", "- `anthropic.claude-v1`" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### ⚠️⚠️⚠️ Execute the following cells before running this notebook ⚠️⚠️⚠️\n", "\n", "For a detailed description on what the following cells do refer to [Bedrock boto3 setup](../00_Intro/bedrock_boto3_setup.ipynb) notebook." 
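, "\n", "\n",
"Once the setup cells below have run and the `boto3_bedrock` client exists, you can optionally list the foundation models visible to your account to confirm the model IDs mentioned above. This is a small, optional sanity check; it assumes the Bedrock boto3 client used here exposes a `list_foundation_models()` call, as shown in the setup notebook:\n", "\n",
"```python\n",
"# Optional sanity check (run only after the setup cells below have created boto3_bedrock):\n",
"# list the foundation models this account can access and print their IDs.\n",
"models = boto3_bedrock.list_foundation_models()\n",
"# If the response shape differs in your SDK version, inspect `models` directly.\n",
"print([m['modelId'] for m in models['modelSummaries']])\n",
"```"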
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Make sure you run `download-dependencies.sh` from the root of the repository to download the dependencies before running this cell\n", "%pip install ../dependencies/botocore-1.29.162-py3-none-any.whl ../dependencies/boto3-1.26.162-py3-none-any.whl ../dependencies/awscli-1.27.162-py3-none-any.whl --force-reinstall\n", "%pip install langchain==0.0.190 --quiet\n", "%pip install pypdf==3.8.1 faiss-cpu==1.7.4 --quiet" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#### Un comment the following lines to run from your local environment outside of the AWS account with Bedrock access\n", "\n", "#import os\n", "#os.environ['BEDROCK_ASSUME_ROLE'] = ''\n", "#os.environ['AWS_PROFILE'] = ''" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import json\n", "import os\n", "import sys\n", "\n", "module_path = \"..\"\n", "sys.path.append(os.path.abspath(module_path))\n", "from utils import bedrock, print_ww\n", "\n", "os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'\n", "boto3_bedrock = bedrock.get_bedrock_client(os.environ.get('BEDROCK_ASSUME_ROLE', None))" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Setup langchain\n", "\n", "We create an instance of the Bedrock classes for the LLM and the embedding models. At the time of writing, Bedrock supports one embedding model and therefore we do not need to specify any model id." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We will be using the Titan Embeddings Model to generate our Embeddings.\n", "from langchain.embeddings import BedrockEmbeddings\n", "from langchain.llms.bedrock import Bedrock\n", "\n", "# - create the Anthropic Model\n", "llm = Bedrock(model_id=\"anthropic.claude-v1\", client=boto3_bedrock, model_kwargs={'max_tokens_to_sample':200})\n", "bedrock_embeddings = BedrockEmbeddings(client=boto3_bedrock)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Data Preparation\n", "Let's first download some of the files to build our document store. For this example we will be using public IRS documents from [here](https://www.irs.gov/publications)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from urllib.request import urlretrieve\n", "files = [\n", " 'https://www.irs.gov/pub/irs-pdf/p1544.pdf',\n", " 'https://www.irs.gov/pub/irs-pdf/p15.pdf',\n", " 'https://www.irs.gov/pub/irs-pdf/p1212.pdf'\n", "]\n", "for url in files:\n", " file_path = './data/' + url.split('/')[-1]\n", " urlretrieve(url, file_path)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "After downloading we can load the documents with the help of [DirectoryLoader from PyPDF available under LangChain](https://python.langchain.com/en/latest/reference/modules/document_loaders.html) and splitting them into smaller chunks.\n", "\n", "Note: The retrieved document/text should be large enough to contain enough information to answer a question; but small enough to fit into the LLM prompt. Also the embeddings model has a limit of the length of input tokens limited to 512 tokens, which roughly translates to ~2000 characters. 
For the sake of this use-case we are creating chunks of roughly 1000 characters with an overlap of 100 characters using [RecursiveCharacterTextSplitter](https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter\n", "from langchain.document_loaders import PyPDFLoader, PyPDFDirectoryLoader\n", "\n", "loader = PyPDFDirectoryLoader(\"./data/\")\n", "\n", "documents = loader.load()\n", "# - in our testing Character split works better with this PDF data set\n", "text_splitter = RecursiveCharacterTextSplitter(\n", " # Set a really small chunk size, just to show.\n", " chunk_size = 1000,\n", " chunk_overlap = 100,\n", ")\n", "docs = text_splitter.split_documents(documents)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "avg_doc_length = lambda documents: sum([len(doc.page_content) for doc in documents])//len(documents)\n", "avg_char_count_pre = avg_doc_length(documents)\n", "avg_char_count_post = avg_doc_length(docs)\n", "print(f'Average length among {len(documents)} documents loaded is {avg_char_count_pre} characters.')\n", "print(f'After the split we have {len(docs)} documents more than the original {len(documents)}.')\n", "print(f'Average length among {len(docs)} documents (after split) is {avg_char_count_post} characters.')" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We had 3 PDF documents which have been split into smaller ~500 chunks.\n", "\n", "Now we can see how a sample embedding would look like for one of those chunks" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sample_embedding = np.array(bedrock_embeddings.embed_query(docs[0].page_content))\n", "print(\"Sample embedding of a document chunk: \", sample_embedding)\n", "print(\"Size of the embedding: \", sample_embedding.shape)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Following the similar pattern embeddings could be generated for the entire corpus and stored in a vector store.\n", "\n", "This can be easily done using [FAISS](https://github.com/facebookresearch/faiss) implementation inside [LangChain](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html) which takes input the embeddings model and the documents to create the entire vector store. Using the Index Wrapper we can abstract away most of the heavy lifting such as creating the prompt, getting embeddings of the query, sampling the relevant documents and calling the LLM. 
[VectorStoreIndexWrapper](https://python.langchain.com/en/latest/modules/indexes/getting_started.html#one-line-index-creation) helps us with that.\n", "\n", "**⚠️⚠️⚠️ NOTE: it might take few minutes to run the following cell ⚠️⚠️⚠️**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from langchain.chains.question_answering import load_qa_chain\n", "from langchain.vectorstores import FAISS\n", "from langchain.indexes import VectorstoreIndexCreator\n", "from langchain.indexes.vectorstore import VectorStoreIndexWrapper\n", "\n", "vectorstore_faiss = FAISS.from_documents(\n", " docs,\n", " bedrock_embeddings,\n", ")\n", "\n", "wrapper_store_faiss = VectorStoreIndexWrapper(vectorstore=vectorstore_faiss)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Question Answering\n", "\n", "Now that we have our vector store in place, we can start asking questions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "query = \"Is it possible that I get sentenced to jail due to failure in filings?\"" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The first step would be to create an embedding of the query such that it could be compared with the documents" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "query_embedding = vectorstore_faiss.embedding_function(query)\n", "np.array(query_embedding)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can use this embedding of the query to then fetch relevant documents.\n", "Now our query is represented as embeddings we can do a similarity search of our query against our data store providing us with the most relevant information." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "relevant_documents = vectorstore_faiss.similarity_search_by_vector(query_embedding)\n", "print(f'{len(relevant_documents)} documents are fetched which are relevant to the query.')\n", "print('----')\n", "for i, rel_doc in enumerate(relevant_documents):\n", " print_ww(f'## Document {i+1}: {rel_doc.page_content}.......')\n", " print('---')" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now we have the relevant documents, it's time to use the LLM to generate an answer based on these documents. \n", "\n", "We will take our inital prompt, together with our relevant documents which were retreived based on the results of our similarity search. We then by combining these create a prompt that we feed back to the model to get our result. At this point our model should give us highly informed information on how we can change the tire of our specific car as it was outlined in our manual.\n", "\n", "LangChain provides an abstraction of how this can be done easily." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Quick way\n", "You have the possibility to use the wrapper provided by LangChain which wraps around the Vector Store and takes input the LLM.\n", "This wrapper performs the following steps behind the scences:\n", "- Takes input the question\n", "- Create question embedding\n", "- Fetch relevant documents\n", "- Stuff the documents and the question into a prompt\n", "- Invoke the model with the prompt and generate the answer in a human readable manner." 
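, "\n", "\n",
"To make this \"quick way\" less opaque, the sketch below shows roughly what the wrapper does under the hood: it wires the vector store's retriever and the LLM into a `RetrievalQA` chain, the same building block used in the customisable option later in this notebook. This is an illustrative sketch of an equivalent chain rather than the wrapper's exact source code, and details may vary between LangChain versions:\n", "\n",
"```python\n",
"from langchain.chains import RetrievalQA\n",
"\n",
"# Roughly equivalent to wrapper_store_faiss.query(question=query, llm=llm):\n",
"# retrieve relevant chunks via the FAISS retriever, \"stuff\" them into a prompt, and call the LLM.\n",
"qa_sketch = RetrievalQA.from_chain_type(\n",
"    llm=llm,\n",
"    chain_type=\"stuff\",\n",
"    retriever=vectorstore_faiss.as_retriever(),\n",
")\n",
"# answer = qa_sketch.run(query)\n",
"```"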
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer = wrapper_store_faiss.query(question=query, llm=llm)\n", "print_ww(answer)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Let's ask a different question:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "query_2 = \"What is the difference between market discount and qualified stated interest\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "answer_2 = wrapper_store_faiss.query(question=query_2, llm=llm)\n", "print_ww(answer_2)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Customisable option\n", "In the above scenario you explored the quick and easy way to get a context-aware answer to your question. Now let's have a look at a more customizable option with the helpf of [RetrievalQA](https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html) where you can customize how the documents fetched should be added to prompt using `chain_type` parameter. Also, if you want to control how many relevant documents should be retrieved then change the `k` parameter in the cell below to see different outputs. In many scenarios you might want to know which were the source documents that the LLM used to generate the answer, you can get those documents in the output using `return_source_documents` which returns the documents that are added to the context of the LLM prompt. `RetrievalQA` also allows you to provide a custom [prompt template](https://python.langchain.com/en/latest/modules/prompts/prompt_templates/getting_started.html) which can be specific to the model.\n", "\n", "Note: In this example we are using Anthropic Claude as the LLM under Amazon Bedrock, this particular model performs best if the inputs are provided under `Human:` and the model is requested to generate an output after `Assistant:`. In the cell below you see an example of how to control the prompt such that the LLM stays grounded and doesn't answer outside the context." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\n", "from langchain.chains import RetrievalQA\n", "from langchain.prompts import PromptTemplate\n", "\n", "prompt_template = \"\"\"Human: Use the following pieces of context to provide a concise answer to the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n", "\n", "{context}\n", "\n", "Question: {question}\n", "Assistant:\"\"\"\n", "PROMPT = PromptTemplate(\n", " template=prompt_template, input_variables=[\"context\", \"question\"]\n", ")\n", "\n", "qa = RetrievalQA.from_chain_type(\n", " llm=llm,\n", " chain_type=\"stuff\",\n", " retriever=vectorstore_faiss.as_retriever(\n", " search_type=\"similarity\", search_kwargs={\"k\": 3}\n", " ),\n", " return_source_documents=True,\n", " chain_type_kwargs={\"prompt\": PROMPT}\n", ")\n", "query = \"Is it possible that I get sentenced to jail due to failure in filings?\"\n", "result = qa({\"query\": query})\n", "print_ww(result['result'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "result['source_documents']" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusion\n", "Congratulations on completing this moduel on retrieval augmented generation! 
This is an important technique that combines the power of large language models with the precision of retrieval methods. By augmenting generation with relevant retrieved examples, the responses we recieved become more coherent, consistent and grounded. You should feel proud of learning this innovative approach. I'm sure the knowledge you've gained will be very useful for building creative and engaging language generation systems. Well done!\n", "\n", "In the above implementation of RAG based Question Answering we have explored the following concepts and how to implement them using Amazon Bedrock and it's LangChain integration.\n", "\n", "- Loading documents and generating embeddings to create a vector store\n", "- Retrieving documents to the question\n", "- Preparing a prompt which goes as input to the LLM\n", "- Present an answer in a human friendly manner\n", "\n", "### Take-aways\n", "- Experiment with different Vector Stores\n", "- Leverage various models available under Amazon Bedrock to see alternate outputs\n", "- Explore options such as persistent storage of embeddings and document chunks\n", "- Integration with enterprise data stores\n", "\n", "# Thank You" ] } ], "metadata": { "kernelspec": { "display_name": "bedrock", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }