{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Document Extraction\n", "\n", "In this lab we will look at a method of how to extract table information out of the documents.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "- [Step 1: Setup notebook](#step1)\n", "- [Step 2: Extract unstructured data with Amazon Textract](#step2)\n", "- [Step 3: Extract table data using Amazon Textract](#step3)\n", "- [Step 4: Extract forms (key/value) data using Amazon Textract](#step4)\n", "- [Step 5: Query based extraction using Amazon Textract](#step5)\n", "- [Step 6: Signature detection with Amazon Textract](#step6)\n", "- [Step 7: Extracting invoices/receipts with Amazon Textract](#step7)\n", "- [Step 8: Extracting identity documents with Amazon Textract](#step8)\n", "- [Cleanup](#cleanup)\n", "- [Conclusion](#conclusion)\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Step 1: Setup notebook \n", "\n", "In this step, we will import some necessary libraries that will be used throughout this notebook. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!python -m pip install -q amazon-textract-response-parser --upgrade\n", "!python -m pip install -q amazon-textract-caller --upgrade\n", "!python -m pip install -q amazon-textract-prettyprinter==0.0.16\n", "!python -m pip install -q amazon-textract-textractor --upgrade" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Restart the kernel\n", "import IPython\n", "IPython.Application.instance().kernel.do_shutdown(True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import boto3\n", "import botocore\n", "import sagemaker\n", "import pandas as pd\n", "from IPython.display import Image, display, JSON\n", "from textractcaller.t_call import call_textract, Textract_Features, call_textract_expense\n", "from textractprettyprinter.t_pretty_print import convert_table_to_list\n", "from trp import Document\n", "import os\n", "\n", "# variables\n", "data_bucket = sagemaker.Session().default_bucket()\n", "region = boto3.session.Session().region_name\n", "account_id = boto3.client('sts').get_caller_identity().get('Account')\n", "\n", "os.environ[\"BUCKET\"] = data_bucket\n", "os.environ[\"REGION\"] = region\n", "role = sagemaker.get_execution_role()\n", "\n", "print(f\"SageMaker role is: {role}\\nDefault SageMaker Bucket: s3://{data_bucket}\")\n", "\n", "s3=boto3.client('s3')\n", "textract = boto3.client('textract', region_name=region)\n", "comprehend=boto3.client('comprehend', region_name=region)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's select a bank statement we classified in the previous exercise" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import random\n", "prefix = 'idp/comprehend/classified-docs/bank-statements'\n", "start_after = 'idp/comprehend/classified-docs/bank-statements/'\n", "\n", "paginator = s3.get_paginator('list_objects_v2')\n", "operation_parameters = {'Bucket': data_bucket,\n", " 'Prefix': prefix,\n", " 'StartAfter':start_after}\n", "list_items=[]\n", "page_iterator = paginator.paginate(**operation_parameters)\n", "\n", "for page in page_iterator:\n", " if \"Contents\" in page:\n", " for item in page['Contents']:\n", " list_items.append(f's3://{data_bucket}/{item[\"Key\"]}')\n", " else:\n", " 
"        list_items.append('./samples/mixedbag/document_0.png')\n", "\n", "file = random.sample(list_items, k=1)[0]  # select a random bank statement document from the list\n", "\n", "if \"s3://\" in file:\n", "    file_key = file.replace(f\"s3://{data_bucket}/\", \"\")\n", "else:\n", "    print(f\"S3 file not found, using local file {file}\\n\")\n", "    file_key = f\"idp/textract/sample/{os.path.basename(file)}\"\n", "    !aws s3 cp {file} s3://{data_bucket}/{file_key} --only-show-errors\n", "\n", "display(Image(url=s3.generate_presigned_url('get_object', Params={'Bucket': data_bucket, 'Key': file_key}), width=600))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Step 2: Extract unstructured data with Amazon Textract \n", "\n", "Amazon Textract is an ML-powered OCR service that can detect and extract text from documents. Text data in the form of WORDS and LINES can be extracted with the Amazon Textract `DetectDocumentText` API. Let's extract the words and lines from the bank statement." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# Call Amazon Textract\n", "response = textract.detect_document_text(\n", "    Document={\n", "        'S3Object': {\n", "            'Bucket': data_bucket,\n", "            'Name': file_key\n", "        }\n", "    })\n", "\n", "# Print detected text\n", "for item in response[\"Blocks\"]:\n", "    if item[\"BlockType\"] == \"LINE\":\n", "        print(item[\"Text\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, we were able to extract the LINES and WORDS from the document, but we lost some of the document's structural formatting. For example, the document contains a few tables, and we would like to extract that information in a tabular structure. Let's do that next." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Step 3: Extract table data using Amazon Textract \n", "\n", "In this step we will take a brief look at how to extract table information from the bank statement. Our bank statement has two tables. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "response = textract.analyze_document(\n", "    Document={\n", "        'S3Object': {\n", "            'Bucket': data_bucket,\n", "            'Name': file_key\n", "        }\n", "    },\n", "    FeatureTypes=[\"TABLES\"])\n", "\n", "response" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the response from Amazon Textract is a large JSON object that contains a lot of information. Let's parse the table data out of this response using the textract response parser tool that we installed earlier. To learn how the Textract TABLES response is structured, refer to the [documentation](https://docs.aws.amazon.com/textract/latest/dg/how-it-works-tables.html)."
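] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before handing the response to a parser, it can help to see how tables are represented in the raw JSON. The short sketch below (it assumes the `response` object returned by the `AnalyzeDocument` call above) simply tallies the block types and counts each table's children, which are linked to the `TABLE` block through `CHILD` relationships." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Tally block types in the raw response: TABLE and CELL blocks sit alongside\n", "# the PAGE, LINE and WORD blocks produced for the plain text.\n", "from collections import Counter\n", "\n", "print(Counter(block[\"BlockType\"] for block in response[\"Blocks\"]))\n", "\n", "# Each TABLE block references its child blocks (mostly CELLs) via a CHILD relationship\n", "for block in response[\"Blocks\"]:\n", "    if block[\"BlockType\"] == \"TABLE\":\n", "        child_ids = [i for rel in block.get(\"Relationships\", [])\n", "                     if rel[\"Type\"] == \"CHILD\" for i in rel[\"Ids\"]]\n", "        print(f\"Table {block['Id']} has {len(child_ids)} child blocks\")"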
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "#print(response)\n", "doc = Document(response)\n", "for page in doc.pages:\n", " # Print tables\n", " for table in page.tables:\n", " for r, row in enumerate(table.rows):\n", " for c, cell in enumerate(row.cells):\n", " print(\"Table[{}][{}] = {}\".format(r, c, cell.text))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the code cells above, we used the Textract `AnalyzeDocument` API to extract info from the document and subsequently used textract response parser `Document` to parse out the tables from the JSON response. We can further use additional tooling to call the Textract API and use textract pretty printer tool to view the tables in a slightly more human readable way. We will see how to extract the tables using the Textract pretty printer tool. We will also use `call_textract` method from the Textract Caller tool that we installed earlier. These set of tools make it easy for us to make Textract API calls and parse it's JSON output. In our subsequent sections, we will make use of these tools to make API calls and subsequently to parse the JSON response." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "resp = call_textract(input_document=file, features=[Textract_Features.TABLES])\n", "tdoc = Document(resp)\n", "dfs = list()\n", "\n", "for page in tdoc.pages:\n", " for table in page.tables:\n", " tab_list = convert_table_to_list(trp_table=table)\n", " print(tab_list)\n", " dfs.append(pd.DataFrame(tab_list))\n", "\n", "df1 = dfs[0]\n", "df2 = dfs[1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the code cell above, we extracted the tables as a Python List and then converted them to Pandas DataFrame. You can also extract tables in other formats such as CSV, TSV etc. Refer to the [PrettyPrinter](https://github.com/aws-samples/amazon-textract-textractor/tree/master/prettyprinter) documentation for more. Now let's look at the DataFrames." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "df1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "df2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Step 4: Extract forms (key/value) data using Amazon Textract \n", "\n", "Let's look at how Amazon Textract can be used to extract form data from the document. In this example, we will use a sample Employment Verification form." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "display(Image(url=\"./samples/textract/Employment_Verification.png\", width=600))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In our previous example, our document was in S3 and we called Amazon Textract by specifying the S3 location of the document. In this case our document is present locally, we can either upload this document into S3, or we can use the document's Byte Array from our local environment to call the API. Let's use the document Byte Array for this example. Note that this method only applies to Textract Sync (real-time) APIs, since the async APIs only support documents placed in S#. In the code cell below, we first convert our document to a Byte array, and then call the `AnalyzeDocument` API with `FORMS` feature. 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# Read document content\n", "documentName = \"./samples/textract/Employment_Verification.png\"\n", "with open(documentName, 'rb') as document:\n", "    imageBytes = bytearray(document.read())\n", "\n", "# Call Amazon Textract\n", "response = call_textract(input_document=imageBytes, features=[Textract_Features.FORMS])\n", "\n", "doc = Document(response)\n", "\n", "for page in doc.pages:\n", "    # Print all fields\n", "    print(\"Fields:\")\n", "    for field in page.form.fields:\n", "        print(\"Key: {}, Value: {}\".format(field.key, field.value))\n", "\n", "    # Get a field by exact key\n", "    print(\"\\nGet Field by Key (Base Pay):\")\n", "    key = \"Base Pay\"\n", "    field = page.form.getFieldByKey(key)\n", "    if field:\n", "        print(\"Key: {}, Value: {}\".format(field.key, field.value))\n", "\n", "    # Search fields by key substring\n", "    print(\"\\nSearch Fields (address):\")\n", "    key = \"address\"\n", "    fields = page.form.searchFieldsByKey(key)\n", "    for field in fields:\n", "        print(\"Key: {}, Value: {}\".format(field.key, field.value))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Step 5: Query based extraction using Amazon Textract \n", "\n", "When processing a document with Amazon Textract, you can add queries to your analysis to specify exactly what information you need. This involves passing a question, such as \"What is the customer's social security number?\", to Amazon Textract. Amazon Textract then finds the answer to that question in the document and returns it in a response structure separate from the rest of the document's information. Queries can be processed alone, or in combination with any other FeatureType, such as TABLES or FORMS. Queries are a powerful tool when only a few pieces of critical information are needed from a document. There are limits to how many queries you can pass per request; refer to [Quotas in Amazon Textract](https://docs.aws.amazon.com/textract/latest/dg/limits-document.html) for more information.\n", "\n", "Let's pass a couple of queries to extract information from our Employment Verification form."
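] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `call_textract` helper used in the next cell builds this request for us, but it is worth seeing the shape of the underlying `AnalyzeDocument` call. The sketch below is only for reference: it reuses the `imageBytes` of the Employment Verification form read in the previous code cell, and the question it asks (\"What is the applicant's name?\") is purely illustrative." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Sketch: the raw boto3 form of a Queries request. The question and alias below\n", "# are illustrative placeholders -- adapt them to your own document.\n", "sample_query_resp = textract.analyze_document(\n", "    Document={'Bytes': imageBytes},\n", "    FeatureTypes=['QUERIES'],\n", "    QueriesConfig={'Queries': [\n", "        {'Text': \"What is the applicant's name?\", 'Alias': 'APPLICANT_NAME', 'Pages': ['1']}\n", "    ]})\n", "\n", "# Answers come back as QUERY_RESULT blocks, separate from the rest of the text\n", "for block in sample_query_resp['Blocks']:\n", "    if block['BlockType'] == 'QUERY_RESULT':\n", "        print(block['Text'], '-', round(block['Confidence'], 2))"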
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "from textractcaller import QueriesConfig, Query\n", "import trp.trp2 as t2 \n", "\n", "# Setup the queries\n", "query1 = Query(text=\"Who is the applicant's date of employmet?\" , alias=\"EMPLOYMENT_DATE\", pages=[\"1\"])\n", "query2 = Query(text=\"What is the probability of continued emplyment?\", alias=\"CONTINUED_EMPLYMT_PROB\", pages=[\"1\"])\n", "\n", "#Setup the query config with the above queries\n", "queries_config = QueriesConfig(queries=[query1, query2])\n", "\n", "documentName=\"./samples/textract/Employment_Verification.png\"\n", "with open(documentName, 'rb') as document:\n", " imageBytes = bytearray(document.read())\n", "\n", "response = call_textract(input_document=imageBytes,\n", " features=[Textract_Features.QUERIES],\n", " queries_config=queries_config)\n", "doc_ev = Document(response)\n", "\n", "doc_ev: t2.TDocumentSchema = t2.TDocumentSchema().load(response)\n", "\n", "entities = {}\n", "for page in doc_ev.pages:\n", " query_answers = doc_ev.get_query_answers(page=page)\n", " if query_answers:\n", " for answer in query_answers:\n", " entities[answer[1]] = answer[2]\n", " \n", "display(JSON(entities, root='Query Answers'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Step 6: Signature detection with Amazon Textract \n", "\n", "Amazon Textract can detect the presence of signatures in documents. The AnalyzeDocument API has the following four feature types – Forms, Tables, Queries, and Signatures. The Signatures feature can be used by itself or in combination with other feature types. When used by itself, Signatures feature type provides a json response that includes a) location and confidence scores of the detected signatures and b) raw text (words and lines) from the documents. If the Signatures feature is used along with Forms feature that extracts key value pairs in a form, the detected signature will be associated as a value to the relevant key. Similarly, when used along with Tables feature type, the detected signature will be associated to a cell within the table.\n", "\n", "Let's try to detect the signatures in our Employment Verification form." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Read document content\n", "from textractor.parsers import response_parser\n", "\n", "documentName=\"./samples/textract/Employment_Verification.png\"\n", "with open(documentName, 'rb') as document:\n", " imageBytes = bytearray(document.read())\n", "\n", "# Call Amazon Textract\n", "response = call_textract(input_document=imageBytes,\n", " features=[Textract_Features.SIGNATURES])\n", "tdoc = response_parser.parse(response)\n", "\n", "for signature in tdoc.signatures:\n", " print(signature.bbox)\n", " print(f\"Confidence: {signature.confidence}\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Textract has detected three signatures in the document along with their bounding box information along with the confidence scores." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "---\n", "# Step 7: Extracting invoices/receipts with Amazon Textract \n", "\n", "Let's now look at the `AnalyzeExpense` API to extract information from an invoice document." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "documentName = \"./samples/textract/invoice.png\"\n", "display(Image(filename=documentName, width=600)) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is important to note that textract provides the ability to seperately extract the \"line items\" in the invoice and the \"Summary\" of the invoice." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "with open(documentName, 'rb') as document:\n", " imageBytes = bytearray(document.read())\n", " \n", "# expense_resp = call_textract_expense(input_document=imageBytes) \n", "expense_resp = textract.analyze_expense(Document={'Bytes': imageBytes}) " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "summary_entities_values = []\n", "summary_fields = []\n", "expense_item = []\n", "\n", "for expense_doc in expense_resp[\"ExpenseDocuments\"]:\n", " for field in expense_doc[\"SummaryFields\"]:\n", " kvs = {}\n", " if \"LabelDetection\" in field:\n", " if \"ValueDetection\" in field:\n", " kvs[field[\"LabelDetection\"][\"Text\"]] = field[\"ValueDetection\"][\"Text\"]\n", " else:\n", " kvs[field[\"Type\"][\"Text\"]] = field[\"ValueDetection\"][\"Text\"]\n", " summary_entities_values.append(kvs.copy())\n", " kvs = None\n", "\n", " for line_item_group in expense_doc[\"LineItemGroups\"]:\n", " for line_items in line_item_group[\"LineItems\"]:\n", " for field in line_items[\"LineItemExpenseFields\"]:\n", " kvs = {}\n", " if \"LabelDetection\" in field:\n", " if \"ValueDetection\" in field:\n", " kvs[field[\"LabelDetection\"][\"Text\"]] = field[\"ValueDetection\"][\"Text\"]\n", " else:\n", " kvs[field[\"Type\"][\"Text\"]] = field[\"ValueDetection\"][\"Text\"]\n", " expense_item.append(kvs.copy())\n", " kvs = None\n", "print(\"Invoice Summary:\")\n", "print(\"==========================================\")\n", "print(*summary_entities_values, sep='\\n')\n", "print(\"\\nInvoice Line Items:\")\n", "print(\"==========================================\")\n", "print(*expense_item, sep='\\n')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Step 8: Extracting identity documents with Amazon Textract \n", " \n", "To see how extraction of identity documents works with Amazon Textract we will use a sample Passport document. Passport is a special document, i.e. an Identity document. To extract infromation from US passports and driver's license, Amazon Textract's AnalyzeID API can be used." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "documentName = \"./samples/textract/Passport.png\"\n", "\n", "display(Image(url=documentName, width=500));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use the call_textract_analyzeid tool from the amazon-textract-textractor library." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "from textractcaller import call_textract_analyzeid\n", "import trp.trp2_analyzeid as t2id\n", "\n", "with open(documentName, 'rb') as document:\n", " imageBytes = bytearray(document.read())\n", "\n", "response_passport = call_textract_analyzeid(document_pages=[imageBytes])\n", "doc_passport: t2id.TAnalyzeIdDocument = t2id.TAnalyzeIdDocumentSchema().load(response_passport) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that in the call to `call_textract_analyzeid` you can also pass an S3 path to the parameter `document_pages` as\n", "\n", "```\n", "call_textract_analyzeid(document_pages=[\"s3://bucket/prefix/doc.png\"])\n", "```\n", "\n", "Let's look at the extracted information from the Passport document. Notice that the Keys are normalized, this means it makes it easy to parse out the required information from the response JSON from Textract." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ " for id_docs in response_passport['IdentityDocuments']:\n", " id_doc_kvs={}\n", " for field in id_docs['IdentityDocumentFields']:\n", " id_doc_kvs[field['Type']['Text']] = field['ValueDetection']['Text']\n", "\n", "display(JSON(id_doc_kvs, root='ID Document Key-values', expanded=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Cleanup \n", "\n", "Cleanup is optional if you want to execute subsequent notebooks. \n", "\n", "Refer to the `05-idp-cleanup.ipynb` for cleanup and deletion of resources." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "# Conclusion \n", "\n", "In this notebook we did a table extraction from a bank statement and further looked on a few additional ways Amazon Textract can help extract specific structured and semi-structured data such as forms data from our documents. In the next notebook we will extract entity information from our documents using Amazon Comprehend." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "You can further explore all Amazon Textract capabilities by cloning the entire code repository using the `git clone` command below.\n", "\n", "`git clone https://github.com/aws-samples/amazon-textract-code-samples`" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-2:429704687514:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }