{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Medical Document Processing with Amazon Textract, Amazon Comprehend, and Amazon Comprehend Medical\n", "\n", "In this notebook, we will walkthrough on how to build a data processing pipeline that will process electronic medical reports (EMR) in PDF format to extract relevant medical information by using the following AWS services:\n", "\n", "- [Textract](https://aws.amazon.com/textract/): To extract text from the PDF medical report\n", "- [Comprehend](https://aws.amazon.com/comprehend/): To process general language data from the output of Textract.\n", "- [Comprehend Medical](https://aws.amazon.com/comprehend/medical/): To process medical-domain information from the output of Textract.\n", "\n", "NOTE: This notebook requires that the SageMaker Execution Role has additional permission to call the Textract, Comprehend, and Comprehend Medical services. Please reach out to your system administrator if you are running this outside of an AWS-hosted workshop.\n", "\n", "## Contents\n", "\n", "1. [Objective](#1.-Objective)\n", "1. [Setup Environment](#2.-Setup-Environment)\n", "1. [Extract text from a medical PDF document with Amazon Textract](#3.-Extract-text-from-a-medical-PDF-document-with-Amazon-Textract)\n", "1. [Process general text information with Amazon Comprehend](#4.-Process-general-text-information-with-Amazon-Comprehend)\n", "1. [Process medical domain information using Amazon Comprehend Medical](#5.-Process-medical-domain-information-using-Amazon-Comprehend-Medical)\n", "1. [Clean up](#6.-Clean-up)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "# 1. Objective\n", "\n", "The objective of this section of the workshop is to learn how to use Amazon Textract and Comprehend Medical to extract the medical information from an electronic medical report in PDF format." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "# 2. Setup environment\n", "\n", "Before be begin, let us setup our environment. We will need the following:\n", "\n", "* Amazon Textract Results Parser `textract-trp` to process our Textract results.\n", "* Python libraries \n", "* Pre-processing functions that will help with processing and visualization of our results. For the purpose of this workshop, we have provided a pre-processing function library that can be found in [util/preprocess.py](./util/preprocess.py)\n", "\n", "Note: `textract-trp` will require Python 3.6 or newer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!pip install textract-trp" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import boto3\n", "import time\n", "import sagemaker\n", "import os\n", "import trp\n", "import pandas as pd\n", "\n", "bucket = sagemaker.Session().default_bucket()\n", "prefix = \"sagemaker/medical_notes\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "# 3. Extract text from a medical PDF document with Amazon Textract\n", "\n", "In this section we will be extracting the text from a medical report in PDF format using Textract. To facilitate this workshop, we have generated a [sample PDF medical report](./data/sample_report_1.pdf) using the [MTSample dataset](https://www.kaggle.com/tboyle10/medicaltranscriptions) from kaggle." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## About Textract\n", "Amazon Textract can detect lines of text and the words that make up a line of text. Textract can handle documents in either synchronous or asynchronous processing:\n", "+ [synchronous API](https://docs.aws.amazon.com/textract/latest/dg/sync.html): supports *The input document must be an image in `JPEG` or `PNG` format*. Single page document analysis can be performed using a Textract synchronous operation.\n", " 1. *`detect_document_text`*: detects text in the input document. \n", " 2. *`analyze_document`*: analyzes an input document for relationships between detected items.\n", "+ [asynchronous API](https://docs.aws.amazon.com/textract/latest/dg/async.html): *can analyze text in documents that are in `JPEG`, `PNG`, and `PDF` format. Multi page processing is an asynchronous operation. The documents are stored in an Amazon S3 bucket. Use DocumentLocation to specify the bucket name and file name of the document.*\n", " 1. for context analysis:\n", " 1. *`start_document_text_detection`*: starts the asynchronous detection of text in a document. \n", " 2. *`get_document_text_detection`*: gets the results for an Amazon Textract asynchronous operation that detects text in a document.\n", " 2. for relationships between detected items :\n", " 1. *`start_document_analysis`*: starts the asynchronous analysis of relationship in a document. \n", " 2. *`get_document_analysis`*: Gets the results for an Amazon Textract asynchronous operation that analyzes text in a document\n", " \n", "For detailed api, refer to documentation [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/textract.html#Textract.Client.analyze_document).\n", "\n", "In this demo, as the input is in pdf format and has multiple pages, we will be using the multi page textract operation, we will need to upload our sample medical record to an S3 bucket. Run the next cell to upload our sample medical report." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "fileName = \"sample_report_1.pdf\"\n", "fileUploadPath = os.path.join(\"./data\", fileName)\n", "textractObjectName = os.path.join(prefix, \"data\", fileName)\n", "\n", "# Upload medical report file\n", "boto3.Session().resource(\"s3\").Bucket(bucket).Object(textractObjectName).upload_file(\n", " fileUploadPath\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start text detection asynchonously in the pdf\n", "In the next step, we will start the asynchronous textract operation by calling the `start_document_analysis()` function. The function will kickoff an asynchronous job that will process our medical report file in the stipulated S3 bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "textract = boto3.client(\"textract\")\n", "response = textract.start_document_analysis(\n", " DocumentLocation={\"S3Object\": {\"Bucket\": bucket, \"Name\": textractObjectName}},\n", " FeatureTypes=[\n", " \"TABLES\",\n", " ],\n", ")\n", "\n", "textractJobId = response[\"JobId\"]\n", "print(\"job id is: \", textractJobId)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Monitor the job status\n", "\n", "As the job is kicked off in the background, we can monitor the progress of the job by calling the `get_document_analysis()` function and passing the job id of the job that we created. 
\n", "\n", "Run the next cell and wait for the Textract Job status to return a SUCCEEDED status.\n", "the outcome is in json format" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%time\n", "time.sleep(5)\n", "response = textract.get_document_analysis(JobId=textractJobId)\n", "status = response[\"JobStatus\"]\n", "\n", "while status == \"IN_PROGRESS\":\n", " time.sleep(5)\n", " response = textract.get_document_analysis(JobId=textractJobId)\n", " status = response[\"JobStatus\"]\n", " print(\"Textract Job status: {}\".format(status))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## View Textract results\n", "Now that we've successfully extracted the text from the medical report, let us extract the textract results and consolidate the text so that we can pass it to Comprehend Medical to start extract medical information from the report." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%time\n", "pages = []\n", "\n", "time.sleep(5)\n", "\n", "response = textract.get_document_analysis(JobId=textractJobId)\n", "\n", "pages.append(response)\n", "\n", "nextToken = None\n", "if \"NextToken\" in response:\n", " nextToken = response[\"NextToken\"]\n", "\n", "while nextToken:\n", " time.sleep(5)\n", "\n", " response = textract.get_document_analysis(JobId=textractJobId, NextToken=nextToken)\n", "\n", " pages.append(response)\n", " print(\"Resultset page recieved: {}\".format(len(pages)))\n", " nextToken = None\n", " if \"NextToken\" in response:\n", " nextToken = response[\"NextToken\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at the output from textract by using the trp library to extract and format the textract results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "doc = trp.Document(pages)\n", "print(\"Total length of document is\", len(doc.pages))\n", "idx = 1\n", "full_text = \"\"\n", "for page in doc.pages:\n", " print(f\"Results from page {idx}: \\n\", page.text)\n", " full_text += page.text\n", " idx = idx + 1" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "---\n", "\n", "# 4. Process general text information with Amazon Comprehend" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What is the dominant language?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import pprint\n", "\n", "comprehend_client = boto3.client(service_name=\"comprehend\", region_name=\"us-east-1\")\n", "\n", "response = comprehend_client.detect_dominant_language(Text=full_text).get(\n", " \"Languages\", []\n", ")\n", "\n", "for language in response:\n", " print(\n", " f\"Detected language is {language.get('LanguageCode', [])}, with a confidence score of {language.get('Score', [])}\"\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What are the named entities?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "response = comprehend_client.detect_entities(Text=full_text, LanguageCode=\"en\")\n", "\n", "entities_df = pd.DataFrame(\n", " [\n", " [\n", " entity[\"Text\"],\n", " entity[\"Type\"],\n", " entity[\"Score\"],\n", " entity[\"BeginOffset\"],\n", " entity[\"EndOffset\"],\n", " ]\n", " for entity in response[\"Entities\"]\n", " ],\n", " columns=[\"Text\", \"Type\", \"Score\", \"BeginOffset\", \"EndOffset\"],\n", ").sort_values(by=\"Score\", ascending=False)\n", "\n", "display(entities_df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What are the key phrases?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "response = comprehend_client.detect_key_phrases(Text=full_text, LanguageCode=\"en\")\n", "\n", "entities_df = pd.DataFrame(\n", " [\n", " [entity[\"Text\"], entity[\"Score\"], entity[\"BeginOffset\"], entity[\"EndOffset\"]]\n", " for entity in response[\"KeyPhrases\"]\n", " ],\n", " columns=[\"Text\", \"Score\", \"BeginOffset\", \"EndOffset\"],\n", ").sort_values(by=\"Score\", ascending=False)\n", "\n", "display(entities_df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Is there any personally-identifiable information?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "response = comprehend_client.detect_pii_entities(Text=full_text, LanguageCode=\"en\")\n", "\n", "entities_df = pd.DataFrame(\n", " [\n", " [entity[\"Type\"], entity[\"Score\"], entity[\"BeginOffset\"], entity[\"EndOffset\"]]\n", " for entity in response[\"Entities\"]\n", " ],\n", " columns=[\"Type\", \"Score\", \"BeginOffset\", \"EndOffset\"],\n", ").sort_values(by=\"Score\", ascending=False)\n", "\n", "display(entities_df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What is the overall sentiment?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = comprehend_client.detect_sentiment(Text=full_text[:5000], LanguageCode=\"en\")\n", "print(response.get(\"Sentiment\", []))\n", "pprint.pprint(response.get(\"SentimentScore\", []))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What are the parts of speech?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = comprehend_client.detect_syntax(Text=full_text[:5000], LanguageCode=\"en\")\n", "entities_df = pd.DataFrame(\n", " [\n", " [\n", " entity[\"Text\"],\n", " entity[\"PartOfSpeech\"][\"Tag\"],\n", " entity[\"PartOfSpeech\"][\"Score\"],\n", " entity[\"BeginOffset\"],\n", " entity[\"EndOffset\"],\n", " ]\n", " for entity in response[\"SyntaxTokens\"]\n", " ],\n", " columns=[\"Type\", \"PartOfSpeech\", \"Score\", \"BeginOffset\", \"EndOffset\"],\n", ").sort_values(by=\"Score\", ascending=False)\n", "\n", "display(entities_df)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "---\n", "\n", "# 5. Process medical domain information using Amazon Comprehend Medical\n", "\n", "## About Amazon Comprehend Medical\n", "\n", "Comprehend Medical detects useful information in unstructured clinical text. As much as 75% of all health record data is found in unstructured text such as physician's notes, discharge summaries, test results, and case notes. 
{ "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "---\n", "\n", "# 5. Process medical domain information using Amazon Comprehend Medical\n", "\n", "## About Amazon Comprehend Medical\n", "\n", "Comprehend Medical detects useful information in unstructured clinical text. As much as 75% of all health record data is found in unstructured text such as physicians' notes, discharge summaries, test results, and case notes. Amazon Comprehend Medical uses Natural Language Processing (NLP) models to sort through text for valuable information.\n", "\n", "Using Amazon Comprehend Medical, you can quickly and accurately gather information such as medical condition, medication, dosage, strength, and frequency from a variety of sources, like doctors' notes. Amazon Comprehend Medical uses advanced machine learning models to identify medical information, such as medical conditions and medications, and to determine their relationships to each other, for instance, medicine dosage and strength. Amazon Comprehend Medical can also link the detected information to medical ontologies such as ICD-10-CM or RxNorm.\n", "\n", "Currently, Amazon Comprehend Medical only detects medical entities in English-language texts.\n", "\n", "![Image of Comprehend Medical](https://d1.awsstatic.com/diagrams/product-page-diagram-Ontology-Linking_How-It-Works@2x.f2dde99f71240451d64b24bdd202573ff9a26d35.png)\n", "\n", "In this workshop, we will use the detect entities function ([detect_entities_v2](https://docs.aws.amazon.com/comprehend/latest/dg/extracted-med-info-V2.html)) to extract medical conditions. Then, we'll use the ICD-10-CM linking function ([infer_icd10_cm](https://docs.aws.amazon.com/comprehend-medical/latest/dev/ontology-icd10.html)) to code the conditions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Detect medical entities\n", "*detect_entities_v2* can detect entities in the following categories:\n", "\n", "- `MEDICAL_CONDITION`: The signs, symptoms, and diagnoses of medical conditions.\n", "- `MEDICATION`: Medication and dosage information for the patient.\n", "- `PROTECTED_HEALTH_INFORMATION`: The patient's personal information, e.g., name, age, and gender.\n", "- `TEST_TREATMENT_PROCEDURE`: The procedures that are used to determine a medical condition.\n", "- `TIME_EXPRESSION`: Entities related to time when they are associated with a detected entity.\n", "\n", "Each detected entity also carries:\n", "\n", "- `Score`: The level of confidence that Amazon Comprehend Medical has in the accuracy of the detection.\n", "- `Traits`: Contextual information for the entity.\n", "\n", "The cell after the sketch below calls *detect_entities_v2* and flattens these fields into a DataFrame." ] },
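{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the flattening logic easier to follow, here is the rough shape of a single entity as returned by *detect_entities_v2*. This is an illustrative, hand-written example, not actual service output; the values are made up." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative shape of one detect_entities_v2 entity -- the values are made up.\n", "example_entity = {\n", "    \"Id\": 0,\n", "    \"Text\": \"hypertension\",\n", "    \"Category\": \"MEDICAL_CONDITION\",\n", "    \"Type\": \"DX_NAME\",\n", "    \"Score\": 0.98,\n", "    \"BeginOffset\": 123,\n", "    \"EndOffset\": 135,\n", "    \"Traits\": [{\"Name\": \"DIAGNOSIS\", \"Score\": 0.91}],\n", "    # An \"Attributes\" list (e.g. a dosage linked to a medication) is present\n", "    # only when the entity has related attributes.\n", "}" ] },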
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "maxLength = 20000\n", "pd.options.display.max_rows = 999\n", "comprehendResponse = []\n", "comprehend_medical_client = boto3.client(\n", " service_name=\"comprehendmedical\", region_name=\"us-east-1\"\n", ")\n", "\n", "response = comprehend_medical_client.detect_entities_v2(Text=full_text)\n", "\n", "entities_df = pd.DataFrame(\n", " [\n", " [\n", " entity[\"Id\"],\n", " entity[\"Text\"],\n", " entity[\"Category\"],\n", " entity[\"Type\"],\n", " entity[\"Score\"],\n", " entity[\"BeginOffset\"],\n", " entity[\"EndOffset\"],\n", " entity[\"Attributes\"][0][\"RelationshipType\"]\n", " if \"Attributes\" in entity\n", " else \"\",\n", " entity[\"Attributes\"][0][\"Text\"] if \"Attributes\" in entity else \"\",\n", " entity[\"Attributes\"][0][\"Category\"] if \"Attributes\" in entity else \"\",\n", " ]\n", " for entity in response[\"Entities\"]\n", " ],\n", " columns=[\n", " \"Id\",\n", " \"Text\",\n", " \"Category\",\n", " \"Type\",\n", " \"Score\",\n", " \"BeginOffset\",\n", " \"EndOffset\",\n", " \"RelationshipType\",\n", " \"Text2\",\n", " \"Category2\",\n", " ],\n", ").sort_values(by=\"Score\", ascending=False)\n", "\n", "display(entities_df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Link ICD-10 concepts" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = comprehend_medical_client.infer_icd10_cm(Text=full_text[:5000])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for entity in response[\"Entities\"][10:]:\n", " if entity.get(\"Score\", []) < 0.8:\n", " continue\n", "\n", " print(f\"Text: {entity.get('Text', [])}\")\n", " print(f\"Type: {entity.get('Type', [])}\")\n", " print(f\"Category: {entity.get('Category', [])}\")\n", " print(f\"Score: {entity.get('Score', [])}\")\n", "\n", " icd10_df = pd.DataFrame(\n", " [\n", " [concept[\"Code\"], concept[\"Description\"], concept[\"Score\"]]\n", " for concept in entity[\"ICD10CMConcepts\"]\n", " ],\n", " columns=[\"Code\", \"Description\", \"Score\"],\n", " ).sort_values(by=\"Score\", ascending=False)\n", "\n", " display(icd10_df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 6. Clean up" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "boto3.Session().resource(\"s3\").Bucket(bucket).Object(textractObjectName).delete()" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3.9.8 64-bit ('3.9.8')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.8" }, "vscode": { "interpreter": { "hash": "a8534c14445fc6cdc3039d8140510d6736e5b4960d89f445a45d8db6afd8452b" } } }, "nbformat": 4, "nbformat_minor": 4 }