{ "cells": [ { "cell_type": "markdown", "id": "150861c5", "metadata": {}, "source": [ "# Leveraging pre-trained APIs in Amazon Comprehend" ] }, { "cell_type": "markdown", "id": "383c1537", "metadata": {}, "source": [ "## Table of contents\n", "- [Introduction](#intro)\n", "- [Setup](#setup)\n", "- [Identifying Named Entities](#identifying-named-entities)\n", "- [Detecting Key Phrases](#detecting-key-phrases)\n", "- [Identifying the Dominant Language](#identifying-the-ominant-language)\n", "- [Determining Emotional sentiment](#determining-emotional-sentiment)\n", "- [Determining Syntax](#determiningsyntax)\n", "- [Detecting Personally Identifiable Information (PII)](#detecting-pii)\n", "- [Conclusion](#conclusion)\n", "- [Clean Up](#clean-up)" ] }, { "cell_type": "markdown", "id": "5462b451", "metadata": {}, "source": [ "## Introduction\n", "\n", "This notebook provides step-by-step instructions to use the [Amazon Comprehend](https://aws.amazon.com/comprehend/)'s pre-trained APIs to to uncover information in unstructured data. Amazon Comprehend uses a pre-trained model to examine and analyze a document or set of documents to gather insights about it. This model is continuously trained on a large body of text so that there is no need for you to provide training data.\n", "\n", "We will explore 6 pre-trained APIs: Identifying Named Entities, Extracting Key Phrases, Identifying the Dominant Language, Determining Emotional sentiment, Determining Syntax, Detecting Detect Personally Identifiable Information (PII).\n", "\n", "This Notebook uses AWS resources and you may incur a cost when running these cells." ] }, { "cell_type": "markdown", "id": "0f7dcd17", "metadata": {}, "source": [ "## Tips\n", "\n", "If you are new to Python Notebooks: `SHIFT` + `ENTER` will execute a code cell and go to the next one." ] }, { "cell_type": "markdown", "id": "9257c3a5", "metadata": {}, "source": [ "## Setup\n", "We import the relevant packages to interact with Amazon Comprehend. [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) is the AWS Python SDK." ] }, { "cell_type": "code", "execution_count": null, "id": "c5007215", "metadata": {}, "outputs": [], "source": [ "import boto3" ] }, { "cell_type": "markdown", "id": "e6f3a587", "metadata": {}, "source": [ "We specify the SageMaker execution role, this is the role that is used in this notebook and the region the notebook is in." ] }, { "cell_type": "code", "execution_count": null, "id": "ae66e720", "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "from sagemaker import get_execution_role\n", "role = get_execution_role()\n", "region = boto3.Session().region_name" ] }, { "cell_type": "markdown", "id": "56a416ee", "metadata": {}, "source": [ "We import other packages we will use." ] }, { "cell_type": "code", "execution_count": null, "id": "4ae46688", "metadata": {}, "outputs": [], "source": [ "import json\n", "import pandas as pd\n", "import numpy as np" ] }, { "cell_type": "markdown", "id": "1d1c7cb6", "metadata": {}, "source": [ "## Starting the Amazon Comprehend client" ] }, { "cell_type": "code", "execution_count": null, "id": "735e00ff", "metadata": {}, "outputs": [], "source": [ "comprehend = boto3.client(service_name='comprehend', region_name=region)" ] }, { "cell_type": "markdown", "id": "f70049c0", "metadata": {}, "source": [ "## Data\n", "\n", "In this lab, we will use the same sample input text used in the [Amazon Comprehend console](https://console.aws.amazon.com/comprehend). We have copyed it here for convenience. If you wish to experiment with a different text, simply modify the cell bellow." ] }, { "cell_type": "code", "execution_count": null, "id": "2f7222af", "metadata": {}, "outputs": [], "source": [ "sample_text = '''\n", "Hello Zhang Wei. Your AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0000 has a minimum payment of $24.53 that is due by July 31st. Based on your autopay settings, we will withdraw your payment on the due date from your bank account XXXXXX1111 with the routing number XXXXX0000. \n", "Your latest statement was mailed to 100 Main Street, Anytown, WA 98121. \n", "After your payment is received, you will receive a confirmation text message at 206-555-0100. \n", "If you have questions about your bill, AnyCompany Customer Service is available by phone at 206-555-0199 or email at support@anycompany.com.\n", "'''" ] }, { "cell_type": "markdown", "id": "197b4170", "metadata": {}, "source": [ "## Identifying Named Entities\n", "\n", "A named entity is a real-world object (persons, places, locations, organizations, etc.) that can be denoted with a proper name.\n", "\n", "Amazon Comprehend can extract named entities from a document or text. This can be useful, for example, for indexing, document labeling or search. For more information, see [Detect Entities](https://docs.aws.amazon.com/comprehend/latest/dg/API_DetectEntities.html)).\n", "\n", "The API used to extract these entities is the [DetectEntities API](https://docs.aws.amazon.com/comprehend/latest/dg/API_DetectEntities).\n", "\n", "For each entity detected Amazon Comprehend returns both the type, for instance \"Person\" or \"Date\", as well as a confidence score which indicates how confident the model is in this detection. In your implementation you can use this confidence score to set threshold values." ] }, { "cell_type": "code", "execution_count": null, "id": "89570b99", "metadata": {}, "outputs": [], "source": [ "print('Calling DetectEntities')\n", "detected_entities = comprehend.detect_entities(Text=sample_text, LanguageCode='en')\n", "print(json.dumps(detected_entities, sort_keys=True, indent=4))\n", "print('End of DetectEntities\\n')" ] }, { "cell_type": "markdown", "id": "2fba0175", "metadata": {}, "source": [ "The response includes the full score, type, and offsets.\n", "\n", "Now lets make it a bit more human readable:" ] }, { "cell_type": "code", "execution_count": null, "id": "bacf4771", "metadata": {}, "outputs": [], "source": [ "detectec_entities_df = pd.DataFrame([ [entity['Text'], entity['Type'], entity['Score']] for entity in detected_entities['Entities']],\n", " columns=['Text', 'Type', 'Score'])\n", "\n", "print('This was the text analyzed:')\n", "print(sample_text)\n", "print()\n", "display (detectec_entities_df)" ] }, { "cell_type": "markdown", "id": "96b2ea19", "metadata": {}, "source": [ "## Detecting Key Phrases\n", "\n", "Amazon Comprehend can extract key noun phrases that appear in a document. For example, a document about a basketball game might return the names of the teams, the name of the venue, and the final score. This can be used, for example, for indexing or summarization. For more information, see [Detect Key Phrases](https://docs.aws.amazon.com/comprehend/latest/dg/get-started-api-key-phrases.html).\n", "\n", "The API used to extract these key phrases is the [DetectKeyPhrases API](https://docs.aws.amazon.com/comprehend/latest/dg/API_DetectKeyPhrases).\n", "\n", "Amazon Comprehend returns the key phrases, as well as a confidence score which indicates how confident the model is in this detection. In your implementation you can use this confidence score to set threshold values." ] }, { "cell_type": "code", "execution_count": null, "id": "a2aff7ce", "metadata": {}, "outputs": [], "source": [ "print('Calling DetectKeyPhrases')\n", "detected_key_phrases = comprehend.detect_key_phrases(Text=sample_text, LanguageCode='en')\n", "print(json.dumps(detected_key_phrases, sort_keys=True, indent=4))\n", "print('End of DetectKeyPhrases\\n')" ] }, { "cell_type": "markdown", "id": "7356af3f", "metadata": {}, "source": [ "The response includes the full score, key phrase text, and offsets.\n", "\n", "Now lets make it a bit more human readable:" ] }, { "cell_type": "code", "execution_count": null, "id": "a62a5d31", "metadata": {}, "outputs": [], "source": [ "detected_key_phrases_df = pd.DataFrame([ [entity['Text'], entity['Score']] for entity in detected_key_phrases['KeyPhrases']],\n", " columns=['Text', 'Score'])\n", "\n", "print('This was the text analyzed:')\n", "print(sample_text)\n", "print()\n", "display (detected_key_phrases_df)" ] }, { "cell_type": "markdown", "id": "422290b2", "metadata": {}, "source": [ "## Identifying the Dominant Language\n", "\n", "Amazon Comprehend identifies the dominant language in a document. Amazon Comprehend can currently identify many languages. This can be useful as a first step before further processing, for example when phone call transcripts can be in different languages. For more information, including which languages can be identified, see [Detect the Dominant Language](https://docs.aws.amazon.com/comprehend/latest/dg/how-languages.html).\n", "\n", "The API used to identify the dominant language is the [DetectDominantLanguage API](https://docs.aws.amazon.com/comprehend/latest/dg/API_DetectDominantLanguage).\n", "\n", "Amazon Comprehend returns the dominant language, as well as a confidence score which indicates how confident the model is in this detection. In your implementation you can use this confidence score to set threshold values. If more than one language is detected, it will return each detected language and its corresponding confidence score." ] }, { "cell_type": "code", "execution_count": null, "id": "cdd364e3", "metadata": {}, "outputs": [], "source": [ "print('Calling DetectDominantLanguage')\n", "detected_language = comprehend.detect_dominant_language(Text=sample_text)\n", "print(json.dumps(detected_language, sort_keys=True, indent=4))\n", "print('End of DetectDominantLanguage\\n')" ] }, { "cell_type": "markdown", "id": "9d692c35", "metadata": {}, "source": [ "The response includes the full score, and the detected language codes.\n", "\n", "Now lets make it a bit more human readable:" ] }, { "cell_type": "code", "execution_count": null, "id": "74fc4369", "metadata": {}, "outputs": [], "source": [ "detected_language_df = pd.DataFrame([ [code['LanguageCode'], code['Score']] for code in detected_language['Languages']],\n", " columns=['Language Code', 'Score'])\n", "\n", "print('This was the text analyzed:')\n", "print(sample_text)\n", "print()\n", "display (detected_language_df)" ] }, { "cell_type": "markdown", "id": "b074355d", "metadata": {}, "source": [ "## Determining Emotional Sentiment\n", "\n", "Amazon Comprehend determines the emotional sentiment of a document. Sentiment can be positive, neutral, negative, or mixed. For more information, see Determine Sentiment. This can be useful for example to analyze the content of reviews or transcripts from call centres. For more information, see [Detecting Sentiment](https://docs.aws.amazon.com/comprehend/latest/dg/get-started-api-sentiment.html).\n", "\n", "The API used to extract the emotional sentiment is the [DetectSentiment API](https://docs.aws.amazon.com/comprehend/latest/dg/API_DetectSentiment).\n", "\n", "Amazon Comprehend returns the different sentiments and the related confidence score for each of them, which indicates how confident the model is in this detection. The sentiment with the highest confidence score can be seen as the predominant sentiment in the text." ] }, { "cell_type": "code", "execution_count": null, "id": "b376d5b5", "metadata": {}, "outputs": [], "source": [ "print('Calling DetectSentiment')\n", "detected_sentiment = comprehend.detect_sentiment(Text=sample_text, LanguageCode='en')\n", "print(json.dumps(detected_sentiment, sort_keys=True, indent=4))\n", "print('End of DetectSentiment\\n')" ] }, { "cell_type": "markdown", "id": "ac7181ac", "metadata": {}, "source": [ "The response includes the predominant sentiment and the full scores for each detected sentiment.\n", "\n", "Now lets make it a bit more human readable:" ] }, { "cell_type": "code", "execution_count": null, "id": "e64eb890", "metadata": {}, "outputs": [], "source": [ "predominant_sentiment = detected_sentiment['Sentiment']\n", "detected_sentiments_df = pd.DataFrame([ [sentiment, detected_sentiment['SentimentScore'][sentiment]] for sentiment in detected_sentiment['SentimentScore']],\n", " columns=['Language Code', 'Score'])\n", "\n", "print('This was the text analyzed:')\n", "print(sample_text)\n", "print()\n", "print('The predominant sentiment is {}.'.format(predominant_sentiment))\n", "print()\n", "display (detected_sentiments_df)" ] }, { "cell_type": "markdown", "id": "8e5c6465", "metadata": {}, "source": [ "## Determining Syntax\n", "\n", "Amazon Comprehend parses each word in your document and determines the syntax, the part of speech, for the word. For example, in the sentence \"It is raining today in Seattle,\" \"it\" is identified as a pronoun, \"raining\" is identified as a verb, and \"Seattle\" is identified as a proper noun. For more information, see [Analyze Syntax](https://docs.aws.amazon.com/comprehend/latest/dg/how-syntax.html).\n", "\n", "The API used to extract thesyntax information is the [DetectSyntax API](https://docs.aws.amazon.com/comprehend/latest/dg/API_DetectSyntax).\n", "\n", "Amazon Comprehend returns the different parts of speech and the related confidence score for each of them, which indicates how confident the model is in this detection, token Ids and offsets." ] }, { "cell_type": "code", "execution_count": null, "id": "72d08f82", "metadata": {}, "outputs": [], "source": [ "print('Calling DetectSyntax')\n", "detected_syntax = comprehend.detect_syntax(Text=sample_text, LanguageCode='en')\n", "print(json.dumps(detected_syntax, sort_keys=True, indent=4))\n", "print('End of DetectSyntax\\n')" ] }, { "cell_type": "markdown", "id": "5560a917", "metadata": {}, "source": [ "Amazon Comprehend returns the text, different parts of speech, confidence score for each of them, token Ids and offsets.\n", "\n", "Now lets make it a bit more human readable:" ] }, { "cell_type": "code", "execution_count": null, "id": "274c3884", "metadata": {}, "outputs": [], "source": [ "detected_syntax_df = pd.DataFrame([ [part['Text'], part['PartOfSpeech']['Tag'], part['PartOfSpeech']['Score']] for part in detected_syntax['SyntaxTokens']],\n", " columns=['Text', 'Part Of Speech', 'Score'])\n", "\n", "print('This was the text analyzed:')\n", "print(sample_text)\n", "print()\n", "print('First twenty tokens:')\n", "display (detected_syntax_df.head(20))" ] }, { "cell_type": "markdown", "id": "a5a53002", "metadata": {}, "source": [ "## Detecting Personally Identifiable Information (PII)\n", "\n", "Amazon Comprehend analyzes documents to detect personal data that could be used to identify an individual, such as an address, bank account number, or phone number. This can be usefull, for example, for information extraction and indexing, and to comply with legal requirements around data protection. For more information, see [Detect Personally Identifiable Information (PII)](https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html).\n", "\n", "Amazon Comprehend can help you identify the location of individual PII in your document or help you label documents that contain PII. \n", "\n", "### Identify the location of PII in your text documents\n", "\n", "Amazon Comprehend can help you identify the location of individual PII in your document. Select \"Offsets\" in the Personally identifiable information (PII) analysis mode.\n", "\n", "The API used to identify the location of individual PII is the [DetectPiiEntities API](https://docs.aws.amazon.com/comprehend/latest/dg/API_DetectPiiEntities.html).\n", "\n", "Amazon Comprehend returns the different PII and the related confidence score for each of them, which indicates how confident the model is in this detection. " ] }, { "cell_type": "code", "execution_count": null, "id": "f43eefa7", "metadata": {}, "outputs": [], "source": [ "print('Calling DetectPiiEntities')\n", "detected_pii_entities = comprehend.detect_pii_entities(Text=sample_text, LanguageCode='en')\n", "print(json.dumps(detected_pii_entities, sort_keys=True, indent=4))\n", "print('End of DetectPiiEntities\\n')" ] }, { "cell_type": "markdown", "id": "5bfd1c4f", "metadata": {}, "source": [ "Amazon Comprehend returns the PII entity, a confidence score for each of them, and offsets.\n", "\n", "Now lets make it a bit more human readable:" ] }, { "cell_type": "code", "execution_count": null, "id": "6d58a2f4", "metadata": {}, "outputs": [], "source": [ "detected_pii_entities_df = pd.DataFrame([ [entity['Type'], entity['Score']] for entity in detected_pii_entities['Entities']],\n", " columns=['Type', 'Score'])\n", "\n", "print('This was the text analyzed:')\n", "print(sample_text)\n", "print()\n", "display (detected_pii_entities_df)" ] }, { "cell_type": "markdown", "id": "7b473768", "metadata": {}, "source": [ "### Label text documents with PII\n", "\n", "Amazon Comprehend can help you label documents that contain PII. Select \"Labels\" in the Personally identifiable information (PII) analysis mode.\n", "\n", "The API used to extract the PII enties in the document. We used the [ContainsPiiEntities API](https://docs.aws.amazon.com/comprehend/latest/dg/API_ContainsPiiEntities.html).\n", "\n", "Amazon Comprehend returns the different PII labels and the related confidence score for each of them, which indicates how confident the model is in this detection. These labels indicate the presence of these types of PII in the document. " ] }, { "cell_type": "code", "execution_count": null, "id": "cd41f46e", "metadata": {}, "outputs": [], "source": [ "print('Calling ContainsPiiEntities')\n", "detected_pii_labels = comprehend.contains_pii_entities(Text=sample_text, LanguageCode='en')\n", "print(json.dumps(detected_pii_labels, sort_keys=True, indent=4))\n", "print('End of ContainsPiiEntities\\n')" ] }, { "cell_type": "markdown", "id": "d7a926a9", "metadata": {}, "source": [ "Amazon Comprehend returns the PII entity name and full scores.\n", "\n", "Now lets make it a bit more human readable:" ] }, { "cell_type": "code", "execution_count": null, "id": "a20c9678", "metadata": {}, "outputs": [], "source": [ "detected_pii_labels_df = pd.DataFrame([ [entity['Name'], entity['Score']] for entity in detected_pii_labels['Labels']],\n", " columns=['Name', 'Score'])\n", "\n", "print('This was the text analyzed:')\n", "print(sample_text)\n", "print()\n", "display (detected_pii_labels_df)" ] }, { "cell_type": "markdown", "id": "c6af603f", "metadata": {}, "source": [ "## Conclusion\n", "\n", "You have now learned how to use the pre-trained APIs using the Python SDK.\n", "\n", "For examples of how to use these APIs from the AWS Management Console, follow the steps in \"Using the AWS Management Console\" in the workshop website." ] }, { "cell_type": "markdown", "id": "8ee52975", "metadata": {}, "source": [ "## Clean Up\n", "\n", "Once you have finished using this notebook, make sure to stop and delete this Amazon SageMaker Notebook instance in the [Amazon SageMaker Console](https://console.aws.amazon.com/sagemaker/) to avoid incurring additional costs." ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 5 }