{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Post-Processing Amazon Textract with Location-Aware Transformers**\n", "\n", "# Part 2: Data Consolidation and Model Training/Deployment\n", "\n", "> *This notebook works well with the `Data Science 3.0 (Python 3)` kernel on SageMaker Studio - use the same as for NB1*\n", "\n", "In the [first notebook](1.%20Data%20Preparation.ipynb) we worked through preparing a corpus with Amazon Textract and labelling a small sample to highlight entities of interest.\n", "\n", "In this part 2, we'll consolidate the labelling job results together with a pre-prepared augmentation set, and actually train and deploy a SageMaker model for word classification.\n", "\n", "First, as in the previous notebook, we'll start by importing the required libraries and loading configuration:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "# Python Built-Ins:\n", "from datetime import datetime\n", "import json\n", "from logging import getLogger\n", "import os\n", "import random\n", "import time\n", "\n", "# External Dependencies:\n", "import boto3 # AWS SDK for Python\n", "import sagemaker\n", "from sagemaker.huggingface import HuggingFace as HuggingFaceEstimator, TrainingCompilerConfig\n", "from tqdm.notebook import tqdm # Progress bars\n", "\n", "# Local Dependencies:\n", "import util\n", "\n", "logger = getLogger()\n", "\n", "# Manual configuration (check this matches notebook 1):\n", "bucket_name = sagemaker.Session().default_bucket()\n", "bucket_prefix = \"textract-transformers/\"\n", "print(f\"Working in bucket s3://{bucket_name}/{bucket_prefix}\")\n", "config = util.project.init(\"ocr-transformers-demo\")\n", "print(config)\n", "\n", "# Field configuration saved from first notebook:\n", "with open(\"data/field-config.json\", \"r\") as f:\n", " fields = [\n", " util.postproc.config.FieldConfiguration.from_dict(cfg)\n", " for cfg in json.loads(f.read())\n", " ]\n", "entity_classes = [f.name for f in fields]\n", "\n", "# S3 URIs as per first notebook:\n", "raw_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/raw\"\n", "imgs_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/imgs-clean\"\n", "textract_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/textracted\"\n", "thumbs_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/thumbnails\"\n", "annotations_base_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/annotations\"\n", "\n", "# AWS service clients:\n", "s3 = boto3.resource(\"s3\")\n", "smclient = boto3.client(\"sagemaker\")\n", "ssm = boto3.client(\"ssm\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Data Consolidation\n", "\n", "To construct a training set, we'll typically need to consolidate the results of multiple SageMaker Ground Truth labelling jobs: Perhaps because the work was split up into more manageable chunks - or maybe because additional review/adjustment jobs were run to improve label quality.\n", "\n", "First, we'll download the output folders of all our labelling jobs to the local `data/annotations` folder: (The code here assumes you configured the same `annotations_base_s3uri` output folder for each job in SMGT)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 sync --quiet $annotations_base_s3uri ./data/annotations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Inside this folder, you'll find some 
**pre-annotated augmentation data** provided for you already (in the `augmentation-` subfolders). These datasets are not especially large or externally useful, but will help you train an example model without too much (or even any!) manual annotation effort.\n", "\n", "▶️ **Edit** the `include_jobs` line below to control which datasets (pre-provided and your own) will be included:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "include_jobs = [\n", " \"augmentation-1\",\n", " \"augmentation-2\",\n", " # TODO: Adjust the below to include the labelling job(s) you created, if you finished labelling:\n", " # \"cfpb-boxes-1\",\n", "]\n", "\n", "\n", "source_manifests = []\n", "for job_name in sorted(filter(\n", " lambda n: os.path.isdir(f\"data/annotations/{n}\"),\n", " os.listdir(\"data/annotations\")\n", ")):\n", " if job_name not in include_jobs:\n", " logger.warning(f\"Skipping {job_name} (not in include_jobs list)\")\n", " continue\n", " job_manifest_path = f\"data/annotations/{job_name}/manifests/output/output.manifest\"\n", " if not os.path.isfile(job_manifest_path):\n", " raise RuntimeError(f\"Could not find job output manifest {job_manifest_path}\")\n", " source_manifests.append({\"job_name\": job_name, \"manifest_path\": job_manifest_path})\n", "\n", "print(f\"Got {len(source_manifests)} annotated manifests:\")\n", "print(\"\\n\".join(map(lambda o: o[\"manifest_path\"], source_manifests)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that the results are downloaded, we're ready to **consolidate the output manifest files** from each one into a combined manifest file.\n", "\n", "Note that to combine multiple output manifests to a single dataset:\n", "\n", "- The labels must be stored in the same attribute on every record (records use the labeling job name by default, which will be different between jobs).\n", "- If importing data collected from some other account (like the `augmentation-` sets), we'll need to **map the S3 URIs** to equivalent links on your own bucket." 
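, "\n", "\n",
 "As a rough illustration of that second point, each `-ref` URI gets remapped along these lines (a minimal standalone sketch of the same logic as the consolidation cell below - `my-own-bucket` and the example file path are hypothetical placeholders):\n",
 "\n",
 "```python\n",
 "# Illustration only: remap a 'foreign' S3 URI onto this account's bucket and prefix.\n",
 "BUCKET_MAPPINGS = {\"DOC-EXAMPLE-BUCKET\": \"my-own-bucket\"}\n",
 "PREFIX_MAPPINGS = {\"EXAMPLE-PREFIX/\": \"textract-transformers/\"}\n",
 "\n",
 "def map_s3_uri(uri: str) -> str:\n",
 "    bucket, _, key = uri[len(\"s3://\"):].partition(\"/\")\n",
 "    bucket = BUCKET_MAPPINGS.get(bucket, bucket)  # Swap the bucket name if it's in the mapping\n",
 "    for old_prefix, new_prefix in PREFIX_MAPPINGS.items():\n",
 "        if key.startswith(old_prefix):  # Swap the leading key prefix if it matches\n",
 "            key = new_prefix + key[len(old_prefix):]\n",
 "    return f\"s3://{bucket}/{key}\"\n",
 "\n",
 "print(map_s3_uri(\"s3://DOC-EXAMPLE-BUCKET/EXAMPLE-PREFIX/data/imgs-clean/doc-0001-1.png\"))\n",
 "# -> s3://my-own-bucket/textract-transformers/data/imgs-clean/doc-0001-1.png\n",
 "```"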
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Annotations/labels will be standardized to this field on all records:\n", "standard_label_field = \"label\"\n", "\n", "# To import a manifest from somebody else, we of course need to map their bucket names and prefixes\n", "# to ours (and have equivalent files stored in the same locations after the mapping):\n", "BUCKET_MAPPINGS = {\"DOC-EXAMPLE-BUCKET\": bucket_name}\n", "PREFIX_MAPPINGS = {\"EXAMPLE-PREFIX/\": bucket_prefix}\n", "\n", "print(\"Writing data/annotations/annotations-all.manifest.jsonl\")\n", "with open(\"data/annotations/annotations-all.manifest.jsonl\", \"w\") as fout:\n", " for source in tqdm(source_manifests, desc=\"Consolidating manifests...\"):\n", " with open(source[\"manifest_path\"], \"r\") as fin:\n", " for line in filter(lambda l: l, fin):\n", " obj = json.loads(line)\n", "\n", " # Import refs by applying BUCKET_MAPPINGS and PREFIX_MAPPINGS:\n", " for k in filter(lambda k: k.endswith(\"-ref\"), obj.keys()):\n", " if not obj[k].lower().startswith(\"s3://\"):\n", " raise RuntimeError(\n", " \"Attr %s ends with -ref but does not start with 's3://'\\n%s\"\n", " % (k, obj)\n", " )\n", " obj_bucket, _, obj_key = obj[k][len(\"s3://\"):].partition(\"/\")\n", " obj_bucket = BUCKET_MAPPINGS.get(obj_bucket, obj_bucket)\n", " for old_prefix in PREFIX_MAPPINGS:\n", " if obj_key.startswith(old_prefix):\n", " obj_key = (\n", " PREFIX_MAPPINGS[old_prefix]\n", " + obj_key[len(old_prefix):]\n", " )\n", " obj[k] = f\"s3://{obj_bucket}/{obj_key}\"\n", "\n", " # Find the job output field:\n", " if source[\"job_name\"] in obj:\n", " source_label_attr = source[\"job_name\"]\n", " elif standard_label_field in obj:\n", " source_label_attr = standard_label_field\n", " else:\n", " raise RuntimeError(\"Couldn't find label field for entry in {}:\\n{}\".format(\n", " source[\"job_name\"],\n", " obj,\n", " ))\n", " # Rename to standard:\n", " obj[standard_label_field] = obj.pop(source_label_attr)\n", " source_meta_attr = f\"{source_label_attr}-metadata\"\n", " if source_meta_attr in obj:\n", " obj[f\"{standard_label_field}-metadata\"] = obj.pop(source_meta_attr)\n", " # Write to output manifest:\n", " fout.write(json.dumps(obj) + \"\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Split training and test sets\n", "\n", "To get some insight on how well our model is generalizing to real-world data, we'll need to reserve some annotated data as a testing/validation set.\n", "\n", "Below, we randomly partition the data into training and test sets and then upload the two manifests to S3:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "def split_manifest(f_in, f_train, f_test, train_pct=0.9, random_seed=1337):\n", " \"\"\"Split `f_in` manifest file into `f_train`, `f_test`\"\"\"\n", " logger.info(f\"Reading {f_in}\")\n", " with open(f_in, \"r\") as fin:\n", " lines = list(filter(lambda line: line, fin))\n", " logger.info(\"Shuffling records\")\n", " random.Random(random_seed).shuffle(lines)\n", " n_train = round(len(lines) * train_pct)\n", "\n", " with open(f_train, \"w\") as ftrain:\n", " logger.info(f\"Writing {n_train} records to {f_train}\")\n", " for line in lines[:n_train]:\n", " ftrain.write(line)\n", " with open(f_test, \"w\") as ftest:\n", " logger.info(f\"Writing {len(lines) - n_train} records to {f_test}\")\n", " for line in lines[n_train:]:\n", " ftest.write(line)\n", "\n", "\n", 
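"# With the default train_pct=0.9, roughly 90% of the annotated records go to training and the rest to test:\n",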
"split_manifest(\n", " \"data/annotations/annotations-all.manifest.jsonl\",\n", " \"data/annotations/annotations-train.manifest.jsonl\",\n", " \"data/annotations/annotations-test.manifest.jsonl\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "train_manifest_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/annotations/annotations-train.manifest.jsonl\"\n", "!aws s3 cp data/annotations/annotations-train.manifest.jsonl $train_manifest_s3uri\n", "\n", "test_manifest_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/annotations/annotations-test.manifest.jsonl\"\n", "!aws s3 cp data/annotations/annotations-test.manifest.jsonl $test_manifest_s3uri" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualize the data\n", "\n", "Before training the model, we'll sense-check the data by plotting a few examples.\n", "\n", "The utility function below will overlay the page image with the annotated bounding boxes, the locations of `WORD` blocks detected from the Amazon Textract results, and the resulting classification of individual Textract `WORD`s.\n", "\n", "> ⏰ If you Textracted a large number of documents and haven't previously synced them to the notebook, the initial download here may take a few minutes to complete. For our sample set of 120, typically only ~20s is needed." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "!aws s3 sync --quiet $textract_s3uri ./data/textracted" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> ⚠️ **Note:** For the interactive visualization widgets in this notebook to work correctly, you'll need the [IPyWidgets extension for JupyterLab](https://ipywidgets.readthedocs.io/en/latest/user_install.html).\n", ">\n", "> On [SageMaker Studio](https://aws.amazon.com/sagemaker/studio/), this should be installed by default. On the classic [SageMaker Notebook Instances](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html) though, you'll need to install the `@jupyter-widgets/jupyterlab-manager` extension (from `Settings > Extension Manager`, or using a [lifecycle configuration](https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html) similar to [this sample](https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/tree/master/scripts/install-lab-extension)) - or just use plain `Jupyter` instead of `JupyterLab`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "with open(\"data/annotations/annotations-test.manifest.jsonl\", \"r\") as fman:\n", " test_examples = [json.loads(line) for line in filter(lambda l: l, fman)]\n", "\n", "util.viz.draw_from_manifest_items(\n", " test_examples,\n", " standard_label_field,\n", " entity_classes,\n", " imgs_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", " textract_s3key_prefix=textract_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", " imgs_local_prefix=\"data/imgs-clean\",\n", " textract_local_prefix=\"data/textracted\",\n", ")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Prepare custom training and inference containers\n", "\n", "SageMaker framework containers like those [for PyTorch](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html) and [Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) support `pip` runtime dependency injection by specifying a `requirements.txt` file in your source bundle - and a specimen requirements file is included in [src/requirements.txt](src/requirements.txt).\n", "\n", "This can make experimenting with different library versions faster. **However**, running the installs at each training job / endpoint start-up can make experimenting with script code changes slower. \n", "\n", "Some of the extra computer vision dependencies required for this use case can take a while to install, so in this example we'll build customized containers in advance (as shown in notebook 1 for pre-processing) and leave our requirements.txt empty:\n", "\n", "> ℹ️ **Alternatively:** If needed (for example, to experiment with [SageMaker Training Compiler](https://docs.aws.amazon.com/sagemaker/latest/dg/training-compiler.html) which doesn't support customized containers at the time of writing), you could instead:\n", ">\n", "> 1. Uncomment the dependencies listed in [src/requirements.txt](src/requirements.txt)\n", "> 2. Skip the `sm-docker build` steps below, and\n", "> 3. 
Remove the `image_uri=` arguments later in the notebook" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Configurations:\n", "hf_version = \"4.17\"\n", "py_version = \"py38\"\n", "pt_version = \"1.10\"\n", "train_repo_name = \"sm-ocr-training\"\n", "train_repo_tag = \"hf-4.26-pt-gpu\" # (Base HF version is overridden in Dockerfile)\n", "inf_repo_name = \"sm-ocr-inference\"\n", "inf_repo_tag = train_repo_tag\n", "\n", "account_id = sagemaker.Session().account_id()\n", "region = os.environ[\"AWS_REGION\"]\n", "\n", "base_image_params = {\n", " \"framework\": \"huggingface\",\n", " \"region\": region,\n", " \"instance_type\": \"ml.p3.2xlarge\", # (Just used to check whether GPUs/accelerators are used)\n", " \"py_version\": py_version,\n", " \"version\": hf_version,\n", " \"base_framework_version\": f\"pytorch{pt_version}\",\n", "}\n", "\n", "train_base_uri = sagemaker.image_uris.retrieve(**base_image_params, image_scope=\"training\")\n", "inf_base_uri = sagemaker.image_uris.retrieve(**base_image_params, image_scope=\"inference\")\n", "\n", "# Combine together into the final URIs:\n", "train_image_uri = f\"{account_id}.dkr.ecr.{region}.amazonaws.com/{train_repo_name}:{train_repo_tag}\"\n", "print(f\"Target training image: {train_image_uri}\")\n", "inf_image_uri = f\"{account_id}.dkr.ecr.{region}.amazonaws.com/{inf_repo_name}:{inf_repo_tag}\"\n", "print(f\"Target inference image: {inf_image_uri}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `--compute-type` parameter below is optional, but can help to speed up image build versus the [default](https://github.com/aws-samples/sagemaker-studio-image-build-cli/blob/87c25051ab033dc81ae1f388515315a70b701157/sagemaker_studio_image_build/cli.py#L100) `BUILD_GENERAL1_SMALL`.\n", "\n", "> ⏰ These image builds may take ~12 mins each, but once complete the images will be stored in your Amazon ECR registry and ready to re-use." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "%%time\n", "# (No need to re-run this cell if your train image is already in ECR)\n", "\n", "# Build and push the training image:\n", "!cd custom-containers/train-inf && sm-docker build . \\\n", " --compute-type BUILD_GENERAL1_LARGE \\\n", " --repository {train_repo_name}:{train_repo_tag} \\\n", " --role {config.sm_image_build_role} \\\n", " --build-arg BASE_IMAGE={train_base_uri}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that although our training and inference containers use the [same Dockerfile](custom-containers/train-inf/Dockerfile), they're built from different parent images so both are needed in ECR:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "%%time\n", "# (No need to re-run this cell if your inference image is already in ECR)\n", "\n", "# Build and push the inference image:\n", "!cd custom-containers/train-inf && sm-docker build . 
\\\n", " --compute-type BUILD_GENERAL1_LARGE \\\n", " --repository {inf_repo_name}:{inf_repo_tag} \\\n", " --role {config.sm_image_build_role} \\\n", " --build-arg BASE_IMAGE={inf_base_uri}" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Check from notebook whether the images were successfully created:\n", "ecr = boto3.client(\"ecr\")\n", "for repo, tag, uri in (\n", " (train_repo_name, train_repo_tag, train_image_uri),\n", " (inf_repo_name, inf_repo_tag, inf_image_uri)\n", "):\n", " imgs_desc = ecr.describe_images(\n", " registryId=account_id,\n", " repositoryName=repo,\n", " imageIds=[{\"imageTag\": tag}],\n", " )\n", " assert len(imgs_desc[\"imageDetails\"]) > 0, f\"Couldn't find ECR image {uri} after build\"\n", " print(f\"Found {uri}\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## (Optional) Self-supervised pre-training\n", "\n", "You can run the cell below and **skip the rest of this section**, unless you'd like to dive deeper on this topic:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "pretrain = False # Set this True instead to run pre-training (details below).\n", "\n", "pretrained_s3_uri = None # Will be overwritten later if pretrain is enabled" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "In many cases, businesses have a great deal more relevant *unlabelled* data available in addition to the manually labeled dataset. For example, you might have many more historical documents available (with OCR results already, or able to be processed with Amazon Textract) than you're reasonably able to annotate entities on - just as we do in the credit cards example!\n", "\n", "Large-scale language models are typically **pre-trained** to unlabelled data in a **self-supervised** pattern: Teaching the model to predict some implicit task in the data like, for example, masking a few words on the page and predicting what words should go in the gaps.\n", "\n", "This pre-training doesn't directly teach the model to perform the target task (classifying entities), but forces the core of the model to learn intrinsic patterns in the data. When we then replace the output layers and **fine-tune** towards the target task with human-labelled data, the model is able to learn the target task more effectively.\n", "\n", "By default, for speed, the configuration below will use a public pre-trained model from the [Hugging Face Transformers Model Hub](https://huggingface.co/models?search=layoutxlm). This allows us to focus immediately on fine-tuning to our task; but also means accuracy may be degraded if our documents are very different from the original corpus the model was trained on.\n", "\n", "**Alternatively, set `pretrain = True` above** to *further* pre-train this same base public model on your own Textracted (but unlabelled) documents first.\n", "\n", "> ⚠️ Both these options **use a pre-trained model as a base**: Do check out the licensing and other details for your selected pre-trained `model_name_or_path` on the model hub, as some published models are licensed for non-commercial use only. If you're interested in pre-training your own models from scratch rather than continuation, please let us know on the [existing GitHub issue thread](https://github.com/aws-samples/amazon-textract-transformer-pipeline/issues/19).\n", "\n", "Pre-training is most likely to be valuable when:\n", "\n", "1. 
You have a significantly broader range of data available than the core supervised/annotated dataset (e.g. hundreds to thousands of documents or more are available)\n", "2. Your data is usefully *diverse* (millions of nearly-identical proformas may not teach the model very much useful structure, and could pull it away from learning general grammar patterns)\n", "3. ...But within an *unusual or specialized domain* (for example with industry jargon or product names, or a language that's less well-represented in the public model's pre-training - like Indonesian in LayoutXLM).\n", "4. Understanding these language patterns appears to be a limiting factor on model performance (rather than e.g. being just very strongly constrained by lack of annotations or noise in the annotated data).\n", "\n", "> ⚠️ If you followed through [Notebook 1](1.%20Data%20Preparation.ipynb) with the default settings to Amazon Textract only a small sample of the documents, you may like to go back, increase `N_DOCS_KEPT`, and Textract some more of the source documents first before trying pre-training.\n", "\n", "**In our tests with the Credit Card Agreements sample dataset**, LayoutXLM improved from ~68% to ~74% in downstream NER `eval_focus_else_acc_minus_one` by continuation pre-training on the full ~2,541 document corpus, when averaged over different random seed initializations (standard deviations ~3% over random seeds in each configuration). LayoutLMv1 also appeared to consistently benefit from pre-training, but only very slightly at <1% change in focus accuracy.\n", "\n", "> ⚠️ **Note:** Refer to the [Amazon SageMaker Pricing Page](https://aws.amazon.com/sagemaker/pricing/) for up-to-date guidance before running large pre-training jobs.\n", ">\n", "> In our tests at the time of writing:\n", ">\n", "> - Pre-training LayoutXLM on the full ~2,500-document corpus for 25 epochs took around 10 hours on a single-GPU `ml.p3.2xlarge` instance with per-device batch size 2\n", "> - Pre-training LayoutLMv1 on the full ~2,500-document corpus for 25 epochs took around 4 hours on an `ml.p3.8xlarge` instance with per-device batch size 4\n", "\n", "> ℹ️ **Notes on *from-scratch* pre-training:**\n", ">\n", "> When particularly large and diverse corpora are available relative to what public models have been trained on (especially when working with low-resource languages for example), you might be interested to try from-scratch pre-training rather than continuing from a public model checkpoint.\n", ">\n", "> To explore this, be aware that:\n", "> - The MLM task implemented in this example is simpler than, and may be different from, the full pre-training objective used by most of these models. For example the [LayoutXLM paper](https://arxiv.org/abs/2104.08836) discusses 3 parallel objectives: Extra work would be required to implement these.\n", "> - Without a good volume and diversity of documents, your results will likely be poor. Check how your dataset(s) compare to the overall size and diversity of those used for pre-training by your target model's original authors." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For self-supervised pre-training, you can utilize the full available corpus of Textract-processed documents: Not just the subset of documents and pages you have annotations for. 
Reserving some documents for validation is still a good idea though, to understand if and when the model starts to over-fit.\n", "\n", "Arguably, including pages from the entity recognition validation dataset in pre-training constitutes [leakage](https://en.wikipedia.org/wiki/Leakage_(machine_learning)): Because even though we're not including any information about the entity labels the NER model will predict, we're teaching the model information about patterns of content in the hold-out pages.\n", "\n", "Therefore, the below code takes a conservative view to avoid possibly over-estimating the added benefits of pre-training: Constructing manifests to route *any document with pages in the entity recognition validation set* to also be in the validation set for pre-training." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "selfsup_train_manifest_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/docs-train.manifest.jsonl\"\n", "selfsup_val_manifest_s3uri = f\"s3://{bucket_name}/{bucket_prefix}data/docs-val.manifest.jsonl\"\n", "\n", "# To avoid information leakage, take the validation set = the set of all documents with *any* pages\n", "# mentioned in the validation set:\n", "val_textract_s3uris = set()\n", "with open(\"data/annotations/annotations-test.manifest.jsonl\", \"r\") as f:\n", " for line in f:\n", " val_textract_s3uris.add(json.loads(line)[\"textract-ref\"])\n", "with open(\"data/docs-val.manifest.jsonl\", \"w\") as f:\n", " for uri in val_textract_s3uris:\n", " f.write(json.dumps({\"textract-ref\": uri}) + \"\\n\")\n", "print(f\"Added {len(val_textract_s3uris)} docs to pre-training validation set\")\n", "\n", "# Any Textracted docs not mentioned in validation can go to training:\n", "train_textract_s3uris = set()\n", "with open(\"data/textracted-all.manifest.jsonl\", \"r\") as fner:\n", " with open(\"data/docs-train.manifest.jsonl\", \"w\") as f:\n", " for line in fner:\n", " uri = json.loads(line)[\"textract-ref\"]\n", " if (uri in val_textract_s3uris) or (uri in train_textract_s3uris):\n", " continue\n", " else:\n", " train_textract_s3uris.add(uri)\n", " f.write(json.dumps({\"textract-ref\": uri}) + \"\\n\")\n", "print(f\"Added {len(train_textract_s3uris)} docs to pre-training set\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws s3 cp data/docs-train.manifest.jsonl {selfsup_train_manifest_s3uri}\n", "!aws s3 cp data/docs-val.manifest.jsonl {selfsup_val_manifest_s3uri}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the Amazon Textract JSONs prepared on S3 and split between training and validation via manifests, we're ready to run the pre-training.\n", "\n", "> ▶️ See the following *Fine-tuning on annotated data* section for more parameter details and links on how model training works in SageMaker - which are omitted here since this section is optional.\n", "\n", "In general, available hyperparameters are based on the [Hugging Face TrainingArguments parser](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments) with [customizations applied in src/code/config.py](src/code/config.py). See the **\"Scaling and optimizing model training\"** section of the [Customization Guide](../CUSTOMIZATION_GUIDE.md) for more details on adjusting parallelism and instance type/count - which is particularly relevant for self-supervised pre-training on large datasets." 
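, "\n", "\n",
 "One practical note when adjusting instance type/count for pre-training: the effective global batch size scales with the number of GPUs, so it's worth sanity-checking before launching a long job. A rough back-of-envelope sketch (the GPU count shown is illustrative - e.g. `ml.p3.16xlarge` has 8 GPUs):\n",
 "\n",
 "```python\n",
 "# Rough illustration only: effective global batch size when scaling out training.\n",
 "per_device_train_batch_size = 2\n",
 "gpus_per_instance = 8  # e.g. ml.p3.16xlarge\n",
 "instance_count = 1\n",
 "effective_batch = per_device_train_batch_size * gpus_per_instance * instance_count\n",
 "print(\"Effective global batch size:\", effective_batch)  # 16 in this example\n",
 "```"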
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "hyperparameters = {\n", " # (See src/code/config.py for more info on script parameters)\n", " \"task_name\": \"mlm\",\n", " \"images_prefix\": imgs_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", " \"textract_prefix\": textract_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", "\n", " # LayoutXLM multi-lingual model by default. Other tested base models include:\n", " # - LayoutLMv2: \"microsoft/layoutlmv2-base-uncased\"\n", " # - LayoutLMv1: \"microsoft/layoutlm-base-uncased\"\n", " \"model_name_or_path\": \"microsoft/layoutxlm-base\",\n", "\n", " \"learning_rate\": 5e-5,\n", " \"per_device_train_batch_size\": 2,\n", " \"per_device_eval_batch_size\": 4,\n", "\n", " \"num_train_epochs\": 25,\n", " \"early_stopping_patience\": 10,\n", " \"metric_for_best_model\": \"eval_loss\",\n", " \"greater_is_better\": \"false\",\n", "\n", " # Early stopping implies checkpointing every evaluation (epoch), so limit the total checkpoints\n", " # kept to avoid filling up disk:\n", " \"save_total_limit\": 10,\n", " \"seed\": 42,\n", "}\n", "\n", "metric_definitions = [\n", " {\"Name\": \"epoch\", \"Regex\": util.training.get_hf_metric_regex(\"epoch\")},\n", " {\"Name\": \"learning_rate\", \"Regex\": util.training.get_hf_metric_regex(\"learning_rate\")},\n", " {\"Name\": \"train:loss\", \"Regex\": util.training.get_hf_metric_regex(\"loss\")},\n", " {\"Name\": \"validation:loss\", \"Regex\": util.training.get_hf_metric_regex(\"eval_loss\")},\n", " {\n", " \"Name\": \"validation:samples_per_sec\",\n", " \"Regex\": util.training.get_hf_metric_regex(\"eval_samples_per_second\"),\n", " },\n", "]\n", "\n", "pre_estimator = HuggingFaceEstimator(\n", " role=sagemaker.get_execution_role(),\n", " # Use \"ddp_launcher.py\" for native PyTorch DDP, \"train.py\" for single-GPU or SageMaker DDP:\n", " entry_point=\"ddp_launcher.py\",\n", " source_dir=\"src\",\n", " py_version=py_version,\n", " pytorch_version=pt_version,\n", " transformers_version=hf_version,\n", " image_uri=train_image_uri, # Use customized training container image\n", "\n", " base_job_name=\"xlm-cfpb-pretrain\",\n", " output_path=f\"s3://{bucket_name}/{bucket_prefix}trainjobs\",\n", "\n", " instance_type=\"ml.p3.16xlarge\", # Or ml.p3.8xlarge, etc.\n", " instance_count=1,\n", " volume_size=150,\n", "\n", " debugger_hook_config=False, # (Required for LayoutLMv2/XLM, not for v1)\n", " # To enable SageMaker DDP (on supported instance types):\n", " # distribution={\"smdistributed\": {\"dataparallel\": {\"enabled\": True}}},\n", "\n", " hyperparameters=hyperparameters,\n", " metric_definitions=metric_definitions,\n", " environment={\n", " # Required for our custom dataset loading code (which depends on tokenizer):\n", " \"TOKENIZERS_PARALLELISM\": \"false\",\n", " # May be useful for debugging some DDP issues:\n", " # \"TORCH_DISTRIBUTED_DEBUG\": \"INFO\",\n", " },\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "if pretrain:\n", " pre_estimator.fit(\n", " inputs={\n", " \"images\": thumbs_s3uri, # (Can omit this channel with LayoutLMv1 for performance)\n", " \"train\": selfsup_train_manifest_s3uri,\n", " \"textract\": textract_s3uri + \"/\",\n", " \"validation\": selfsup_val_manifest_s3uri,\n", " },\n", " #wait=False,\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once the pre-training is complete, fetch the output model S3 URI to use as input for 
the fine-tuning stage:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "if pretrain:\n", " # Un-comment this first line to load an previous pre-training job instead:\n", " # pre_estimator = HuggingFaceEstimator.attach(\"layoutlm-cfpb-pretrain-2021-11-17-01-53-05-786\")\n", "\n", " pretraining_job_desc = pre_estimator.latest_training_job.describe()\n", " pretrained_s3_uri = pretraining_job_desc[\"ModelArtifacts\"][\"S3ModelArtifacts\"]\n", "\n", "print(f\"Custom pre-trained model: {pretrained_s3_uri}\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Fine-tuning on annotated data\n", "\n", "In this section we'll run a [SageMaker Training Job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html) to fine-tune the model on our annotated dataset.\n", "\n", "In this process:\n", "\n", "- SageMaker will run the job on a dedicated, managed instance of type we choose (we'll use `ml.p*` or `ml.g*` GPU-accelerated types), allowing us to keep this notebook's resources modest and only pay for the seconds of GPU time the training job needs.\n", "- The data as specified in the manifest files will be downloaded from Amazon S3.\n", "- The bundle of scripts we provide (in `src/`) will be transparently uploaded to S3 and then run inside the specified SageMaker-provided [framework container](https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers-prebuilt.html). There's no need for us to build our own container image or implement a serving stack for inference (although fully-custom containers are [also supported](https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers.html)).\n", "- Job hyperparameters will be passed through to our `src/` scripts as CLI arguments.\n", "- SageMaker will analyze the logs from the job (i.e. `print()` or `logger` calls from our script) with the regular expressions specified in `metric_definitions`, to scrape structured timeseries metrics like loss and accuracy.\n", "- When the job finishes, the contents of the `model` folder in the container will be automatically tarballed and uploaded to a `model.tar.gz` in Amazon S3.\n", "\n", "Rather than orchestrating this process through the low-level [SageMaker API](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateTrainingJob.html) (e.g. via [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_training_job)), we'll use the open-source [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/) (`sagemaker`) for convenience. You can also refer to [Hugging Face's own docs for training on SageMaker](https://huggingface.co/transformers/sagemaker.html) for more information and examples.\n", "\n", "First, we'll configure some parameters you may **sometimes wish to re-use across training jobs**. 
Continuation jobs may want to use the same checkpoint location in S3, while from-scratch training should start fresh.\n", "\n", "▶️ You can choose when to re-run this cell between experiments:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "checkpoint_collection_name = \"checkpoints-\" + datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n", "print(f\"Saving checkpoints to collection {checkpoint_collection_name}\")\n", "\n", "checkpoint_s3_uri = f\"s3://{bucket_name}/{bucket_prefix}checkpoints/{checkpoint_collection_name}\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we'll define the core configuration for our training job:\n", "\n", "▶️ This should usually be re-run for every new training job." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "hyperparameters = {\n", " # (See src/code/config.py for more info on script parameters)\n", " \"annotation_attr\": standard_label_field,\n", " \"images_prefix\": imgs_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", " \"textract_prefix\": textract_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", " \"num_labels\": len(fields) + 1, # +1 for \"other\"\n", "\n", " \"per_device_train_batch_size\": 2,\n", " \"per_device_eval_batch_size\": 4,\n", "\n", " \"num_train_epochs\": 150, # Set high for automatic HP tuning later\n", " \"early_stopping_patience\": 15, # Usually stops after <25 epochs on this sample data+config\n", " \"metric_for_best_model\": \"eval_focus_else_acc_minus_one\",\n", " \"greater_is_better\": \"true\",\n", "\n", " # Early stopping implies checkpointing every evaluation (epoch), so limit the total checkpoints\n", " # kept to avoid filling up disk:\n", " \"save_total_limit\": 10,\n", "}\n", "if not pretrained_s3_uri:\n", " # LayoutXLM multi-lingual model by default. 
Other tested base models include:\n", " # - LayoutLMv2: \"microsoft/layoutlmv2-base-uncased\"\n", " # - LayoutLMv1: \"microsoft/layoutlm-base-uncased\"\n", " # See HF model hub for licensing & details of pre-trained models: https://huggingface.co/models\n", " hyperparameters[\"model_name_or_path\"] = \"microsoft/layoutxlm-base\"\n", "\n", "\n", "metric_definitions = [\n", " {\"Name\": \"epoch\", \"Regex\": util.training.get_hf_metric_regex(\"epoch\")},\n", " {\"Name\": \"learning_rate\", \"Regex\": util.training.get_hf_metric_regex(\"learning_rate\")},\n", " {\"Name\": \"train:loss\", \"Regex\": util.training.get_hf_metric_regex(\"loss\")},\n", " {\n", " \"Name\": \"validation:n_examples\",\n", " \"Regex\": util.training.get_hf_metric_regex(\"eval_n_examples\"),\n", " },\n", " {\"Name\": \"validation:loss_avg\", \"Regex\": util.training.get_hf_metric_regex(\"eval_loss\")},\n", " {\"Name\": \"validation:acc\", \"Regex\": util.training.get_hf_metric_regex(\"eval_acc\")},\n", " {\n", " \"Name\": \"validation:n_focus_examples\",\n", " \"Regex\": util.training.get_hf_metric_regex(\"eval_n_focus_examples\"),\n", " },\n", " {\n", " \"Name\": \"validation:focus_acc\",\n", " \"Regex\": util.training.get_hf_metric_regex(\"eval_focus_acc\"),\n", " },\n", " {\n", " \"Name\": \"validation:target\",\n", " \"Regex\": util.training.get_hf_metric_regex(\"eval_focus_else_acc_minus_one\"),\n", " },\n", "]\n", "\n", "estimator = HuggingFaceEstimator(\n", " role=sagemaker.get_execution_role(),\n", " entry_point=\"train.py\",\n", " source_dir=\"src\",\n", " py_version=py_version,\n", " pytorch_version=pt_version,\n", " transformers_version=hf_version,\n", " image_uri=train_image_uri, # Use customized training container image\n", "\n", " base_job_name=\"xlm-cfpb-hf\",\n", " output_path=f\"s3://{bucket_name}/{bucket_prefix}trainjobs\",\n", " # checkpoint_s3_uri=checkpoint_s3_uri, # Un-comment to turn on checkpoint upload to S3\n", "\n", " instance_type=\"ml.g4dn.xlarge\", # Could also consider ml.p3.2xlarge\n", " instance_count=1,\n", " volume_size=80,\n", "\n", " debugger_hook_config=False, # (Required for LayoutLMv2/XLM, not for v1)\n", "\n", " hyperparameters=hyperparameters,\n", " metric_definitions=metric_definitions,\n", " environment={\n", " # Required for our custom dataset loading code (which depends on tokenizer):\n", " \"TOKENIZERS_PARALLELISM\": \"false\",\n", " },\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, the below cell will actually kick off the training job and stream logs from the running container.\n", "\n", "> ℹ️ You'll also be able to check the status of the job in the [Training jobs page of the SageMaker Console](https://console.aws.amazon.com/sagemaker/home?#/jobs)." 
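, "\n", "\n",
 "If you'd rather poll the job from code than the console, something like the snippet below works once `estimator.fit()` has created the job (a small sketch using the `smclient` boto3 client we set up earlier):\n",
 "\n",
 "```python\n",
 "# Optional: check the latest training job's status programmatically.\n",
 "job_name = estimator.latest_training_job.describe()[\"TrainingJobName\"]\n",
 "print(smclient.describe_training_job(TrainingJobName=job_name)[\"TrainingJobStatus\"])\n",
 "```"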
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "inputs = {\n", " \"images\": thumbs_s3uri, # (Can omit this channel with LayoutLMv1 for performance)\n", " \"train\": train_manifest_s3uri,\n", " \"textract\": textract_s3uri + \"/\",\n", " \"validation\": test_manifest_s3uri,\n", "}\n", "if pretrained_s3_uri:\n", " print(f\"Using custom pre-trained model {pretrained_s3_uri}\")\n", " inputs[\"model_name_or_path\"] = pretrained_s3_uri\n", "\n", "estimator.fit(inputs)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## (Optional) Hyperparameter tuning\n", "\n", "Particularly when applying novel techniques or working in new domains, we'll often need to find good values for a range of different *hyperparameters* of our proposed algorithms.\n", "\n", "Rather than spending time manually adjusting these parameters, we can use [SageMaker Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) which uses an intelligent [Bayesian optimization approach](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-how-it-works.html) to efficiently and automatically search for high-performing combinations over several training jobs.\n", "\n", "You can optionally run the cell below to kick off an HPO job for the model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tuner = sagemaker.tuner.HyperparameterTuner(\n", " estimator,\n", " \"validation:target\",\n", " base_tuning_job_name=\"xlm-cfpb-hpo\",\n", " hyperparameter_ranges={\n", " \"learning_rate\": sagemaker.parameter.ContinuousParameter(\n", " 1e-8,\n", " 1e-3,\n", " scaling_type=\"Logarithmic\",\n", " ),\n", " \"per_device_train_batch_size\": sagemaker.parameter.CategoricalParameter([2, 4, 6, 8]),\n", " \"label_smoothing_factor\": sagemaker.parameter.CategoricalParameter([0.0, 1e-12, 1e-9, 1e-6]),\n", " },\n", " metric_definitions=metric_definitions,\n", " strategy=\"Bayesian\",\n", " objective_type=\"Maximize\",\n", " max_jobs=21,\n", " max_parallel_jobs=2,\n", " # early_stopping_type=\"Auto\", # Off by default - could consider turning it on\n", "# warm_start_config=sagemaker.tuner.WarmStartConfig(\n", "# warm_start_type=sagemaker.tuner.WarmStartTypes.IDENTICAL_DATA_AND_ALGORITHM,\n", "# parents={ \"xlm-cfpb-hpo-210723-1625\" },\n", "# ),\n", ")\n", "\n", "tuner.fit(\n", " inputs={\n", " \"images\": thumbs_s3uri, # (Can omit this channel with LayoutLMv1 for performance)\n", " \"train\": train_manifest_s3uri,\n", " \"textract\": textract_s3uri + \"/\",\n", " \"validation\": test_manifest_s3uri,\n", " },\n", " wait=False,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This job will run asynchronously so won't block the notebook, but you can check on the status from the [Hyperparameter tuning jobs list](https://console.aws.amazon.com/sagemaker/home?#/hyper-tuning-jobs) of the SageMaker Console." 
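, "\n", "\n",
 "If you'd like to poll the tuning job from code as well, a minimal sketch (assuming `tuner.fit()` was called in this session, so `latest_tuning_job` is set):\n",
 "\n",
 "```python\n",
 "# Optional: check the tuning job's overall status from code.\n",
 "tuning_job_name = tuner.latest_tuning_job.job_name\n",
 "tuning_desc = smclient.describe_hyper_parameter_tuning_job(\n",
 "    HyperParameterTuningJobName=tuning_job_name,\n",
 ")\n",
 "print(tuning_desc[\"HyperParameterTuningJobStatus\"])\n",
 "print(tuning_desc[\"ObjectiveStatusCounters\"])\n",
 "```"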
] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Deploy the model\n", "\n", "Once our model is trained (or maybe even automatically hyperparameter-tuned over several training jobs), we can prepare to use it for inference.\n", "\n", "Note that if, for some reason, you need to recover the state of a previous training or tuning job after a notebook restart or similar, you can `attach()` to training or tuning jobs by name - as shown below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# If needed, you can attach to a previous training job by name like this:\n", "# estimator = HuggingFaceEstimator.attach(\"xlm-cfpb-hf-2022-05-12-16-40-50-692\")\n", "# tuner = sagemaker.tuner.HyperparameterTuner.attach(\"llmv2-cfpb-hpo-210603-0542\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "SageMaker supports a [range of different deployment types](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html) for inference: You may already be familiar with the [real-time](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) and [batch transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html) options from the [main SageMaker Example Notebook repository](https://github.com/aws/amazon-sagemaker-examples).\n", "\n", "For document processing use-cases like this one though, [SageMaker asynchronous inference](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html) may be a better fit:\n", "\n", "1. Unlike real-time endpoints (at the time of writing), asynchronous endpoints can [auto-scale down to zero instances](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference-autoscale.html). This can offer substantial cost savings if your business process is low-volume and often idle: With the trade-off that overall process latency may increase, especially for cold-start requests.\n", "1. Asynchronous inference can support longer timeouts and larger request/response payload sizes than real-time: Which can be useful in cases where an individual document may be long and take a significant time to process with a model.\n", " - While you can work around the payload size restriction in real-time endpoints by accepting and returning JSON pointers to S3 objects, instead of passing large payloads inline, the inference time-out could still become an issue for particularly heavy requests\n", "\n", "To deploy our model to an asynchronous endpoint ready to integrate to the OCR pipeline stack:\n", "\n", "- As detailed in the [SDK docs](https://sagemaker.readthedocs.io/en/stable/overview.html#sagemaker-asynchronous-inference), the optional `async_inference_config` parameter tells SageMaker that the endpoint will be asynchronous rather than real-time.\n", "- For permissions integration, our async endpoint will need to store its outputs in the proper S3 location the pipeline is expecting (`output_path`). We can look that up from here in the notebook via the same SSM-based `config` we've seen before.\n", "- To resume the pipeline when the model processes a document, our endpoint will need to notify the pipeline's SNS topic. Again, this is given on `config`.\n", "- While the *SageMaker* limits on request/response size and response timeouts are higher for asynchronous endpoints than real-time, we need to also make sure the serving stack *within the container* is configured to support very large payloads. 
Setting the `MMS_*` environment variables below prevents errors related to this. For more information see the [AWSLabs Multi-Model Server configuration doc](https://github.com/awslabs/multi-model-server/blob/master/docs/configuration.md) and corresponding page [for TorchServe](https://github.com/pytorch/serve/blob/master/docs/configuration.md#other-properties)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Configure async endpoint settings for the pipeline stack:\n", "async_inference_config = sagemaker.async_inference.AsyncInferenceConfig(\n", " output_path=f\"s3://{config.model_results_bucket}\",\n", " max_concurrent_invocations_per_instance=2, # (Can tune this for performance)\n", " notification_config={\n", " \"SuccessTopic\": config.model_callback_topic_arn,\n", " \"ErrorTopic\": config.model_callback_topic_arn,\n", " },\n", ")\n", "\n", "# Extra environment variables to enable large payloads in async\n", "async_extra_env_vars = {\n", " \"MMS_DEFAULT_RESPONSE_TIMEOUT\": str(60*3), # 3min instead of default (maybe 60sec?)\n", " \"MMS_MAX_REQUEST_SIZE\": str(100*1024*1024), # 100MiB instead of default ~6.2MiB\n", " \"MMS_MAX_RESPONSE_SIZE\": str(100*1024*1024), # 100MiB instead of default ~6.2MiB\n", "}" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Easy one-click deployment\n", "\n", "For straightforward deployment, you can just call `estimator.deploy()` (or equivalently, `tuner.deploy()`) - specifying the extra `async_inference_config` and environment variables for our target async deployment:\n", "\n", "> ⚠️ **Warning:** If you change inference code (e.g. [src/code/inference.py](src/code/inference.py)) and re-deploy by this one-click method, your change will likely not be picked up. See the deep-dive section below instead, for making updates." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "training_job_name = estimator.latest_training_job.describe()[\"TrainingJobName\"]\n", "# Or:\n", "# training_job_name = tuner.best_training_job()\n", "\n", "predictor = estimator.deploy(\n", " # Avoid us accidentally deploying the same model twice by setting name per training job:\n", " endpoint_name=training_job_name,\n", " initial_instance_count=1,\n", " instance_type=\"ml.g4dn.xlarge\",\n", " image_uri=inf_image_uri,\n", " serializer=sagemaker.serializers.JSONSerializer(),\n", " deserializer=sagemaker.deserializers.JSONDeserializer(),\n", " env={\n", " \"PYTHONUNBUFFERED\": \"1\", # TODO: Disable once debugging is done\n", " **async_extra_env_vars,\n", " },\n", " async_inference_config=async_inference_config,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (Optional) Digging deeper into the model\n", "\n", "Alternatively, you may instead want to explore the artifacts saved by the training job, or edit the `code` script bundle before deploying the endpoint - especially for debugging any problems with inference. 
Let's see how:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Look up job name and artifact location from previous training job as before:\n", "training_job_desc = estimator.latest_training_job.describe()\n", "model_s3uri = training_job_desc[\"ModelArtifacts\"][\"S3ModelArtifacts\"]\n", "model_name = training_job_desc[\"TrainingJobName\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# *Optionally* download and extract the contents of the model.tar.gz locally:\n", "# (Deleting old data/model folder if it exists)\n", "!rm -rf ./data/model\n", "!aws s3 cp $model_s3uri ./data/model/model.tar.gz\n", "!cd data/model && tar -xzvf model.tar.gz" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.huggingface import HuggingFaceModel\n", "\n", "try:\n", " # Make sure we don't accidentally re-use same model:\n", " smclient.delete_model(ModelName=model_name)\n", " print(f\"Deleted existing model {model_name}\")\n", "except smclient.exceptions.ClientError as e:\n", " if not (\n", " e.response[\"Error\"][\"Code\"] in (404, \"404\")\n", " or e.response[\"Error\"].get(\"Message\", \"\").startswith(\"Could not find model\")\n", " ):\n", " raise e\n", "\n", "model = HuggingFaceModel(\n", " name=model_name,\n", " model_data=model_s3uri,\n", " role=sagemaker.get_execution_role(),\n", " source_dir=\"src/\",\n", " entry_point=\"inference.py\",\n", " py_version=py_version,\n", " pytorch_version=pt_version,\n", " transformers_version=hf_version,\n", " image_uri=inf_image_uri,\n", " env={\n", " \"PYTHONUNBUFFERED\": \"1\", # TODO: Disable once debugging is done\n", " **async_extra_env_vars,\n", " },\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "try:\n", " # Delete previous endpoint, if already in use:\n", " predictor.delete_endpoint(delete_endpoint_config=True)\n", " print(\"Deleting previous endpoint...\")\n", " time.sleep(8)\n", "except (NameError, smclient.exceptions.ResourceNotFound):\n", " pass # No existing endpoint to delete\n", "except smclient.exceptions.ClientError as e:\n", " if \"Could not find\" not in e.response[\"Error\"].get(\"Message\", \"\"):\n", " raise e\n", "\n", "print(\"Deploying model...\")\n", "predictor = model.deploy(\n", " endpoint_name=training_job_desc[\"TrainingJobName\"],\n", " initial_instance_count=1,\n", " instance_type=\"ml.g4dn.xlarge\",\n", " serializer=sagemaker.serializers.JSONSerializer(),\n", " deserializer=sagemaker.deserializers.JSONDeserializer(),\n", " async_inference_config=async_inference_config,\n", ")\n", "print(\"\\nDone!\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Extract clean input images on-demand\n", "\n", "> ▶️ If you're using LayoutLMv1, you can skip this section\n", "\n", "Some models (like LayoutLMv2/XLM, but **not** LayoutLMv1) consume **page images** in addition to text and layout data.\n", "\n", "The same code we used in notebook 1 to extract clean page images from raw source documents, can be deployed as an (asynchronous) *inference endpoint* for *on-demand* page thumbnail image generation whenever a new document comes in. 
If you deployed the pipeline CDK stack with the default options, this endpoint should be **already deployed for you** and can be located as shown below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "if config.thumbnails_callback_topic_arn == \"undefined\":\n", " logger.warning(\n", " \"This pipeline CDK stack was deployed with thumbnailing disabled (by setting parameter \"\n", " \"use_thumbnails=False). Even if you manually deploy a thumbnailing endpoint from the \"\n", " \"notebook, it will not be used in online processing.\"\n", " )\n", "\n", "preproc_endpoint_name = ssm.get_parameter(\n", " Name=config.thumbnail_endpoint_name_param,\n", ")[\"Parameter\"][\"Value\"]\n", "print(f\"Pre-created thumbnailer endpoint name:\\n {preproc_endpoint_name}\")\n", "\n", "if preproc_endpoint_name == \"undefined\":\n", " raise ValueError(\n", " \"The thumbnailing endpoint was not automatically created by this pipeline's CDK stack \"\n", " \"deployment. See the 'Optional Extras.ipynb' notebook for instructions to manually deploy \"\n", " \"the thumbnailer before continuing.\"\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> ℹ️ If the thumbnailer endpoint is **missing or mis-configured** in your environment:\n", "> \n", "> - **Check** whether your pipeline is deployed with online thumbnailing support enabled\n", "> - This is not mandatory for experimenting with alternative models in the notebook, but if you later connect a model that consumes thumbnails to a pipeline that doesn't generate them, model accuracy will be degraded.\n", "> - To confirm, find your *pipeline* state machine in [AWS Step Functions](https://console.aws.amazon.com/states/home?#/statemachines) (the one containing NLP and post-processing steps), and check it runs a Thumbnail Generation step in parallel to OCR.\n", "> - To update, configure the `USE_THUMBNAILS` environment variable referenced by [/cdk_app.py](../cdk_app.py) and re-deploy your CDK app. Check whether your version of the CDK code supports auto-deploying the endpoint.\n", "> - **If necessary, manually set up** a thumbnailing endpoint:\n", "> - See the thumbnailing deployment instructions in [Optional Extras.ipynb](Optional%20Extras.ipynb)\n", "> - You can re-run the above cell if you connected the custom thumbnailer to your pipeline, or just set `preproc_endpoint_name` here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "try:\n", " desc = smclient.describe_endpoint(EndpointName=preproc_endpoint_name)\n", "except smclient.exceptions.ClientError as e:\n", " if e.response.get(\"Error\", {}).get(\"Message\", \"\").startswith(\"Could not find\"):\n", " desc = None # Endpoint does not exist\n", " else:\n", " raise e # Some other unknown issue\n", "\n", "if desc is None:\n", " raise ValueError(\n", " \"The configured thumbnailing endpoint does not exist in SageMaker. See the 'Optional \"\n", " \"Extras.ipynb' notebook for instructions to manually deploy the thumbnailer before \"\n", " \"continuing. 
Missing endpoint: %s\" % preproc_endpoint_name\n", " )\n", "\n", "preproc_predictor = sagemaker.predictor_async.AsyncPredictor(\n", " sagemaker.Predictor(\n", " preproc_endpoint_name,\n", " serializer=util.deployment.FileSerializer.from_filename(\"any.pdf\"),\n", " deserializer=util.deployment.CompressedNumpyDeserializer(),\n", " ),\n", " name=preproc_endpoint_name,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This endpoint accepts images or documents and outputs resized page thumbnail images.\n", "\n", "For multi-page documents the main output format is `application/x-npz`, which produces a [compressed numpy archive](https://numpy.org/doc/stable/reference/generated/numpy.savez_compressed.html#numpy.savez_compressed) in which `images` is an **array of images** each represented by **PNG bytes**. These formats require customizing the client (predictor) *serializer* and *deserializer* from the default for PyTorch. Since `Predictor` de/serializers set the `Content-Type` and `Accept` headers, we'll also need to re-configure the serializer whenever switching between input document types (for example PDF vs PNG).\n", "\n", "To support potentially large documents, the preprocessor is deployed to an **asynchronous** endpoint which enables larger request and response payload sizes.\n", "\n", "So how would it look to test the endpoint from Python? Let's see an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "# Choose an input (document or image):\n", "input_file = \"data/raw/121 Financial Credit Union/Visa Credit Card Agreement.pdf\"\n", "#input_file = \"data/imgs-clean/121 Financial Credit Union/Visa Credit Card Agreement-0001-1.png\"\n", "\n", "# Ensure de/serializers are correctly set up:\n", "preproc_predictor.serializer = util.deployment.FileSerializer.from_filename(input_file)\n", "preproc_predictor.deserializer = util.deployment.CompressedNumpyDeserializer()\n", "# Duplication because of https://github.com/aws/sagemaker-python-sdk/issues/3100\n", "preproc_predictor.predictor.serializer = preproc_predictor.serializer\n", "preproc_predictor.predictor.deserializer = preproc_predictor.deserializer\n", "\n", "# Run prediction:\n", "print(\"Calling endpoint...\")\n", "resp = preproc_predictor.predict(input_file)\n", "print(f\"Got response of type {type(resp)}\")\n", "\n", "# Render result:\n", "util.viz.draw_thumbnails_response(resp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using the Model\n", "\n", "Once the deployment is complete (and, if our model takes image inputs, the page thumbnail generator endpoint is ready), we're ready to try it out with some requests!" 
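, "\n", "\n",
 "Before sending requests, it can be worth a quick check that the endpoint has finished deploying (a minimal sketch - it assumes `training_job_name` from the one-click deployment above, which we also used as the endpoint name):\n",
 "\n",
 "```python\n",
 "# Optional: confirm the model endpoint is ready before invoking it.\n",
 "endpoint_desc = smclient.describe_endpoint(EndpointName=training_job_name)\n",
 "print(endpoint_desc[\"EndpointStatus\"])  # Expect \"InService\" once deployment completes\n",
 "```"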
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# As with estimators, you can also attach the notebook to previously deployed endpoints like this:\n", "\n", "# preproc_endpoint_name=\"ocr-thumbnail-1-2022-05-23-15-52-35-703\"\n", "# preproc_predictor = sagemaker.predictor_async.AsyncPredictor(\n", "#     sagemaker.Predictor(\n", "#         preproc_endpoint_name,\n", "#         serializer=util.deployment.FileSerializer.from_filename(\"any.pdf\"),\n", "#         deserializer=util.deployment.CompressedNumpyDeserializer(),\n", "#     ),\n", "#     name=preproc_endpoint_name,\n", "# )\n", "\n", "# endpoint_name=\"xlm-cfpb-hf-2022-05-23-14-10-19-602\"\n", "# predictor = sagemaker.predictor_async.AsyncPredictor(\n", "#     sagemaker.Predictor(\n", "#         endpoint_name,\n", "#         serializer=sagemaker.serializers.JSONSerializer(),\n", "#         deserializer=sagemaker.deserializers.JSONDeserializer(),\n", "#     ),\n", "#     name=endpoint_name,\n", "# )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Making requests and rendering results\n", "\n", "At a high level, the layout+language model accepts Textract-like JSON (e.g. as returned by [AnalyzeDocument](https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeDocument.html#API_AnalyzeDocument_ResponseSyntax) or [DetectDocumentText](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html#API_DetectDocumentText_ResponseSyntax) APIs) and classifies each `WORD` [block](https://docs.aws.amazon.com/textract/latest/dg/API_Block.html) according to the entity classes we defined earlier: Returning the same JSON with additional fields added to indicate the predictions.\n", "\n", "In addition (per the logic in [src/code/inference.py](src/code/inference.py)):\n", "\n", "- To incorporate image features (for models that support them), requests can also include an `S3Thumbnails: { Bucket, Key }` object pointing to a thumbnailer endpoint response on S3.\n", "- Instead of passing the (typically large and already-S3-resident) Amazon Textract JSON inline, an `S3Input: { Bucket, Key }` reference can be passed instead (and this is actually how the standard pipeline integration works).\n", "- Output could also be redirected by passing an `S3Output: { Bucket, Key }` field in the request, but this is ignored and not needed on async endpoint deployments.\n", "- `TargetPageNum` and `TargetPageOnly` fields can be specified to limit processing to a single page of the input document.\n", "\n", "We can use utility functions to render these predictions as we did the manual annotations previously:\n", "\n", "> ⏰ **Inference may take time in some cases:**\n", ">\n", "> - Although enabling thumbnails can increase demo inference time below by several seconds, the end-to-end pipeline generates these images in parallel with running Amazon Textract - so there's usually no significant impact in practice.\n", "> - If you enabled **auto-scale-to-zero** on your thumbnailer and/or model endpoint, you may see a cold-start of several minutes.\n", "\n", "> ⚠️ **Check:** Because of the way the SageMaker Python SDK's [AsyncPredictor](https://sagemaker.readthedocs.io/en/stable/api/inference/predictor_async.html) emulates a synchronous `predict()` interface for async endpoints, you may find the notebook waits indefinitely instead of raising an error when something goes wrong. 
If an inference takes more than ~30s to complete, check the endpoint logs from your [SageMaker Console Endpoints page](https://console.aws.amazon.com/sagemaker/home?#/endpoints) to see if your request resulted in an error." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import ipywidgets as widgets\n", "import trp\n", "\n", "# Enabling thumbnails can significantly increase inference time here, but can improve results for\n", "# models that consume image features (like LayoutLMv2, XLM):\n", "include_thumbnails = False\n", "\n", "def predict_from_manifest_item(\n", " item,\n", " predictor,\n", " imgs_s3key_prefix=imgs_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", " raw_s3uri_prefix=raw_s3uri,\n", " textract_s3key_prefix=textract_s3uri[len(\"s3://\"):].partition(\"/\")[2],\n", " imgs_local_prefix=\"data/imgs-clean\",\n", " textract_local_prefix=\"data/textracted\",\n", " draw=True,\n", "):\n", " paths = util.viz.local_paths_from_manifest_item(\n", " item,\n", " imgs_s3key_prefix,\n", " textract_s3key_prefix=textract_s3key_prefix,\n", " imgs_local_prefix=imgs_local_prefix,\n", " textract_local_prefix=textract_local_prefix,\n", " )\n", "\n", " if include_thumbnails:\n", " doc_textract_s3key = item[\"textract-ref\"][len(\"s3://\"):].partition(\"/\")[2]\n", " doc_raw_s3uri = raw_s3uri_prefix + doc_textract_s3key[len(textract_s3key_prefix):].rpartition(\"/\")[0]\n", " print(f\"Fetching thumbnails for {doc_raw_s3uri}\")\n", " thumbs_async = preproc_predictor.predict_async(input_path=doc_raw_s3uri)\n", " thumbs_bucket, _, thumbs_key = thumbs_async.output_path[len(\"s3://\"):].partition(\"/\")\n", " # Wait for the request to complete:\n", " thumbs_async.get_result(sagemaker.async_inference.WaiterConfig())\n", " req_extras = {\"S3Thumbnails\": {\"Bucket\": thumbs_bucket, \"Key\": thumbs_key}}\n", " print(\"Got thumbnails result\")\n", " else:\n", " req_extras = {}\n", "\n", " result_json = predictor.predict({\n", " \"S3Input\": {\"S3Uri\": item[\"textract-ref\"]},\n", " \"TargetPageNum\": item[\"page-num\"],\n", " \"TargetPageOnly\": True,\n", " **req_extras,\n", " })\n", "\n", " if \"Warnings\" in result_json:\n", " for warning in result_json[\"Warnings\"]:\n", " logger.warning(warning)\n", " result_trp = trp.Document(result_json)\n", "\n", " if draw:\n", " util.viz.draw_smgt_annotated_page(\n", " paths[\"image\"],\n", " entity_classes,\n", " annotations=[],\n", " textract_result=result_trp,\n", " # Note that page_num should be item[\"page-num\"] if we requested the full set of pages\n", " # from the model above:\n", " page_num=1,\n", " )\n", " return result_trp\n", "\n", "\n", "widgets.interact(\n", " lambda ix: predict_from_manifest_item(test_examples[ix], predictor),\n", " ix=widgets.IntSlider(\n", " min=0,\n", " max=len(test_examples) - 1,\n", " step=1,\n", " value=0,\n", " description=\"Example:\",\n", " )\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### From token classification to entity detection\n", "\n", "You may have noticed a slight mismatch: We're talking about extracting 'fields' or 'entities' from the document, but our model just classifies individual words. 
Going from words to entities requires understanding which words go \"together\" and what order they should be read in.\n", "\n", "Fortunately, Textract helps us out with this too, as the word blocks are already collected into `LINE`s.\n", "\n", "For many straightforward applications, we can simply loop through the lines on a page and define an \"entity detection\" as a contiguous group of the same class - as below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "res = predict_from_manifest_item(\n", "    test_examples[6],\n", "    predictor,\n", "    draw=False,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "other_cls = len(entity_classes)\n", "prev_cls = other_cls\n", "current_entity = \"\"\n", "\n", "for page in res.pages:\n", "    for line in page.lines:\n", "        for word in line.words:\n", "            pred_cls = word._block[\"PredictedClass\"]\n", "            if pred_cls != prev_cls:\n", "                if prev_cls != other_cls:\n", "                    print(f\"----------\\n{entity_classes[prev_cls]}:\\n{current_entity}\")\n", "                prev_cls = pred_cls\n", "                if pred_cls != other_cls:\n", "                    current_entity = word.text\n", "                else:\n", "                    current_entity = \"\"\n", "                continue\n", "            current_entity = \" \".join((current_entity, word.text))\n", "\n", "# Flush the final entity, in case the page ended while still inside one:\n", "if prev_cls != other_cls:\n", "    print(f\"----------\\n{entity_classes[prev_cls]}:\\n{current_entity}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course there may be some instances where this heuristic breaks down, but we still have access to all the position (and text) information from each `LINE` and `WORD` to write additional rules for reading order and separation if wanted." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Integrating the model with the OCR Pipeline\n", "\n", "If you've deployed the **OCR pipeline stack** in your AWS Account, you can now configure it to use this endpoint as follows:\n", "\n", "- First, identify the **endpoint name** of your deployed model. Assuming you created the predictor as above, you can simply run the following cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "print(predictor.endpoint_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Next, identify the **AWS Systems Manager Parameter** that configures the SageMaker endpoint for the OCR pipeline stack.\n", "\n", "The below code should look it up for you, but alternatively you can refer to your stack's **Outputs** in the [AWS CloudFormation Console](https://console.aws.amazon.com/cloudformation/home?#/stacks). The Output name should include `SageMakerEndpoint`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "print(config.sagemaker_endpoint_name_param)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Finally, we'll update this SSM parameter to point to the deployed SageMaker endpoint.\n", "\n", "The below code should do this for you automatically:\n", "\n", "> ⚠️ **Note:** The [Lambda function](../pipeline/enrichment/fn-call-sagemaker/main.py) that calls your model from the OCR pipeline caches the endpoint name for a few minutes (`CACHE_TTL_SECONDS`) to reduce unnecessary ssm:GetParameter calls - so it may take a little time for an update here to take effect if you already processed a document recently." 
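] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before making the update, you can optionally peek at the parameter's current value. This is just a convenience check - a minimal sketch using the `ssm` client and `config` object already created in this notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Optional: check the current value of the endpoint name parameter before overwriting it:\n", "current_value = ssm.get_parameter(\n", "    Name=config.sagemaker_endpoint_name_param,\n", ")[\"Parameter\"][\"Value\"]\n", "print(f\"Pipeline model endpoint parameter is currently: {current_value}\")"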
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "pipeline_endpoint_name = predictor.endpoint_name\n", "\n", "print(f\"Configuring pipeline with model: {pipeline_endpoint_name}\")\n", "\n", "ssm.put_parameter(\n", "    Name=config.sagemaker_endpoint_name_param,\n", "    Overwrite=True,\n", "    Value=pipeline_endpoint_name,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, you could open the [AWS Systems Manager Parameter Store console](https://console.aws.amazon.com/systems-manager/parameters/?tab=Table) and click on the *name* of the parameter to open its detail page, then the **Edit** button in the top right corner as shown below:\n", "\n", "![](img/ssm-param-detail-screenshot.png \"Screenshot of SSM parameter detail page showing Edit button\")\n", "\n", "From this screen you can manually set the **Value** of the parameter and save the changes.\n", "\n", "Whether you updated the SSM parameters via code or the console, the pre-processing and enrichment stages of your stack should now be configured to use your endpoints!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Updating the pipeline entity definitions\n", "\n", "As well as configuring the *enrichment* stage of the pipeline to reference the deployed version of the model, we need to configure the *post-processing* stage to match the model's **definition of entity/field types**.\n", "\n", "The entity configuration is the same as we saved in the previous notebook, except that the `annotation_guidance` attributes are not needed:\n", "\n", "> ℹ️ **Note:** As well as the mapping from ID numbers (returned by the model) to human-readable class names, this configuration controls how the pipeline consolidates entity matches into \"fields\" of the document: E.g. choosing the \"most likely\" or \"first\" value between multiple detections, or setting up a multi-value field." 
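] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference on that ID-to-name mapping: the class IDs returned by the model index into the `entity_classes` list loaded at the top of this notebook, with one extra index reserved for \"other\" (non-entity) words - exactly as used in the word-grouping loop earlier. A small illustrative sketch:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Illustrative only: the class ID -> entity name mapping implied by `entity_classes`,\n", "# with the final index used as the \"other\" (non-entity) class per the grouping loop above:\n", "id_to_class = {ix: name for ix, name in enumerate(entity_classes)}\n", "id_to_class[len(entity_classes)] = \"(other / non-entity)\"\n", "print(json.dumps(id_to_class, indent=2))"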
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "pipeline_entity_config = json.dumps([f.to_dict(omit=[\"annotation_guidance\"]) for f in fields], indent=2)\n", "print(pipeline_entity_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As above, you *could* set this value manually in the SSM console for the parameter whose name includes `EntityConfig`.\n", "\n", "...But we can make the same update via code through the APIs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "print(\"Setting pipeline entity configuration\")\n", "ssm.put_parameter(\n", "    Name=config.entity_config_param,\n", "    Overwrite=True,\n", "    Value=pipeline_entity_config,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Trying out the pipeline\n", "\n", "To see the pipeline in action:\n", "\n", "▶️ **Open** the [AWS Step Functions Console](https://console.aws.amazon.com/states/home?#/statemachines) and click on the name of your *State Machine* from the list to see its details.\n", "\n", "(If you can't find it in the list, the code below should look it up for you, or you can check the *Outputs* tab of your pipeline stack in the [AWS CloudFormation Console](https://console.aws.amazon.com/cloudformation/home?#/stacks))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "print(\"Your pipeline state machine is:\")\n", "print(config.pipeline_sfn_arn.rpartition(\":\")[2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "▶️ **Locate** your pipeline's `InputBucket` in [Amazon S3](https://s3.console.aws.amazon.com/s3/home?)\n", "\n", "(Likewise, you can look this up from CloudFormation or by running the cell below)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "print(\"Your pipeline's input S3 bucket:\")\n", "print(config.pipeline_input_bucket_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "▶️ **Upload** a sample document (PDF) from our dataset to the S3 bucket\n", "\n", "You can do this by dragging and dropping the file into the S3 console - or running the cells below to upload a test document through the AWS CLI:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "pdfpaths = []\n", "for currpath, dirs, files in os.walk(\"data/raw\"):\n", "    if \"/.\" in currpath or \"__\" in currpath:\n", "        continue\n", "    pdfpaths += [\n", "        os.path.join(currpath, f) for f in files\n", "        if f.lower().endswith(\".pdf\")\n", "    ]\n", "pdfpaths = sorted(pdfpaths)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "test_filepath = pdfpaths[0]\n", "test_s3uri = f\"s3://{config.pipeline_input_bucket_name}/{test_filepath}\"\n", "\n", "!aws s3 cp '{test_filepath}' '{test_s3uri}'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should see that a new *execution* (run) of the state machine is triggered automatically:\n", "\n", "> ℹ️ This may take a few seconds after the upload is complete. 
If you're not seeing it:\n", ">\n", "> - Check you're in the correct \"pipeline\" state machine, as this solution's stack creates more than one state machine\n", "> - Try refreshing the page or the execution list\n", "\n", "![](img/sfn-statemachine-screenshot.png \"Screenshot of AWS Step Functions state machine detail page showing execution list\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Clicking through to the execution, you'll be able to see the progress through the workflow and output/error information.\n", "\n", "Depending on your configuration, your view may look a little different to the below, and you may have **either a successful execution or a failure at the review step**:\n", "\n", "Don't worry if your human review stage is still failing, as we'll configure that in the next notebook.\n", "\n", "![](img/sfn-execution-status-screenshot.png \"Screenshot of Step Functions execution detail view\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next steps\n", "\n", "You should now have been able to train and deploy the enrichment model, and demonstrate its integration with the pipeline.\n", "\n", "However, the final human review stage is not fully set up yet, so it may have triggered an error.\n", "\n", "In the final notebook, we'll configure the human review functionality to finish up the flow: **Open up notebook [3. Human Review.ipynb](3.%20Human%20Review.ipynb)** to follow along.\n", "\n", "You may also like to check out the **Auto-scaling** section of **[Optional Extras.ipynb](Optional%20Extras.ipynb)**, to optimise your resource consumption and cost by scaling your model endpoint depending on current load.\n", "\n", "\n", "### A note on clean-up\n", "\n", "Note that while training, processing, and transform jobs in SageMaker start and stop compute resources for the specific job being executed, deployed **endpoints** stay active (and therefore keep accumulating charges) until you turn them off.\n", "\n", "When you're finished using an endpoint, you should delete it either through the [Amazon SageMaker Console](https://console.aws.amazon.com/sagemaker/home?#/endpoints) or via commands like the below.\n", "\n", "(Of course, your OCR pipeline stack will throw an error if you try to run it configured with an Endpoint Name that no longer exists)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# predictor.delete_endpoint(delete_endpoint_config=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# preproc_predictor.delete_endpoint(delete_endpoint_config=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 4 }