{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Numerical Captcha Reader using TensorFlow with Amazon SageMaker" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This sample project demonstrates how to use Amazon SageMaker to build, train and deploy Tensorflow model.In this example numerical captcha of 4 digit length is used as input imaged and inference is done to read the input captcha digits. \n", "\n", "[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.You can use Amazon SageMaker to train and deploy a model using custom TensorFlow code. The SageMaker Python SDK TensorFlow estimators and models and the SageMaker open-source TensorFlow containers make writing a TensorFlow script and running it in SageMaker easier.\n", "\n", "In this example, we will show how easily you can train a Machine Learning model in Amazon SageMaker using TensorFlow 2.X scripts with SageMaker Python SDK. In addition, this notebook demonstrates how to perform real time inference with the [SageMaker TensorFlow Serving container](https://github.com/aws/sagemaker-tensorflow-serving-container). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install --upgrade sagemaker\n", "!pip install --upgrade awscli" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install captcha\n", "\n", "# Gerenate training & test data\n", "# Script will generate captcha images using Python's captcha library that generates audio and image CAPTCHAs\n", "# The script has input to configure number of digits in generated captcha, other input is how many permutations are needed\n", "# This step will take several minutes to generate capctha images. This time will increase for more permutations\n", "\n", "# Generate 4 digits captcha with 6 permutations\n", "!python gen_captcha.py -d --npi=4 -n 6\n", "\n", "# Generate 4 digits captcha with 60 permutations ; Use This if want to train on larger dataset\n", "#!python gen_captcha.py -d --npi=4 -n 60" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip show sagemaker" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up the environment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we set up the linkage and authentication to AWS services. In this notebook we only need the roles used to give learning and hosting access to your data. The Sagemaker SDK will use S3 defualt buckets when needed. If the get_execution_role does not return a role with the appropriate permissions, you'll need to specify an IAM role arn that does." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import random\n", "import time\n", "import json\n", "import base64\n", "import numpy as np\n", "import sagemaker\n", "from PIL import Image\n", "from sagemaker import get_execution_role\n", "from sagemaker.tensorflow import TensorFlow\n", "import tensorflow as tf\n", "from tensorflow import keras\n", "\n", "sagemaker_session = sagemaker.Session()\n", "\n", "role = get_execution_role()\n", "region = sagemaker_session.boto_session.region_name\n", "bucket = sagemaker_session.default_bucket()\n", "prefix = 'captcha/char-4-epoch-6'\n", "\n", "print('using bucket %s'%bucket)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# upload data to the default s3 bucket\n", "!aws s3 sync ./images/ s3://$bucket/captcha --quiet" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3train = 's3://{}/{}/train/'.format(bucket, prefix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For TensorFlow versions 1.11 and later, the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/) supports script mode training scripts.Script mode is a training script format for TensorFlow that lets you execute any TensorFlow training script in SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring your script to a SageMaker training instance. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and executes your training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model. The `sagemaker.tensorflow.TensorFlow` estimator handles locating the script mode container, uploading your script to a S3 location and creating a SageMaker training job.\n", "\n", "The following training step is using GPU based instance, i.e. ml.p3.2xlarge. It takes around 10 minutes training time with training data generated in previous steps (6 permutations). For traing dataset generated out of 60 permutations takes around one hour to train the model. Refer available instance types [Available Instance Types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html) to choose different instance type. Refer [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/) to know the pricing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "captcha_estimator = TensorFlow(entry_point='captcha-tf.py',\n", " role=role,\n", " instance_count=1,\n", " instance_type='ml.p3.2xlarge',\n", " framework_version='2.1.0',\n", " py_version='py3')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above step use script to train the model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize captcha-tf.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To start a training job, we call `estimator.fit()`. When training starts, the TensorFlow container executes training script (in this case) captcha-tf.py, passing `hyperparameters` and `model_dir` from the estimator as script arguments. Because we didn't define either in this example, no hyperparameters are passed, and `model_dir` defaults to `s3:////model`, so the script execution is as follows:\n", "\n", "```\n", "/usr/bin/python3 captcha-tf.py --model_dir s3:////model\n", "```\n", " \n", "When training is complete, the training job will upload the saved model for TensorFlow serving." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "captcha_estimator.fit({'train':s3train})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy the trained model to an endpoint" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `deploy()` method creates a SageMaker model, which is then deployed to an endpoint to serve prediction requests in real time. We will use the TensorFlow Serving container for the endpoint. The TensorFlow Serving container is the default inference method for script mode. This serving container runs an implementation of a web server that is compatible with SageMaker hosting protocol." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "endpoint_name = 'captcha-tensorflow'+time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "end_point = captcha_estimator.deploy(initial_instance_count=1,instance_type='ml.m5.4xlarge',endpoint_name=endpoint_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The formats of the input and the output data correspond directly to the request and response formats of the Predict method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker's TensforFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, including the simplified JSON format, line-delimited JSON objects (\"jsons\" or \"jsonlines\"), and CSV data.\n", "\n", "In this example we are using input images converted to numpy array as input, which will be serialized into the simplified JSON format. In addtion, TensorFlow serving can also process multiple items at once as you can see in the following code. You can find the complete documentation on how to make predictions against a TensorFlow serving SageMaker endpoint [here](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html#making-predictions-against-a-sagemaker-endpoint)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def preprocess_input(image_path):\n", " if (os.path.exists(image_path)):\n", " originalImage = Image.open(image_path)\n", " image = np.asarray(originalImage) / 255.\n", " image = tf.expand_dims(image,0)\n", " input_data = {'instances': np.asarray(image).astype(float)}\n", " return input_data\n", " else:\n", " print('input does not exist!\\n')\n", " return None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's download the random images from test data and use that as input for inference." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "filenames = []\n", "s3testprefix = '{}/test'.format(prefix)\n", "\n", "def get_filenames(bucket,prefix):\n", " results = sagemaker_session.list_s3_files(bucket,s3testprefix)\n", " \n", " for item in range(4):\n", " random_key = random.choice(results)\n", " filenames.append(random_key)\n", " return filenames\n", "\n", "get_filenames(bucket,prefix)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in filenames:\n", " file = 's3://{}/{}'.format(bucket,i)\n", " print(file)\n", " print(file.split('/')[-1])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3test = 's3://{}/{}/test/'.format(bucket, prefix)\n", "for i in filenames:\n", " file = 's3://{}/{}'.format(bucket,i)\n", " print(file)\n", " !aws s3 cp $file .\n", " input_data = preprocess_input(file.split('/')[-1])\n", " result = end_point.predict(input_data)\n", " predicted_class_idx = np.argmax(result['predictions'][0], axis=-1)\n", " print(predicted_class_idx)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Delete the endpoint" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's delete the endpoint we just created to prevent incurring any extra costs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "end_point.delete_endpoint()" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (TensorFlow 2.1 Python 3.6 CPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:470317259841:image/tensorflow-2.1-cpu-py36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 4 }