{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Image Classification using Visual Transformers\n", "\n", "## Contents\n", "1. [Overview](#Overview)\n", "2. [Setup](#Setup)\n", "3. [Image pre-processing](#Image-Pre-processing)\n", "4. [Build and Train the Visual Transformer Model](#build-train-model)\n", "5. [Deploy VT-ResNet34 Model to SageMaker Endpoint](#Deploy-VT-ResNet34-Model-to-SageMaker-Endpoint)\n", "6. [References](#References)\n", "\n", "Note: \n", "- Dataset used is **Intel Image Classification** from [Kaggle](https://www.kaggle.com/puneet6060/intel-image-classification).\n", "- The notebook is only an example and not to be used for production deployments.\n", "- Use `Python3 (PyTorch 1.6 Python 3.6 CPU Optimized)` kernel and `ml.m5.large (2 vCPU + 8 GiB)` for the notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview \n", "\n", "**_Important_** : Notebook has ideas and some of the pseudo code from `Visual Transformers: Token-based Image Representation and Processing for Computer Vision` [research paper](https://arxiv.org/pdf/2006.03677.pdf) but does not reproduce all the results mentioned in the paper. \n", "\n", "In standard image classification algorithms like ResNet, InceptionNet etc., images are represented as pixel arrays on which a series of convolution operations are performed. Although, great accuracy has been achieved with these algorithms, the convolution operation is computationally expensive. Therefore, in this notebook we will look at an alternative way to perform `Image Classification` using the ideas mentioned in the research paper. \n", "\n", "\n", "\n", "Diagram of a Visual Transformer (VT). For a given image, we first apply convolutional layers to extract low-level\n", "features. The output feature map is then fed to Visual Transformer (VT): First, apply a tokenizer, grouping pixels into a small number of visual tokens. Second, apply transformers to model relation**s**hips between tokens.\n", "Third, visual tokens are directly used for image classification or projected back to the feature map for semantic segmentation.\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup \n", "To start, let's import some Python libraries initialize a SageMaker session, S3 bucket & prefix, and IAM Role." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install seaborn" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# import python librabries and framework\n", "from pathlib import Path\n", "import numpy as np\n", "import cv2\n", "import PIL.Image as Image\n", "import seaborn as sns\n", "from pylab import rcParams\n", "import matplotlib.pyplot as plt\n", "from matplotlib import rc\n", "from matplotlib.ticker import MaxNLocator\n", "from glob import glob\n", "import shutil\n", "\n", "import torch, torchvision\n", "from torch import nn, optim\n", "import torchvision.transforms as T\n", "from torchvision.datasets import ImageFolder\n", "from torch.utils.data import DataLoader\n", "\n", "import sagemaker\n", "\n", "%matplotlib inline\n", "\n", "sns.set(style='whitegrid', palette='muted', font_scale=1.2)\n", "\n", "COLORS_PALETTE=[\"#01BEFE\",\"#FFDD00\",\"#FF7D00\",\"#FF006D\",\"#ADFF02\",\"#8F00FF\"]\n", "\n", "sns.set_palette(sns.color_palette(COLORS_PALETTE))\n", "\n", "rcParams['figure.figsize'] = 15, 10\n", "\n", "RANDOM_SEED = 42\n", "np.random.seed(RANDOM_SEED)\n", "torch.manual_seed(RANDOM_SEED)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sagemaker_session = sagemaker.Session()\n", "bucket = sagemaker_session.default_bucket()\n", "prefix = \"sagemaker/pytorch-vt-resnet34\"\n", "role = sagemaker.get_execution_role()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Image Pre-processing \n", "The dataset used in the notebook is `Intel Image Classification` downloaded from kaggle.com. \n", "It contains around 25k images of size 150x150 distributed under 6 categories.\n", "```\n", "{'buildings' -> 0,\n", "'forest' -> 1,\n", "'glacier' -> 2,\n", "'mountain' -> 3,\n", "'sea' -> 4,\n", "'street' -> 5 }\n", "```\n", "The `train, test and prediction` data is separated in each zip files. There are around 14k images in Train, 3k in Test and 7k in Prediction.\n", "You can download the dataset from [here](https://www.kaggle.com/puneet6060/intel-image-classification/download), rename the zip file to `data1`, upload it in the Jupyter Lab inside the `image-classification` folder and then follow the steps below. \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# extracting files may take 5-10mins.\n", "from zipfile import ZipFile\n", "with ZipFile('data1.zip', 'r') as zipObj:\n", " # Extract all the contents of data1.zip file in different data1 directory\n", " zipObj.extractall('data1')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Store location for train, test and prediction dataset. 
\n", "train_set = './data1/seg_train/seg_train'\n", "test_set = './data1/seg_test/seg_test'\n", "pred_set = './data1/seg_pred/seg_pred'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**We will get each label folder and we can see that we have six folders.** Each of these folders correspond to the classes in the dataset as shown below:\n", "\n", "* buildings = 0 \n", "* forest = 1\n", "* glacier = 2\n", "* mountain = 3\n", "* sea = 4\n", "* street = 5 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class_names = ['buildings', 'forest', 'glacier', 'mountain', 'sea', 'street']\n", "class_indices = [0,1,2,3,4,5]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# display the count of folders for each class, since we have 6 classes, lets verify it. \n", "train_folders = sorted(glob(train_set + '/*'))\n", "len(train_folders)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Defining the helper functions to load and view images." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def load_image(img_path, resize=True):\n", " img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)\n", " \n", " if resize:\n", " img = cv2.resize(img, (64,64), interpolation = cv2.INTER_AREA)\n", " \n", " return img\n", "\n", "def show_image(img_path):\n", " img = load_image(img_path)\n", " plt.imshow(img)\n", " plt.axis('off')\n", " \n", "def show_sign_grid(image_paths):\n", " images = [load_image(img) for img in image_paths]\n", " images = torch.as_tensor(images)\n", " images = images.permute(0,3,1,2)\n", " grid_img = torchvision.utils.make_grid(images, nrow=11)\n", " plt.figure(figsize=(24,12))\n", " plt.imshow(grid_img.permute(1,2,0))\n", " plt.axis('off')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Display sample images for all 6 classes in the dataset.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sample_images = [np.random.choice(glob(f'{tf}/*jpg')) for tf in train_folders]\n", "show_sign_grid(sample_images)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**We will copy all the images to new directory to re-organize the structure of the folder, the purpose is to make it easier for `torchvision dataset` helpers to utilize the images.**\n", "The new directory structure will look like this: \n", "```\n", "|- data\n", "|----train\n", "|----val\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**We are going to reserve 80% for train and 20% for validation for each class, then copy them to the `data` folder.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# this step may take 2mins to execute.\n", "!rm -rf data\n", "\n", "DATA_DIR = Path('data')\n", "\n", "DATASETS = ['train', 'val']\n", "\n", "for ds in DATASETS:\n", " for cls in class_names:\n", " (DATA_DIR / ds / cls).mkdir(parents=True, exist_ok=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# counting the images in each class. 
This may take 1-2 minutes to execute.\n", "for i, cls_index in enumerate(class_indices):\n", "    image_paths = np.array(glob(f'{train_folders[cls_index]}/*jpg'))\n", "    class_name = class_names[i]\n", "    print(f'{class_name}: {len(image_paths)}')\n", "    np.random.shuffle(image_paths)\n", "\n", "    # Split each class 80/20 into train and validation sets.\n", "    ds_split = np.split(\n", "        image_paths,\n", "        indices_or_sections=[int(.8 * len(image_paths))]\n", "    )\n", "\n", "    dataset_data = zip(DATASETS, ds_split)\n", "    for ds, images in dataset_data:\n", "        for img_path in images:\n", "            shutil.copy(img_path, f'{DATA_DIR}/{ds}/{class_name}/')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**The class distribution is reasonably balanced; no single class dominates the dataset.**\n", "\n", "_We add some transformations to artificially increase the size of the dataset, in particular random resized crops, rotations, and horizontal flips; we then normalize the tensors using preset (ImageNet) mean and standard deviation values for each channel._" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean_nums = [0.485, 0.456, 0.406]\n", "std_nums = [0.229, 0.224, 0.225]\n", "\n", "transforms = {'train': T.Compose([\n", "    T.RandomResizedCrop(size=224),\n", "    T.RandomRotation(degrees=15),\n", "    T.RandomHorizontalFlip(),\n", "    T.ToTensor(),\n", "    T.Normalize(mean_nums, std_nums)\n", "]), 'val': T.Compose([\n", "    T.Resize(size=224),\n", "    T.CenterCrop(size=224),\n", "    T.ToTensor(),\n", "    T.Normalize(mean_nums, std_nums)\n", "]),}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create the PyTorch `DataLoader`s from the `data` directory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "image_datasets = {\n", "    d: ImageFolder(f'{DATA_DIR}/{d}', transforms[d]) for d in DATASETS\n", "}\n", "\n", "data_loaders = {\n", "    d: DataLoader(image_datasets[d], batch_size=16, shuffle=True, num_workers=4) for d in DATASETS\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Count the images in the datasets created above.\n", "dataset_sizes = {d: len(image_datasets[d]) for d in DATASETS}\n", "class_names = image_datasets['train'].classes\n", "dataset_sizes" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def imshow(inp, title=None):\n", "    # Undo the normalization and display a batch of images.\n", "    inp = inp.numpy().transpose((1, 2, 0))\n", "    mean = np.array([mean_nums])\n", "    std = np.array([std_nums])\n", "    inp = std * inp + mean\n", "    inp = np.clip(inp, 0, 1)\n", "    plt.imshow(inp)\n", "    if title is not None:\n", "        plt.title(title)\n", "    plt.axis('off')\n", "\n", "inputs, classes = next(iter(data_loaders['train']))\n", "out = torchvision.utils.make_grid(inputs)\n", "\n", "imshow(out, title=[class_names[x] for x in classes])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**The grid above shows a batch of sample images with the training transformations applied.**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Uploading the data to S3\n", "We use the `sagemaker.Session.upload_data` function to upload our dataset to an S3 location. The return value `input_path` identifies the S3 path, which we will use later when we start the training job. This might take a few minutes."
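, "\n", "\n", "After the upload cell below finishes, you can optionally confirm that the images landed under the expected prefix. The short sketch below is illustrative only; it assumes the `bucket` and `prefix` variables defined in the Setup section and simply lists a few of the uploaded keys.\n", "```python\n", "import boto3\n", "\n", "s3 = boto3.client('s3')\n", "# List the first few objects under the prefix we just uploaded to.\n", "resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=5)\n", "for obj in resp.get('Contents', []):\n", "    print(obj['Key'])\n", "```"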
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# upload data to S3, this might take few minutes.\n", "input_path = sagemaker_session.upload_data(path='data', bucket=bucket, key_prefix=prefix)\n", "print('input specification (in this case, just an S3 path): {}'.format(input_path))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Build and Train the model \n", "We will use pretrained resnet34 model and replace the last layer with the custom visual transformer, to classify the images.\n", "\n", "1. Create a Visual Transformer class (replacing the last layer with a transformer layer).\n", "2. Import ResNet34 pretrained model.\n", "3. Convert it into training mode.\n", "4. Train the model on new data.\n", "5. Evaluate model performance on `validation loss` , `validation accuracy` and `execution time`. \n", "\n", "All the above steps are performed in `Training script`. \n", "\n", "### Training script\n", "The `vt-resnet-34.py` script provides all the code we need for training and hosting a SageMaker model (`model_fn` function to load a model).\n", "The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:\n", "\n", "* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to.\n", " These artifacts are uploaded to S3 for model hosting.\n", "* `SM_NUM_GPUS`: The number of gpus available in the current container.\n", "* `SM_CURRENT_HOST`: The name of the current container on the container network.\n", "* `SM_HOSTS`: JSON encoded list containing all the hosts .\n", "\n", "Supposing one input channel, 'training', was used in the call to the PyTorch estimator's `fit()` method, the following will be set, with the format `SM_CHANNEL_[channel_name]`:\n", "\n", "* `SM_CHANNEL_TRAINING`: A string representing the path to the directory containing data in the 'training' channel.\n", "\n", "For more information about training environment variables, please visit [SageMaker Containers](https://github.com/aws/sagemaker-containers).\n", "\n", "A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to `model_dir` so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance.\n", "\n", "Because the SageMaker imports the training script, you should put your training code in a main guard (``if __name__=='__main__':``) if you are using the same script to host your model as we do in this example, so that SageMaker does not inadvertently run your training code at the wrong point in execution.\n", "\n", "For example, the script run by this notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize code/vt-resnet-34.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train on Amazon SageMaker\n", "\n", "We use Amazon SageMaker to train and deploy a model using our custom PyTorch code. The Amazon SageMaker Python SDK makes it easier to run a PyTorch script in Amazon SageMaker using its PyTorch estimator. After that, we can use the SageMaker Python SDK to deploy the trained model and run predictions. 
For more information on how to use this SDK with PyTorch, see [the SageMaker Python SDK documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html).\n", "\n", "To start, we use the `PyTorch` estimator class to train our model. When creating our estimator, we make sure to specify a few things:\n", "\n", "* `entry_point`: the name of our PyTorch script. It contains our training script, which loads data from the input channels, configures training with hyperparameters, trains a model, and saves the model. It also contains code to load and run the model during inference.\n", "* `source_dir`: the location of our training script and `requirements.txt` file; `requirements.txt` lists additional packages to be installed for the script.\n", "* `framework_version`: the PyTorch version we want to use.\n", "\n", "The PyTorch estimator supports both single-machine and distributed PyTorch training. Our training script only supports distributed data parallel training across the GPUs of a single instance, so we set `instance_count` to one and choose a multi-GPU instance type.\n", "\n", "After creating the estimator, we call `fit()`, which launches a training job, passing the Amazon S3 URI where we uploaded the training data earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from datetime import datetime\n", "from sagemaker.pytorch import PyTorch\n", "\n", "now = datetime.now()\n", "timestr = now.strftime(\"%m-%d-%Y-%H-%M-%S\")\n", "vt_training_job_name = \"vt-training-{}\".format(timestr)\n", "print(vt_training_job_name)\n", "\n", "estimator = PyTorch(\n", "    entry_point=\"vt-resnet-34.py\",\n", "    source_dir=\"code\",\n", "    role=role,\n", "    framework_version=\"1.6.0\",\n", "    py_version=\"py3\",\n", "    instance_count=1,  # This script only supports single-instance, multi-GPU distributed data training.\n", "    instance_type=\"ml.p3.16xlarge\",  # This instance has 8 GPUs; change it if you want to train on a bigger or smaller instance. 
\n", " use_spot_instances=False, # you can set it to True if you want to use Spot instance for training which might take some additional time, but are more cost effective.\n", "# max_run=3600, # uncomment it, if use_spot_instances = True\n", "# max_wait=3600, # uncomment it, if use_spot_instances = True\n", " debugger_hook_config=False,\n", " hyperparameters={\n", " \"epochs\": 5,\n", " \"num_classes\": 6,\n", " \"batch-size\": 256,\n", " },\n", " metric_definitions=[\n", " {'Name': 'validation:loss', 'Regex': 'Valid_loss = ([0-9\\\\.]+);'},\n", " {'Name': 'validation:accuracy', 'Regex': 'Valid_accuracy = ([0-9\\\\.]+);'},\n", " {'Name': 'train:accuracy', 'Regex': 'Train_accuracy = ([0-9\\\\.]+);'},\n", " {'Name': 'train:loss', 'Regex': 'Train_loss = ([0-9\\\\.]+);'},\n", " ]\n", ")\n", "estimator.fit({\"training\": input_path}, wait=True, job_name=vt_training_job_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vt_training_job_name = estimator.latest_training_job.name\n", "print(\"Visual Transformer training job name: \", vt_training_job_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy VT-ResNet34 Model to SageMaker Endpoint " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import get_execution_role\n", "ENDPOINT_NAME='pytorch-inference-{}'.format(timestr)\n", "predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.p3.2xlarge', endpoint_name=ENDPOINT_NAME)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "import requests\n", "from IPython.display import Image \n", "import json\n", "import boto3\n", "import numpy as np\n", "\n", "runtime= boto3.client('runtime.sagemaker')\n", "client = boto3.client('sagemaker')\n", "\n", "endpoint_desc = client.describe_endpoint(EndpointName=ENDPOINT_NAME)\n", "print(endpoint_desc)\n", "print('---'*60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Making Predictions with VT-ResNet34 Model using SageMaker Endpoint\n", "\n", "Inference dataset is taken from `Registry of Open Data from AWS` (https://registry.opendata.aws/multimedia-commons/)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "payload = '[{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/019/1390196df443f2cf614f2255ae75fcf8.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/015/1390157d4caaf290962de5c5fb4c42.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/020/1390207be327f4c4df1259c7266473.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/021/139021f9aed9896831bf88f349fcec.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/028/139028d865bafa3de66568eeb499f4a6.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/030/13903090f3c8c7a708ca69c8d5d68b2.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/002/010/00201099c5bf0d794c9a951b74390.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/136/139136bb43e41df8949f873fb44af.jpg\"},\\\n", "{\"url\":\"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/139/145/1391457e4a2e25557cbf956aaee4345.jpg\"}]'\n", "\n", "payload = json.loads(payload)\n", "for item in payload:\n", " item = json.dumps(item)\n", " response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME, \n", " ContentType='application/json', \n", " Body=item)\n", " result = response['Body'].read()\n", " result = json.loads(result)\n", " print('predicted:', result[0]['prediction'])\n", "\n", " from PIL import Image\n", " import requests\n", "\n", " input_data = json.loads(item)\n", " url = input_data['url']\n", " im = Image.open(requests.get(url, stream=True).raw)\n", " newsize = (250, 250) \n", " im1 = im.resize(newsize) \n", "\n", " from IPython.display import Image\n", " display(im1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Cleanup\n", "\n", "Lastly, please remember to delete the Amazon SageMaker endpoint to avoid charges. Uncomment following statement `predictor.delete_endpoint()` to do so. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References \n", "- [1] Visual Transformers: Token-based Image Representation and Processing for Computer Vision (https://arxiv.org/pdf/2006.03677.pdf)\n", "- [2] Kaggle notebook (https://www.kaggle.com/asollie/intel-image-multiclass-pytorch-94-test-acc)\n", "- [3] Registry of Open Data from AWS (https://registry.opendata.aws/multimedia-commons/)" ] } ], "metadata": { "instance_type": "ml.g4dn.xlarge", "kernelspec": { "display_name": "Python 3 (PyTorch 1.6 Python 3.6 GPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/pytorch-1.6-gpu-py36-cu110-ubuntu18.04-v3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 4 }