{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Distributed training of tissue slide images using SageMaker and Horovod" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Background\n", "\n", "Neural networks have proven effective at solving complex computer vision tasks such as object detection, image similarity, and classification. With the evolution of low cost GPUs, the computational cost of building and deploying a neural network has drastically reduced. However, most of the techniques are designed to handle pixel resolutions commonly found in visual media, as an example, typical resolution size are 544 and 416 pixels for YOLOv3, 300 and 512 pixels for SSD, and 224 pixels for VGG. Training a classifier over a dataset consisting of gigapixel images (10^9+ pixels) such as satellite, CT, or pathology images is computationally challenging. These images cannot be directly input to a neural network due to their size, as each GPU is limited by available memory. This requires specific pre-processing techniques such as tiling to be able to process the original images in smaller chunks. Furthermore, due to the large size of these images, the overall training time tends to be high, often requiring from several days to weeks without the use of proper scaling techniques such as distributed training.\n", "\n", "In this notebook, using detection of cancer from tissue slide images as our use-case, we will deploy a highly scalable machine learning pipeline to:\n", "\n", "* Pre-process gigapixel images by tiling, zooming, and sorting them into train and test splits using Amazon SageMaker Processing.\n", "* Train an image classifier on pre-processed tiled images using Amazon SageMaker, Horovod and SageMaker Pipe mode.\n", "* Deploy a pre-trained model as an API using Amazon SageMaker." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Install library for visualizing SVS images\n", "\n", "First, we install the `slideio` package for visualizing our digital pathology images." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install slideio===0.5.225\n", "!mkdir -p images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Imports" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we import the necessary libraries to interact with SageMaker. We define our execution role, region, and the name of the S3 bucket in the account to which the tissue slide images will be downloaded. We also create our SageMaker session." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import sagemaker\n", "from sagemaker.processing import Processor, ProcessingInput, ProcessingOutput\n", "from sagemaker import get_execution_role\n", "from sagemaker.tensorflow import TensorFlow\n", "from sagemaker.tensorflow.model import TensorFlowModel\n", "from sagemaker.session import s3_input\n", "\n", "role = get_execution_role()\n", "region = boto3.Session().region_name\n", "bucket = 'tcga-data' # Please specify the bucket where the SVS images are downloaded\n", "sagemaker_session = sagemaker.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we'll import the Python libraries we'll need for the remainder of the exercise." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import slideio\n", "import numpy as np\n", "import tensorflow as tf\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### TCGA SVS files\n", "\n", "In this blog, we will be using a dataset consisting of whole-slide images obtained from The Cancer Genome Atlas (https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) (TCGA) to accurately and automatically classify them into LUAD (Adenocarcinoma), LUSC (squamous cell carcinoma), or normal lung tissue, where LUAD and LUSC are the two most prevalent subtypes of lung cancer. The dataset is available for public use by NIH and NCI. Instructions for downloading data are provided here (http://www.andrewjanowczyk.com/download-tcga-digital-pathology-images-ffpe/). The raw high resolution images are in SVS (https://openslide.org/formats/aperio/) format. SVS files are used for archiving and analyzing Aperio microscope images.The techniques and tools used in this blog can be applied to any ultra-high resolution image datasets such as MRI, CT scans, and satellite. \n", "\n", "Please refer to README file for instructions on downloading SVS images from TCGA. Before running the next cell, make sure to create a folder called `tcga-svs` within the S3 bucket specified above and download the SVS image data to that location.\n", "\n", "The output of the next cell contains a sample image of a tissue slide. Notice that this single image contains over quarter million pixels, and occupies over 750 MBs of memory. This image cannot be fed into directly to a neural network in its original form, and therefore it is necessary to tile the image into many smaller images." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Download sample svs file from S3\n", "s3 = boto3.resource('s3', region_name=region)\n", "\n", "image_file = 'TCGA-55-8514-01A-01-TS1.0e0f5cf3-96e9-4a35-aaed-4340df78d389.svs'\n", "key = f'tcga-svs/0000b231-7c05-4e2e-8c9e-6d0675bfbb34/{image_file}'\n", "\n", "s3.Bucket(bucket).download_file(key, f'./images/{image_file}')\n", "\n", "# Read svs image\n", "slide = slideio.open_slide(path=f\"./images/{image_file}\", driver=\"SVS\")\n", "scene = slide.get_scene(0)\n", "block = scene.read_block()\n", "\n", "# Display image\n", "plt.imshow(block,aspect=\"auto\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Build Docker container for preprocessing SVS files into TFRecords" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dockerfile" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualize the Docker file that defines the container to be used by SageMaker Processing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize Dockerfile" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Python script for preprocessing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualize the python script that orchestrates the preprocessing of the images within the Docker container." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!pygmentize src/script.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Build container and upload it to ECR" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Build and push the Docker image to Amazon's Elastic Container Registry (ECR) so that it can be used by SageMaker Processing." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from docker_utils import build_and_push_docker_image\n", "\n", "repository_short_name = 'tcga-tissue-slides-preprocess'\n", "image_name = build_and_push_docker_image(repository_short_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Launch SageMaker Processing Job\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are ready to launch the SageMaker Processing job on our images. The SVS slide images are pre-processed in three steps.\n", "\n", "* *Tiling images*: The images are tiled by non-overlapping 512×512-pixel windows, and tiles containing over 50% background are discarded. The tiles are stored as JPEG images.\n", "* *Converting images to TFRecords*: We use SageMaker Pipe Mode to reduce our training time, which requires the data to be available in a proto-buffer format. TFRecord is a popular proto-buffer format used for training models with TensorFlow. SageMaker Pipe Mode and proto-buffer format are explained in detail in the following section\n", "* *Sorting TFRecords :* We sort the dataset into test, train and validation cohorts for a 3-way classifier (LUAD/LUSC/Normal). In the TCGA dataset, there can be multiple slide images corresponding to a single patient. We need to make sure all the tiles generated from slides corresponding to the same patient should occupy the same split to avoid data leakage. For the test set, we create per-slide TFRecords containing all of the tiles from that slide so that we may evaluate the model in the way it will eventually be realistically deployed." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "processor = Processor(image_uri=image_name,\n", " role=get_execution_role(),\n", " instance_count=16, # run the job on 16 instances\n", " base_job_name='processing-base', # should be unique name\n", " instance_type='ml.m5.4xlarge', \n", " volume_size_in_gb=1000)\n", "\n", "processor.run(inputs=[ProcessingInput(\n", " source=f's3://{bucket}/tcga-svs', # s3 input prefix\n", " s3_data_type='S3Prefix',\n", " s3_input_mode='File',\n", " s3_data_distribution_type='ShardedByS3Key', # Split the data across instances\n", " destination='/opt/ml/processing/input')], # local path on the container\n", " outputs=[ProcessingOutput(\n", " source='/opt/ml/processing/output', # local output path on the container\n", " destination=f's3://{bucket}/tcga-svs-tfrecords/' # output s3 location\n", " )],\n", " arguments=['10000'], # number of tiled images per TF record for training dataset\n", " wait=True,\n", " logs=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualize tiled images stored within TFRecords" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are samples of tiled images generated after pre-processing the above tissue slide image. These RGB 3 channels images are of size 512*512 and can be directly used as inputs to a neural network. 
Each tiled image is assigned the same label as its parent slide, and tiles with more than 50% background were discarded during pre-processing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "print(tf.__version__)\n", "print(tf.executing_eagerly())\n", "\n", "HEIGHT = 512\n", "WIDTH = 512\n", "DEPTH = 3\n", "NUM_CLASSES = 3\n", "\n", "def dataset_parser(value):\n", "    image_feature_description = {\n", "        'label': tf.io.FixedLenFeature([], tf.int64),\n", "        'image_raw': tf.io.FixedLenFeature([], tf.string),\n", "        'slide_string': tf.io.FixedLenFeature([], tf.string)\n", "    }\n", "    record = tf.io.parse_single_example(value, image_feature_description)\n", "    image = tf.io.decode_raw(record['image_raw'], tf.float32)\n", "    image.set_shape([DEPTH * HEIGHT * WIDTH])\n", "    image = tf.reshape(image, [HEIGHT, WIDTH, DEPTH])\n", "    label = tf.cast(record['label'], tf.int32)\n", "    slide = record['slide_string']\n", "\n", "    return image, label, slide\n", "\n", "# Display the first 10 tiled images from one test TFRecord\n", "key = 'tcga-svs-tfrecords/test'\n", "\n", "file = [f for f in s3.Bucket(bucket).objects.filter(Prefix=key).limit(1)][0]\n", "local_file = file.key.split('/')[-1]\n", "s3.Bucket(bucket).download_file(file.key, f'./images/{local_file}')\n", "\n", "raw_image_dataset = tf.data.TFRecordDataset(f'./images/{local_file}')\n", "parsed_image_dataset = raw_image_dataset.map(dataset_parser)\n", "\n", "c = 0\n", "for image_features in parsed_image_dataset:\n", "    image_raw = image_features[0].numpy()\n", "    label = image_features[1].numpy()\n", "\n", "    plt.figure()\n", "    plt.imshow(image_raw / 255)\n", "    plt.title(f'Full image: {image_features[2].numpy().decode()}, Label: {label}')\n", "\n", "    c += 1\n", "    if c == 10:\n", "        break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Distributed training with Horovod and SageMaker Pipe mode input\n", "When training a model with a large amount of data, the data needs to be distributed across multiple CPUs/GPUs on either a single instance or multiple instances. Deep learning frameworks provide their own methods to support distributed training. [Horovod](https://eng.uber.com/horovod/) is a popular framework-agnostic toolkit for distributed deep learning. It uses an all-reduce algorithm for fast distributed training (compared with the parameter-server approach) and includes multiple optimization methods to make distributed training faster. Examples of distributed training with Horovod on SageMaker are available in other AWS blogs ([TensorFlow](https://aws.amazon.com/blogs/machine-learning/multi-gpu-and-distributed-training-using-horovod-in-amazon-sagemaker-pipe-mode/), [MXNet](https://aws.amazon.com/blogs/machine-learning/reducing-training-time-with-apache-mxnet-and-horovod-on-amazon-sagemaker/)). A minimal sketch of a typical Horovod setup follows."
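] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cell below is a minimal, hypothetical sketch of the Horovod boilerplate a TensorFlow training script typically contains: initializing Horovod, pinning each process to one GPU, scaling the learning rate by the number of workers, and wrapping the optimizer. It is illustrative only and is not meant to be run in this notebook; the actual setup lives in `src/train.py`, which we visualize later." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hypothetical sketch of typical Horovod setup; see src/train.py for the real script.\n", "import horovod.tensorflow.keras as hvd\n", "\n", "hvd.init()  # one process per GPU, launched via MPI\n", "\n", "# Pin each process to a single GPU\n", "gpus = tf.config.experimental.list_physical_devices('GPU')\n", "if gpus:\n", "    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')\n", "\n", "# Scale the learning rate by the number of workers and wrap the optimizer\n", "opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.001 * hvd.size()))\n", "\n", "# Broadcast initial variables from rank 0 so all workers start in sync\n", "callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cell defines useful variables for the distributed training process, including the computation of the appropriate number of shards given the chosen `train_instance_type` and `train_instance_count`. Note that the value of `gpus_per_host` should reflect the number of GPUs on the `train_instance_type`, which in this case is 4."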
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_instance_type='ml.p3.8xlarge'\n", "train_instance_count = 4\n", "gpus_per_host = 4\n", "num_of_shards = gpus_per_host * train_instance_count\n", "\n", "distributions = {'mpi': {\n", " 'enabled': True,\n", " 'processes_per_host': gpus_per_host\n", " }\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sharding the tiles " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "SageMaker Pipe Mode is a mechanism for providing S3 data to a training job via Linux pipes. Training programs can read from the fifo pipe and get high-throughput data transfer from S3, without managing the S3 access in the program itself. Pipe Mode is covered in more detail in the SageMaker [documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html#training-with-pipe-mode-using-pipemodedataset).\n", "\n", "There are few considerations that we need to keep in mind when working with SageMaker Pipe mode and Horovod:\n", "\n", "* The data that is streamed through each pipe is mutually exclusive of each of the other pipes. The number of pipes dictates the number of data shards that need to be created. \n", "* Horovod wraps the training script for each compute instance. This means that data for each compute instance needs to be allocated to a different shard.\n", "* With the SageMaker Training parameter S3DataDistributionType set to `ShardedByS3Key`, we can share a pipe with more than one instance. The data is streamed in round-robin fashion across instance as shown in the figure below.\n", "\n", "The following cell shards the data within S3 to prepare it as input for distributed training with Pipe mode." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sharding\n", "client = boto3.client('s3')\n", "result = client.list_objects(Bucket=bucket, Prefix='tcga-svs-tfrecords/train/', Delimiter='/')\n", "\n", "j = -1\n", "for i in range(num_of_shards):\n", " copy_source = {\n", " 'Bucket': bucket,\n", " 'Key': result['Contents'][i]['Key']\n", " }\n", " print(result['Contents'][i]['Key'])\n", " if i % gpus_per_host == 0:\n", " j += 1\n", " dest = 'tcga-svs-tfrecords/train_sharded/' + str(j) +'/' + result['Contents'][i]['Key'].split('/')[2]\n", " print(dest)\n", " s3.meta.client.copy(copy_source, bucket, dest)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that the data is sharded, we can assign these shards as `remote_inputs` to our SageMaker training job." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "svs_tf_sharded = f's3://{bucket}/tcga-svs-tfrecords'\n", "shuffle_config = sagemaker.session.ShuffleConfig(234)\n", "train_s3_uri_prefix = svs_tf_sharded\n", "remote_inputs = {}\n", "\n", "for idx in range(gpus_per_host):\n", " train_s3_uri = f'{train_s3_uri_prefix}/train_sharded/{idx}/'\n", " train_s3_input = s3_input(train_s3_uri, distribution ='ShardedByS3Key', shuffle_config=shuffle_config)\n", " remote_inputs[f'train_{idx}'] = train_s3_input\n", " remote_inputs['valid_{}'.format(idx)] = '{}/valid'.format(svs_tf_sharded)\n", "remote_inputs['test'] = '{}/test'.format(svs_tf_sharded)\n", "remote_inputs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training script" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we visualize the training script to be used by SageMaker." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!pygmentize src/train.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are ready to initialize our SageMaker TensorFlow estimator, specifying `input_mode='Pipe'` to engage Pipe mode and providing our `distributions` variable defined above to activate distributed training. Finally, we call the `fit` method with the `remote_inputs` as the first argument." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "local_hyperparameters = {'epochs': 5, 'batch-size' : 16, 'num-train':160000, 'num-val':8192, 'num-test':8192}\n", "\n", "estimator_dist = TensorFlow(base_job_name='svs-horovod-cloud-pipe',\n", " entry_point='src/train.py',\n", " role=role,\n", " framework_version='2.1.0',\n", " py_version='py3',\n", " distribution=distributions,\n", " volume_size=1024,\n", " hyperparameters=local_hyperparameters,\n", " output_path=f's3://{bucket}/output/',\n", " instance_count=4, \n", " instance_type=train_instance_type,\n", " input_mode='Pipe')\n", "\n", "estimator_dist.fit(remote_inputs, wait=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy the trained model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After training the model using Amazon SageMaker, we can now deploy the trained model to peform inference on new images. A model can be deployed using Amazon SageMaker to get predictions in the following ways:\n", "\n", "* To set up a persistent endpoint to get one prediction at a time, use SageMaker hosting services.\n", "* To get predictions for an entire dataset, use SageMaker batch transform.\n", "\n", "In this blog post, we will deploy the trained model as a SageMaker endpoint." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "plt.style.use('bmh')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `deploy()` method creates an endpoint that serves prediction requests in real-time.\n", "The model saves keras artifacts; to use TensorFlow serving for deployment, you'll need to save the artifacts in SavedModel format." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create predictor from S3 instead\n", "\n", "model_data = f's3://{bucket}/output/{estimator_dist.latest_training_job.name}/output/model.tar.gz'\n", "\n", "model = TensorFlowModel(model_data=model_data, \n", " role=role, framework_version='2.1.0')\n", "\n", "predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Make some predictions\n", "\n", "Remember that the model is trained on individual tile images. During inference, the SageMaker endpoint provides classification scores for each tile. These scores are averaged out across all tiles to generate the slide-level score and prediction. A majority-vote scheme would also be appropriate. \n", "\n", "The following cells read preprocessed image data from a TFRecords file and use the SageMaker endpoint to compute predictions for each of the tiles. We first define a helper function to extract the individual tile images." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "HEIGHT=512\n", "WIDTH=512\n", "DEPTH=3\n", "NUM_CLASSES=3\n", "\n", "def _dataset_parser_with_slide(value):\n", " image_feature_description = {\n", " 'label': tf.io.FixedLenFeature([], tf.int64),\n", " 'image_raw': tf.io.FixedLenFeature([], tf.string),\n", " 'slide_string': tf.io.FixedLenFeature([], tf.string)\n", " }\n", " example = tf.io.parse_single_example(value, image_feature_description)\n", " image = tf.io.decode_raw(example['image_raw'], tf.float32)\n", " image = tf.cast(image, tf.float32)\n", " image.set_shape([DEPTH * HEIGHT * WIDTH])\n", " image = tf.cast(tf.reshape(image, [HEIGHT, WIDTH, DEPTH]), tf.float32)\n", " label = tf.cast(example['label'], tf.int32)\n", " slide = example['slide_string']\n", " \n", " return image, label, slide" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tile-level prediction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the following cell, we create and parse a `TFRecordDataset` from a TFRecords file stored locally at `./images` and use the `predict()` method to perform inference on each of the extracted tiles." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "local_file = [each for each in os.listdir('./images') if each.endswith('.tfrecords')][0]\n", "\n", "raw_image_dataset = tf.data.TFRecordDataset(f'./images/{local_file}') ## read a TFrecord\n", "parsed_image_dataset = raw_image_dataset.map(_dataset_parser_with_slide) ## Parse TFrecord to JPEGs\n", "\n", "pred_scores_list = []\n", "for i, element in enumerate(parsed_image_dataset):\n", " if i > 10:\n", " break\n", " image = element[0].numpy()\n", " label = element[1].numpy()\n", " slide = element[2].numpy().decode()\n", " if i == 0:\n", " print(f'Making tile-level predictions for slide: {slide}...')\n", "\n", " print(f'Querying endpoint for a prediction for tile {i+1}...')\n", " pred_scores = predictor.predict(np.expand_dims(image, axis=0))['predictions'][0]\n", " print(pred_scores)\n", " pred_class = np.argmax(pred_scores) \n", " print(pred_class)\n", " \n", " if i > 0 and i % 10 == 0:\n", " plt.figure()\n", " plt.title(f'Tile {i} prediction: {pred_class}') \n", " plt.imshow(image / 255)\n", " \n", " pred_scores_list.append(pred_scores)\n", "print('Done.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Slide-level prediction (average score over all tiles)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once the endpoint has classified each of the tiles, we can average them together for a final classification of the entire slide image." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mean_pred_scores = np.mean(np.vstack(pred_scores_list), axis=0)\n", "mean_pred_class = np.argmax(mean_pred_scores)\n", "\n", "print(f\"Slide-level prediction for {slide}:\", mean_pred_class)" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 4 }