{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Module 2. Training on Local Environment \n", "---\n", "\n", "This hands-on lab fine-tunes a pre-trained Image Classification model stored in model zoo, and train purely without using SageMaker training instance.\n", "\n", "***If you already have experience with Deep Learning training using PyTorch, you can skip this notebook and go straight to SageMaker training notebook. The main purpose of this notebook is to show that SageMaker is a Docker container based and you can easily move your training code to SageMaker with just a few lines of code.***\n", "\n", "This hands-on can be completed in about **20 minutes**. \n", "\n", "

**Note**

It is recommended to use a GPU instance such as g4dn.xlarge or p3.2xlarge for this notebook. It also works on a CPU instance, but a single epoch can take 10-15 minutes to train.
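As a preview of what `3_sm_training.ipynb` covers later, moving the training script from this notebook to a SageMaker training job typically takes only a few lines. The snippet below is a minimal sketch, not the lab's exact launch code; the instance type, hyperparameters, and S3 URIs are illustrative placeholders.

```python
# Hedged preview (details are covered in 3_sm_training.ipynb):
# launching src/train_single_gpu.py as a SageMaker training job.
# Instance type, hyperparameters, and S3 URIs are illustrative placeholders.
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()

estimator = PyTorch(
    entry_point='train_single_gpu.py',
    source_dir='./src',
    role=role,
    framework_version='1.6.0',
    py_version='py3',
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    hyperparameters={'num_epochs': 10, 'batch_size': 128},
)

# estimator.fit({'train': 's3://<bucket>/train', 'valid': 's3://<bucket>/valid'})
```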

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%store -r\n", "%load_ext autoreload\n", "%autoreload 2\n", "%matplotlib inline\n", "\n", "import os\n", "import sys\n", "import logging\n", "import IPython\n", "\n", "try:\n", " bucket \n", " dataset_dir \n", " print(\"[OK] You can proceed.\")\n", "except NameError:\n", " print(\"+\"*60)\n", " print(\"[ERROR] Please run '01_make_augmented_imgs.ipynb' before you continue.\")\n", " print(\"+\"*60)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%store" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "# 1. Preparation\n", "---\n", "\n", "Environment variables that start with `SM_` are SageMaker environment variables, which are automatically set when you create a SageMaker training instance. We recommend using the `sagemaker-training-toolkit` when configuring your own containers (aka. BYOC; Bring Your Own Container).\n", "For reference, the path of the SageMaker training Docker container is as follows.\n", "\n", "```\n", "/opt/ml/\n", "    input/\n", "        config/\n", "        data/\n", "    model/\n", "    output/\n", "        failure/\n", "```\n", "\n", "For example, `SM_MODEL_DIR` corresponds to `/opt/ml/model`. When testing locally, you can designate your own folder instead. When testing Docker containers, it is recommended to map bind mount volumes as in the example below.\n", "\n", "```shell\n", "docker run --mount type=bind,source=./model,target=/opt/ml/model [YOUR IMAGE TAG]\n", "\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sys.path.append('./src/')\n", "\n", "import copy\n", "import time\n", "import numpy as np\n", "import torch, os\n", "import torchvision\n", "import json\n", "import argparse\n", "import matplotlib.pyplot as plt\n", "import src.train_utils as train_utils\n", "import src.train_single_gpu as train\n", "\n", "num_gpus = 1 if torch.cuda.is_available() else 0\n", "device = torch.device(\"cuda:0\" if num_gpus == 1 else \"cpu\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "src_dir = os.getcwd()\n", "os.environ['SM_CURRENT_HOST'] = 'algo-1'\n", "os.environ['SM_HOSTS'] = json.dumps([\"algo-1\"])\n", "os.environ['SM_MODEL_DIR'] = '/opt/ml/model'\n", "os.environ['SM_NUM_GPUS'] = str(1)\n", "os.environ['SM_CHANNEL_TRAIN'] = f'{src_dir}/{dataset_dir}/train'\n", "os.environ['SM_CHANNEL_VALID'] = f'{src_dir}/{dataset_dir}/valid'\n", "\n", "args = train.parser_args(train_notebook=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "args.use_cuda = args.num_gpus > 0\n", "args.device = torch.device(\"cuda\" if args.use_cuda else \"cpu\")\n", "args.rank = 0\n", "args.world_size = 1\n", "\n", "args.classes, args.classes_dict = train_utils.get_classes(args.train_dir) \n", "args.num_classes = len(args.classes)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "args.classes_dict" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create DataLoader" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataloaders, transforms, train_sampler = train_utils.create_dataloaders(\n", " args.train_dir, args.valid_dir, rank=args.rank, \n", " world_size=args.world_size, batch_size=args.batch_size,\n", " num_workers=args.num_workers\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualize mini-batch samples" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_utils.visualize_dataloader_samples(dataloaders['train'], args.classes, nrow=8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We recommend MobileNet-v2 or MnasNet among TorchVision's pre-trained models.\n", "\n", "- MobileNet-V2 is built into TorchVision with an architecture that reduces the amount of computation by utilizing 1x1 convolution and Bottleneck Residual Block. 
Note that the TorchVision release matching the latest PyTorch also includes MobileNet-V3, but the TorchVision release matching PyTorch 1.6.0 does not. (Paper: https://arxiv.org/pdf/1801.04381.pdf)\n", "\n", "- MnasNet was found with a reinforcement learning-based neural architecture search that optimizes for both accuracy and latency on mobile devices, and TorchVision has a built-in MNASNet-B1 optimized for image classification.\n", "(Paper: https://arxiv.org/pdf/1807.11626.pdf)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature_extract = False\n", "model = train_utils.initialize_ft_model(args.model_name, num_classes=args.num_classes, feature_extract=feature_extract)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
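For reference, `train_utils.initialize_ft_model()` above follows the standard TorchVision fine-tuning pattern: load a pre-trained backbone and replace its classification head with a fresh layer sized for the dataset's classes. The sketch below illustrates that pattern under the assumption that the helper works this way; the actual implementation lives in `src/train_utils.py`.

```python
# Minimal sketch of a TorchVision fine-tuning initializer (illustrative only;
# the real helper is train_utils.initialize_ft_model in src/train_utils.py).
import torch.nn as nn
from torchvision import models

def initialize_ft_model_sketch(model_name, num_classes, feature_extract=False):
    if model_name == 'mobilenetv2':
        model = models.mobilenet_v2(pretrained=True)
    elif model_name == 'mnasnet':
        model = models.mnasnet1_0(pretrained=True)
    else:
        raise ValueError(f'Unsupported model: {model_name}')

    if feature_extract:
        # Freeze the backbone so only the new classification head is trained.
        for param in model.parameters():
            param.requires_grad = False

    # Both models end with classifier = Sequential(Dropout, Linear);
    # swap the final Linear layer for one matching our number of classes.
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model
```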
\n", "\n", "# 2. Training Loop\n", "---\n", "\n", "Perform the main training loop. The script code is pre-configured for easy migration to the SageMaker environment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%writefile src/train_single_gpu.py\n", "import os\n", "import json\n", "import random\n", "import warnings\n", "import logging\n", "import sys\n", "import train_utils\n", "import copy\n", "import time\n", "import argparse\n", "from typing import Tuple\n", "from tqdm import tqdm\n", "import numpy as np\n", "import torch\n", "import torch.backends.cudnn as cudnn\n", "from torch import nn, optim\n", "from torch.distributed import Backend\n", "from torch.utils.data import DataLoader, DistributedSampler\n", "from torchvision import datasets, transforms\n", "logger = train_utils.set_logger()\n", "\n", " \n", "def parser_args(train_notebook=False):\n", " parser = argparse.ArgumentParser()\n", "\n", " # Default Setting\n", " parser.add_argument('--log_interval', type=int, default=10, metavar='N',\n", " help='how many batches to wait before logging training status')\n", " parser.add_argument('--seed', type=int, default=1, metavar='S',\n", " help='random seed (default: 1)')\n", "\n", " # Hyperparameter Setting\n", " parser.add_argument('--model_name', type=str, default='mobilenetv2')\n", " parser.add_argument('--lr', type=float, default=0.001)\n", " parser.add_argument('--num_workers', type=int, default=4)\n", " parser.add_argument('--num_epochs', type=int, default=10)\n", " parser.add_argument('--batch_size', type=int, default=128)\n", "\n", " # SageMaker Container environment\n", " parser.add_argument('--hosts', type=list,\n", " default=json.loads(os.environ['SM_HOSTS']))\n", " parser.add_argument('--current_host', type=str,\n", " default=os.environ['SM_CURRENT_HOST'])\n", " parser.add_argument('--model_dir', type=str,\n", " default=os.environ['SM_MODEL_DIR'])\n", " parser.add_argument('--model_chkpt_dir', type=str,\n", " default='/opt/ml/checkpoints') \n", " parser.add_argument('--train_dir', type=str,\n", " default=os.environ['SM_CHANNEL_TRAIN'])\n", " parser.add_argument('--valid_dir', type=str,\n", " default=os.environ['SM_CHANNEL_VALID']) \n", " parser.add_argument('--num_gpus', type=int,\n", " default=os.environ['SM_NUM_GPUS'])\n", " parser.add_argument('--output_data_dir', type=str,\n", " default=os.environ.get('SM_OUTPUT_DATA_DIR'))\n", " \n", " if train_notebook:\n", " args = parser.parse_args([])\n", " else:\n", " args = parser.parse_args()\n", " return args\n", "\n", "\n", "def trainer(current_gpu, model, dataloaders, transforms, args):\n", " \n", " batch_size = args.batch_size\n", " num_epochs = args.num_epochs\n", " feature_extract = False \n", " \n", " optimizer = train_utils.initialize_optimizer(model, feature_extract, lr=1e-3, momentum=0.9) \n", " criterion = nn.CrossEntropyLoss()\n", "\n", " # Send the model to GPU\n", " model = model.to(args.device)\n", " \n", " since = time.time()\n", " best_acc1 = 0.0\n", "\n", " num_samples = {k: len(dataloaders[k].dataset) for k, v in dataloaders.items()}\n", " num_steps = {k: int(np.ceil(len(dataloaders[k].dataset) / (batch_size))) for k, v in dataloaders.items()}\n", "\n", " for epoch in range(1, num_epochs+1):\n", "\n", " batch_time = train_utils.AverageMeter('Time', ':6.3f')\n", " data_time = train_utils.AverageMeter('Data', ':6.3f')\n", " losses = train_utils.AverageMeter('Loss', ':.4e')\n", " top1 = train_utils.AverageMeter('Acc@1', ':6.2f')\n", "\n", " logger.info('-' * 
40)\n", " logger.info('[Epoch {}/{}] Processing...'.format(epoch, num_epochs))\n", " logger.info('-' * 40)\n", "\n", " # Each epoch has a training and validation phase\n", " for phase in ['train', 'valid']:\n", " if phase == 'train':\n", " model.train() # Set model to training mode\n", " else:\n", " model.eval() # Set model to evaluate mode\n", "\n", " running_loss = 0.0\n", " running_corrects = 0\n", " running_num_samples = 0\n", " epoch_tic = time.time() \n", " tic = time.time() \n", "\n", " for i, (inputs, labels) in enumerate(dataloaders[phase]):\n", " # measure data loading time\n", " data_time.update(time.time() - tic)\n", "\n", " inputs = inputs.to(args.device)\n", " labels = labels.to(args.device)\n", "\n", " optimizer.zero_grad()\n", "\n", " with torch.set_grad_enabled(phase=='train'):\n", " outputs = model(inputs)\n", " loss = criterion(outputs, labels)\n", " probs, preds = torch.max(outputs, 1)\n", "\n", " if phase == 'train':\n", " loss.backward()\n", " optimizer.step()\n", "\n", " running_loss += loss.item() * inputs.size(0)\n", " running_corrects += torch.sum(preds == labels.data)\n", " running_num_samples += inputs.size(0)\n", " \n", " acc1 = train_utils.accuracy(outputs, labels, topk=(1,)) \n", "\n", " losses.update(train_utils.to_python_float(loss.data), inputs.size(0))\n", " top1.update(train_utils.to_python_float(acc1[0]), inputs.size(0))\n", " batch_time.update(time.time() - tic)\n", " tic = time.time()\n", "\n", " if phase == 'train' and i % args.log_interval == 1:\n", " step_loss = running_loss / running_num_samples\n", " step_acc = running_corrects.double() / running_num_samples\n", " logger.info(f'[Epoch {epoch}/{num_epochs}, Step {i+1}/{num_steps[phase]}] {phase}-acc: {step_acc:.4f}, '\n", " f'{phase}-loss: {step_loss:.4f}, data-time: {data_time.val:.4f}, batch-time: {batch_time.val:.4f}') \n", " logger.info(f'[Epoch {epoch}/{num_epochs}] {phase}-acc: {top1.avg:.4f}, '\n", " f'{phase}-loss: {losses.val:.4f}, time: {time.time()-epoch_tic:.4f}') \n", "\n", " if phase == 'valid':\n", " is_best = top1.avg > best_acc1\n", " best_acc1 = max(top1.avg, best_acc1)\n", "\n", " train_utils.save_model({\n", " 'epoch': epoch + 1,\n", " 'model_name': args.model_name,\n", " 'state_dict': model.state_dict(),\n", " 'optimizer': optimizer.state_dict(),\n", " 'best_acc1': best_acc1,\n", " 'loss': losses\n", " }, is_best, args.model_chkpt_dir, args.model_dir) \n", "\n", " time_elapsed = time.time() - since\n", " logger.info('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))\n", " logger.info('Best val acc: {:.4f}'.format(best_acc1)) \n", " \n", " \n", "if __name__ == '__main__':\n", " \n", " is_sm_container = True \n", " if os.environ.get('SM_CURRENT_HOST') is None:\n", " is_sm_container = False\n", " \n", " src_dir = '/'.join(os.getcwd().split('/')[:-1])\n", " os.environ['SM_CURRENT_HOST'] = 'algo-1'\n", " os.environ['SM_HOSTS'] = json.dumps([\"algo-1\"])\n", " os.environ['SM_MODEL_DIR'] = f'{src_dir}/model'\n", " os.environ['SM_NUM_GPUS'] = str(1)\n", " dataset_dir = f'{src_dir}/smartfactory'\n", " os.environ['SM_CHANNEL_TRAIN'] = f'{dataset_dir}/train'\n", " os.environ['SM_CHANNEL_VALID'] = f'{dataset_dir}/valid' \n", " \n", " args = parser_args()\n", " args.use_cuda = args.num_gpus > 0\n", " \n", " print(\"args.use_cuda : {} , args.num_gpus : {}\".format(\n", " args.use_cuda, args.num_gpus))\n", " args.kwargs = {'pin_memory': True} if args.use_cuda else {}\n", " args.device = torch.device(\"cuda\" if args.use_cuda else \"cpu\")\n", " args.rank = 
0\n", " args.world_size = 1\n", " \n", " os.makedirs(args.model_chkpt_dir, exist_ok=True)\n", " os.makedirs(args.model_dir, exist_ok=True)\n", "\n", " args.classes, args.classes_dict = train_utils.get_classes(args.train_dir) \n", " args.num_classes = len(args.classes)\n", " \n", " dataloaders, transforms, train_sampler = train_utils.create_dataloaders(\n", " args.train_dir, args.valid_dir, rank=args.rank, \n", " world_size=args.world_size, batch_size=args.batch_size,\n", " num_workers=args.num_workers\n", " )\n", "\n", " feature_extract = False\n", " model = train_utils.initialize_ft_model(args.model_name, num_classes=args.num_classes, feature_extract=feature_extract)\n", "\n", " trainer(0, model, dataloaders, transforms, args)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "sudo rm -rf /opt/ml/model /opt/ml/checkpoints\n", "sudo mkdir -p /opt/ml/model\n", "sudo mkdir -p /opt/ml/checkpoints\n", "sudo chown ec2-user:ec2-user -R /opt/ml/model\n", "sudo chown ec2-user:ec2-user -R /opt/ml/checkpoints" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "train.trainer(0, model, dataloaders, transforms, args)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "## 2. Check for Validation Data\n", "---\n", "\n", "Try performing inference on the validation dataset in mini-batch." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_name = 'model_best.pth'\n", "chkpt = torch.load(os.path.join(args.model_dir, model_name))\n", "model.load_state_dict(chkpt['state_dict'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.eval()\n", "images_so_far = 0\n", "fig = plt.figure()\n", "num_images = 8\n", "with torch.no_grad():\n", " for i, (inputs, labels) in enumerate(dataloaders['valid']):\n", " inputs = inputs.to(device)\n", " labels = labels.to(device)\n", " outputs = model(inputs)\n", " _, preds = torch.max(outputs, 1)\n", " \n", " plt.figure(figsize=(10, 12)) \n", " for j in range(num_images):\n", " images_so_far += 1\n", " ax = plt.subplot(num_images//2, 2, images_so_far)\n", " ax.axis('off')\n", " ax.set_title('predicted: {}'.format(args.classes[preds[j]]))\n", " \n", " m = inputs.cpu().data[j]\n", " inv_normalize = train_utils.create_inv_transform()\n", " m = inv_normalize(m)\n", " \n", " m = np.transpose(m.numpy(), (1,2,0))\n", " m = np.clip(m, 0, 1) \n", " plt.imshow(m)\n", " break" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "local_model_path = args.model_dir\n", "base_model_name = args.model_name\n", "%store base_model_name local_model_path model_name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "# Next Step\n", "\n", "In this session, the model was trained in the local environment without invoking the SageMaker Training job. If you need hands-on practice with SageMaker training, continue with `3_sm_training.ipynb`. If you have a greater need for how to compile and deploy a trained model to a target edge device than SageMaker training, skip `3_sm_training.ipynb` and proceed to `4_neo_compile.ipynb`." ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" } }, "nbformat": 4, "nbformat_minor": 4 }