{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "daf4a537", "metadata": { "papermill": { "duration": 0.02532, "end_time": "2022-08-10T21:51:36.976044", "exception": false, "start_time": "2022-08-10T21:51:36.950724", "status": "completed" }, "tags": [] }, "source": [ "# Run a SageMaker Experiment with Pytorch DDP - MNIST Handwritten Digits Classification\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "7936183d", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "---" ] }, { "attachments": {}, "cell_type": "markdown", "id": "33d758a1", "metadata": { "papermill": { "duration": 0.02532, "end_time": "2022-08-10T21:51:36.976044", "exception": false, "start_time": "2022-08-10T21:51:36.950724", "status": "completed" }, "tags": [] }, "source": [ "\n", "This notebook shows how you can use the SageMaker SDK to track a Machine Learning experiment. \n", "\n", "We introduce two concepts in this notebook -\n", "\n", "* *Experiment:* An experiment is a collection of runs. When you initialize a run in your training loop, you include the name of the experiment that the run belongs to. Experiment names must be unique within your AWS account. \n", "* *Run:* A run consists of all the inputs, parameters, configurations, and results for one iteration of model training. Initialize an experiment run for tracking a training job with Run(). \n", "\n", "To execute this notebook in SageMaker Studio, you should select the `PyTorch 1.12 Python 3.8 CPU Optimizer image`.\n", "\n", "\n", "You can track artifacts for experiments, including datasets, algorithms, hyperparameters and metrics. Experiments executed on SageMaker such as SageMaker training jobs are automatically tracked and any existing SageMaker experiment on your AWS account is automatically migrated to the new UI version.\n", "\n", "We demonstrate these capabilities through a PyTorch DDP - MNIST handwritten digits classification example. The experiment is organized as follows:\n", "\n", "1. Download and prepare the MNIST dataset.\n", "2. Train a Convolutional Neural Network (CNN) Model. Tune the hyperparameter that configures the number of hidden channels in the model. Track the parameter configurations and resulting model accuracy using the SageMaker Experiments Python SDK.\n", "3. Finally use the search and analytics capabilities of the SDK to search, compare and evaluate the performance of all model versions generated from model tuning in Step 2.\n", "4. We also show an example of tracing the complete lineage of a model version: the collection of all the data pre-processing and training configurations and inputs that went into creating that model version.\n", "\n", "Make sure you select the `PyTorch 1.12 Python 3.8 CPU Optimized` kernel in Amazon SageMaker Studio.\n", "\n", "## Runtime\n", "\n", "This notebook takes approximately 25 minutes to run.\n", "\n", "## Contents\n", "\n", "1. [Install modules](#Install-modules)\n", "1. [Setup](#Setup)\n", "1. [Download the dataset](#Download-the-dataset)\n", "1. 
[Step 1: Set up the Experiment](#Step-1:-Set-up-the-Experiment)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "aa27e803", "metadata": { "papermill": { "duration": 0.024762, "end_time": "2022-08-10T21:51:37.025914", "exception": false, "start_time": "2022-08-10T21:51:37.001152", "status": "completed" }, "tags": [] }, "source": [ "## Install modules" ] }, { "cell_type": "code", "execution_count": null, "id": "d0a1cd1f-0c57-4b21-aff7-225fcce94606", "metadata": { "tags": [] }, "outputs": [], "source": [ "import sys" ] }, { "cell_type": "code", "execution_count": null, "id": "f4d18318-8995-4711-8f76-321fa4654ac9", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "# Update boto3 and install a pinned sagemaker SDK version that includes the Experiments module\n", "!{sys.executable} -m pip uninstall -y sagemaker\n", "!{sys.executable} -m pip install --upgrade pip\n", "!{sys.executable} -m pip install --upgrade boto3 --no-cache-dir\n", "!{sys.executable} -m pip install --upgrade sagemaker==2.123.0 --no-cache-dir\n", "!{sys.executable} -m pip install --upgrade torch\n", "!{sys.executable} -m pip install --upgrade torchvision" ] }, { "attachments": {}, "cell_type": "markdown", "id": "b2c708fd", "metadata": { "papermill": { "duration": 0.808621, "end_time": "2022-08-10T21:53:54.461296", "exception": false, "start_time": "2022-08-10T21:53:53.652675", "status": "completed" }, "tags": [] }, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": null, "id": "0baec74a", "metadata": { "papermill": { "duration": 6.893331, "end_time": "2022-08-10T21:54:02.167120", "exception": false, "start_time": "2022-08-10T21:53:55.273789", "status": "completed" }, "tags": [] }, "outputs": [], "source": [ "import time\n", "import os\n", "import importlib\n", "import boto3\n", "import numpy as np\n", "import pandas as pd\n", "from IPython.display import set_matplotlib_formats\n", "from matplotlib import pyplot as plt\n", "from torchvision import datasets, transforms\n", "\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "from sagemaker.session import Session\n", "\n", "\n", "s3 = boto3.client(\"s3\")\n", "\n", "set_matplotlib_formats(\"retina\")" ] }, { "cell_type": "code", "execution_count": null, "id": "bdb38697", "metadata": { "papermill": { "duration": 1.907829, "end_time": "2022-08-10T21:54:04.863387", "exception": false, "start_time": "2022-08-10T21:54:02.955558", "status": "completed" }, "tags": [] }, "outputs": [], "source": [ "sm_sess = sagemaker.Session()\n", "sess = sm_sess.boto_session\n", "sm = sm_sess.sagemaker_client\n", "role = get_execution_role()\n", "region = sess.region_name" ] }, { "attachments": {}, "cell_type": "markdown", "id": "97f67a29", "metadata": { "papermill": { "duration": 0.796698, "end_time": "2022-08-10T21:54:06.154129", "exception": false, "start_time": "2022-08-10T21:54:05.357431", "status": "completed" }, "tags": [] }, "source": [ "## Download the dataset\n", "We download the MNIST handwritten digits dataset and then apply a normalization transform to each image."
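] }, { "attachments": {}, "cell_type": "markdown", "id": "f0c1a2b3", "metadata": {}, "source": [ "The constants passed to `transforms.Normalize` below (0.1307 and 0.3081) are the commonly used global mean and standard deviation of the MNIST training images. The next cell is a small optional sketch (not part of the training flow) of what the `ToTensor` + `Normalize` pipeline does, applied to a blank 28x28 image; it relies only on the imports from the Setup section plus `PIL`, which torchvision already depends on." ] }, { "cell_type": "code", "execution_count": null, "id": "f0c1a2b4", "metadata": { "tags": [] }, "outputs": [], "source": [ "from PIL import Image\n", "\n", "# Optional sketch: apply the same ToTensor + Normalize pipeline used below\n", "# to a blank 28x28 image. Every pixel maps to (0 - 0.1307) / 0.3081 ~ -0.4242.\n", "tfm = transforms.Compose(\n", "    [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n", ")\n", "blank = Image.fromarray(np.zeros((28, 28), dtype=np.uint8))\n", "x = tfm(blank)\n", "print(x.shape, x.min().item(), x.max().item())"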
] }, { "cell_type": "code", "execution_count": null, "id": "a5afddda", "metadata": { "papermill": { "duration": 4.705622, "end_time": "2022-08-10T21:54:11.677415", "exception": false, "start_time": "2022-08-10T21:54:06.971793", "status": "completed" }, "tags": [] }, "outputs": [], "source": [ "bucket = sm_sess.default_bucket()\n", "prefix = \"DEMO-mnist\"\n", "print(\"Using S3 location: s3://\" + bucket + \"/\" + prefix + \"/\")\n", "\n", "datasets.MNIST.urls = [\n", "    f\"https://sagemaker-example-files-prod-{region}.s3.amazonaws.com/datasets/image/MNIST/train-images-idx3-ubyte.gz\",\n", "    f\"https://sagemaker-example-files-prod-{region}.s3.amazonaws.com/datasets/image/MNIST/train-labels-idx1-ubyte.gz\",\n", "    f\"https://sagemaker-example-files-prod-{region}.s3.amazonaws.com/datasets/image/MNIST/t10k-images-idx3-ubyte.gz\",\n", "    f\"https://sagemaker-example-files-prod-{region}.s3.amazonaws.com/datasets/image/MNIST/t10k-labels-idx1-ubyte.gz\",\n", "]\n", "\n", "# Download the dataset to the ./mnist folder, then load and normalize the images\n", "train_set = datasets.MNIST(\n", "    \"mnist\",\n", "    train=True,\n", "    transform=transforms.Compose(\n", "        [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n", "    ),\n", "    download=True,\n", ")\n", "\n", "test_set = datasets.MNIST(\n", "    \"mnist\",\n", "    train=False,\n", "    transform=transforms.Compose(\n", "        [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n", "    ),\n", "    download=False,\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "bf4d2cf2", "metadata": { "papermill": { "duration": 0.424392, "end_time": "2022-08-10T21:54:12.574066", "exception": false, "start_time": "2022-08-10T21:54:12.149674", "status": "completed" }, "tags": [] }, "source": [ "View an example image from the dataset." ] }, { "cell_type": "code", "execution_count": null, "id": "baa9b226", "metadata": { "papermill": { "duration": 1.398533, "end_time": "2022-08-10T21:54:14.255018", "exception": false, "start_time": "2022-08-10T21:54:12.856485", "status": "completed" }, "tags": [] }, "outputs": [], "source": [ "plt.imshow(train_set.data[2].numpy())" ] }, { "attachments": {}, "cell_type": "markdown", "id": "2095111d", "metadata": { "papermill": { "duration": 0.812931, "end_time": "2022-08-10T21:54:15.860198", "exception": false, "start_time": "2022-08-10T21:54:15.047267", "status": "completed" }, "tags": [] }, "source": [ "After transforming the images in the dataset, we upload the dataset to S3."
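] }, { "attachments": {}, "cell_type": "markdown", "id": "f0c1a2b5", "metadata": {}, "source": [ "Before uploading, you can optionally list the files that torchvision wrote locally, as a quick sanity check. This is a minimal sketch; the `mnist` folder comes from the download step above." ] }, { "cell_type": "code", "execution_count": null, "id": "f0c1a2b6", "metadata": { "tags": [] }, "outputs": [], "source": [ "import os\n", "\n", "# Optional: list the local MNIST files created by the download step.\n", "for root, _, files in os.walk(\"mnist\"):\n", "    for name in files:\n", "        path = os.path.join(root, name)\n", "        print(path, os.path.getsize(path), \"bytes\")"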
] }, { "cell_type": "code", "execution_count": null, "id": "f5381859", "metadata": { "papermill": { "duration": 5.752617, "end_time": "2022-08-10T21:54:22.428962", "exception": false, "start_time": "2022-08-10T21:54:16.676345", "status": "completed" }, "tags": [] }, "outputs": [], "source": [ "inputs = sagemaker.Session().upload_data(path=\"mnist\", bucket=bucket, key_prefix=prefix)\n", "print(\"S3 path for data: \", inputs)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "8349b52b-c984-4924-978f-a53b6e646071", "metadata": {}, "source": [ "## Prepare Training Script for Distributed Data Parallel" ] }, { "cell_type": "code", "execution_count": null, "id": "5064acca-c600-4797-b59b-996d3abd2329", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile ./mnist_ddp.py\n", "\n", "\n", "import argparse\n", "import json\n", "import logging\n", "import os\n", "import sys\n", "import time\n", "from os.path import join\n", "\n", "os.system(\"pip install -U sagemaker\")\n", "\n", "import boto3\n", "import torch\n", "import torch.distributed as dist\n", "import torch.nn as nn\n", "import torch.nn.functional as F\n", "import torch.optim as optim\n", "import torch.utils.data\n", "import torch.utils.data.distributed\n", "from torchvision import datasets, transforms\n", "from sagemaker.session import Session\n", "from sagemaker.experiments.run import Run, load_run\n", "\n", "logger = logging.getLogger(__name__)\n", "logger.setLevel(logging.DEBUG)\n", "logger.addHandler(logging.StreamHandler(sys.stdout))\n", "\n", "boto_session = boto3.session.Session(region_name=os.environ[\"AWS_REGION\"])\n", "sagemaker_session = Session(boto_session=boto_session)\n", "\n", "\n", "if \"SAGEMAKER_METRICS_DIRECTORY\" in os.environ:\n", " log_file_handler = logging.FileHandler(\n", " join(os.environ[\"SAGEMAKER_METRICS_DIRECTORY\"], \"metrics.json\")\n", " )\n", " formatter = logging.Formatter(\n", " \"{'time':'%(asctime)s', 'name': '%(name)s', \\\n", " 'level': '%(levelname)s', 'message': '%(message)s'}\",\n", " style=\"%\",\n", " )\n", " log_file_handler.setFormatter(formatter)\n", " logger.addHandler(log_file_handler)\n", "\n", "\n", "# Based on https://github.com/pytorch/examples/blob/master/mnist/main.py\n", "class Net(nn.Module):\n", " def __init__(self, hidden_channels, kernel_size, drop_out):\n", " super(Net, self).__init__()\n", " self.conv1 = nn.Conv2d(1, hidden_channels, kernel_size=kernel_size)\n", " self.conv2 = nn.Conv2d(hidden_channels, 20, kernel_size=kernel_size)\n", " self.conv2_drop = nn.Dropout2d(p=drop_out)\n", " self.fc1 = nn.Linear(320, 50)\n", " self.fc2 = nn.Linear(50, 10)\n", "\n", " def forward(self, x):\n", " x = F.relu(F.max_pool2d(self.conv1(x), 2))\n", " x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n", " x = x.view(-1, 320)\n", " x = F.relu(self.fc1(x))\n", " x = F.dropout(x, training=self.training)\n", " x = self.fc2(x)\n", " return F.log_softmax(x, dim=1)\n", "\n", "\n", "def _get_train_data_loader(batch_size, training_dir, is_distributed, **kwargs):\n", " logger.info(\"Get train data loader\")\n", " dataset = datasets.MNIST(\n", " training_dir,\n", " train=True,\n", " transform=transforms.Compose(\n", " [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n", " ),\n", " download=False,\n", " )\n", " train_sampler = (\n", " torch.utils.data.distributed.DistributedSampler(dataset) if is_distributed else None\n", " )\n", " return torch.utils.data.DataLoader(\n", " dataset,\n", " batch_size=batch_size,\n", " shuffle=train_sampler is None,\n", " 
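# A DistributedSampler gives each process its own shard of the dataset and\n", "        # performs the shuffling itself, which is why shuffle above is enabled\n", "        # only when no sampler is supplied.\n", "        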
sampler=train_sampler,\n", "        **kwargs,\n", "    )\n", "\n", "\n", "def _get_test_data_loader(test_batch_size, training_dir, **kwargs):\n", "    logger.info(\"Get test data loader\")\n", "    return torch.utils.data.DataLoader(\n", "        datasets.MNIST(\n", "            training_dir,\n", "            train=False,\n", "            transform=transforms.Compose(\n", "                [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n", "            ),\n", "            download=False,\n", "        ),\n", "        batch_size=test_batch_size,\n", "        shuffle=True,\n", "        **kwargs,\n", "    )\n", "\n", "\n", "def _average_gradients(model):\n", "    # Average gradients across workers: all-reduce the sum, then divide by world size.\n", "    size = float(dist.get_world_size())\n", "    for param in model.parameters():\n", "        dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)\n", "        param.grad.data /= size\n", "\n", "\n", "def train(args, tracker=None):\n", "    print(\"------ number of hosts --------\", len(args.hosts))\n", "    is_distributed = len(args.hosts) > 1 and args.backend is not None\n", "    logger.debug(\"Distributed training - {}\".format(is_distributed))\n", "    use_cuda = args.num_gpus > 0\n", "    logger.debug(\"Number of gpus available - {}\".format(args.num_gpus))\n", "    kwargs = {\"num_workers\": 1, \"pin_memory\": True} if use_cuda else {}\n", "    device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n", "    rank = None\n", "\n", "    if is_distributed:\n", "        # Initialize the distributed environment.\n", "        world_size = len(args.hosts)\n", "        os.environ[\"WORLD_SIZE\"] = str(world_size)\n", "        host_rank = args.hosts.index(args.current_host)\n", "        os.environ[\"RANK\"] = str(host_rank)\n", "        dist.init_process_group(backend=args.backend, rank=host_rank, world_size=world_size)\n", "        rank = dist.get_rank()\n", "        print(\"------- rank --------\", rank)\n", "        logger.info(\n", "            \"Initialized the distributed environment: '{}' backend on {} nodes. \".format(\n", "                args.backend, dist.get_world_size()\n", "            )\n", "            + \"Current host rank is {}. 
Number of gpus: {}\".format(dist.get_rank(), args.num_gpus)\n", "        )\n", "\n", "    # set the seed for generating random numbers\n", "    torch.manual_seed(args.seed)\n", "    if use_cuda:\n", "        torch.cuda.manual_seed(args.seed)\n", "\n", "    train_loader = _get_train_data_loader(args.batch_size, args.data_dir, is_distributed, **kwargs)\n", "    test_loader = _get_test_data_loader(args.test_batch_size, args.data_dir, **kwargs)\n", "\n", "    logger.info(\n", "        \"Processes {}/{} ({:.0f}%) of train data\".format(\n", "            len(train_loader.sampler),\n", "            len(train_loader.dataset),\n", "            100.0 * len(train_loader.sampler) / len(train_loader.dataset),\n", "        )\n", "    )\n", "\n", "    logger.info(\n", "        \"Processes {}/{} ({:.0f}%) of test data\".format(\n", "            len(test_loader.sampler),\n", "            len(test_loader.dataset),\n", "            100.0 * len(test_loader.sampler) / len(test_loader.dataset),\n", "        )\n", "    )\n", "\n", "    model = Net(args.hidden_channels, args.kernel_size, args.dropout).to(device)\n", "    if is_distributed and use_cuda:\n", "        # multi-machine multi-gpu case\n", "        model = torch.nn.parallel.DistributedDataParallel(model)\n", "    else:\n", "        # single-machine multi-gpu case or single-machine or multi-machine cpu case\n", "        model = torch.nn.DataParallel(model)\n", "\n", "    if args.optimizer == \"sgd\":\n", "        optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)\n", "    else:\n", "        optimizer = optim.Adam(model.parameters(), lr=args.lr)\n", "\n", "    with load_run(sagemaker_session=sagemaker_session) as run:\n", "        run.log_parameters(vars(args))\n", "        for epoch in range(1, args.epochs + 1):\n", "            model.train()\n", "            for batch_idx, (data, target) in enumerate(train_loader, 1):\n", "                data, target = data.to(device), target.to(device)\n", "                optimizer.zero_grad()\n", "                output = model(data)\n", "                loss = F.nll_loss(output, target)\n", "                loss.backward()\n", "                if is_distributed and not use_cuda:\n", "                    # average gradients manually for multi-machine cpu case only\n", "                    _average_gradients(model)\n", "                optimizer.step()\n", "                if batch_idx % args.log_interval == 0 and rank == 0:\n", "                    logger.info(\n", "                        \"Train Epoch: {} [{}/{} ({:.0f}%)], Train Loss: {:.6f};\".format(\n", "                            epoch,\n", "                            batch_idx * len(data),\n", "                            len(train_loader.sampler),\n", "                            100.0 * batch_idx / len(train_loader),\n", "                            loss.item(),\n", "                        )\n", "                    )\n", "            if rank == 0:\n", "                test_loss, correct, target, pred = test(model, test_loader, device, tracker)\n", "                logger.info(\n", "                    \"Test Average loss: {:.4f}, Test Accuracy: {:.0f}%;\\n\".format(\n", "                        test_loss, 100.0 * correct / len(test_loader.dataset)\n", "                    )\n", "                )\n", "                run.log_metric(name=\"train_loss\", value=loss.item(), step=epoch)\n", "                run.log_metric(name=\"test_loss\", value=test_loss, step=epoch)\n", "                run.log_metric(\n", "                    name=\"test_accuracy\",\n", "                    value=100.0 * correct / len(test_loader.dataset),\n", "                    step=epoch,\n", "                )\n", "        save_model(model, args.model_dir)\n", "\n", "\n", "def test(model, test_loader, device, tracker=None):\n", "    model.eval()\n", "    test_loss = 0\n", "    correct = 0\n", "    with torch.no_grad():\n", "        for data, target in test_loader:\n", "            data, target = data.to(device), target.to(device)\n", "            output = model(data)\n", "            test_loss += F.nll_loss(output, target, reduction=\"sum\").item()  # sum up batch loss\n", "            pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability\n", "            correct += pred.eq(target.view_as(pred)).sum().item()\n", "\n", "    test_loss /= len(test_loader.dataset)\n", "    return test_loss, correct, target, pred\n", "\n", "\n", "# model_fn is the hook the SageMaker PyTorch serving container calls to load\n", "# the trained model for inference; the architecture hyperparameters are read\n", "# from environment variables so the same Net can be rebuilt.\n", "def 
model_fn(model_dir):\n", "    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "\n", "    hidden_channels = int(os.environ.get(\"hidden_channels\", \"5\"))\n", "    kernel_size = int(os.environ.get(\"kernel_size\", \"5\"))\n", "    dropout = float(os.environ.get(\"dropout\", \"0.5\"))\n", "    model = torch.nn.DataParallel(Net(hidden_channels, kernel_size, dropout))\n", "    with open(os.path.join(model_dir, \"model.pth\"), \"rb\") as f:\n", "        model.load_state_dict(torch.load(f))\n", "    return model.to(device)\n", "\n", "\n", "def save_model(model, model_dir):\n", "    logger.info(\"Saving the model.\")\n", "    path = os.path.join(model_dir, \"model.pth\")\n", "    # recommended way from http://pytorch.org/docs/master/notes/serialization.html\n", "    torch.save(model.cpu().state_dict(), path)\n", "\n", "\n", "if __name__ == \"__main__\":\n", "    parser = argparse.ArgumentParser()\n", "\n", "    # Data and model checkpoints directories\n", "    parser.add_argument(\n", "        \"--batch-size\",\n", "        type=int,\n", "        default=64,\n", "        metavar=\"N\",\n", "        help=\"input batch size for training (default: 64)\",\n", "    )\n", "    parser.add_argument(\n", "        \"--test-batch-size\",\n", "        type=int,\n", "        default=1000,\n", "        metavar=\"N\",\n", "        help=\"input batch size for testing (default: 1000)\",\n", "    )\n", "    parser.add_argument(\n", "        \"--epochs\",\n", "        type=int,\n", "        default=10,\n", "        metavar=\"N\",\n", "        help=\"number of epochs to train (default: 10)\",\n", "    )\n", "    parser.add_argument(\"--optimizer\", type=str, default=\"sgd\", help=\"optimizer for training.\")\n", "    parser.add_argument(\n", "        \"--lr\",\n", "        type=float,\n", "        default=0.01,\n", "        metavar=\"LR\",\n", "        help=\"learning rate (default: 0.01)\",\n", "    )\n", "    parser.add_argument(\n", "        \"--dropout\",\n", "        type=float,\n", "        default=0.5,\n", "        metavar=\"DROP\",\n", "        help=\"dropout rate (default: 0.5)\",\n", "    )\n", "    parser.add_argument(\n", "        \"--kernel_size\",\n", "        type=int,\n", "        default=5,\n", "        metavar=\"KERNEL\",\n", "        help=\"conv2d filter kernel size (default: 5)\",\n", "    )\n", "    parser.add_argument(\n", "        \"--momentum\",\n", "        type=float,\n", "        default=0.5,\n", "        metavar=\"M\",\n", "        help=\"SGD momentum (default: 0.5)\",\n", "    )\n", "    parser.add_argument(\n", "        \"--hidden_channels\",\n", "        type=int,\n", "        default=10,\n", "        help=\"number of channels in hidden conv layer\",\n", "    )\n", "    parser.add_argument(\"--seed\", type=int, default=1, metavar=\"S\", help=\"random seed (default: 1)\")\n", "    parser.add_argument(\n", "        \"--log-interval\",\n", "        type=int,\n", "        default=100,\n", "        metavar=\"N\",\n", "        help=\"how many batches to wait before logging training status\",\n", "    )\n", "    parser.add_argument(\n", "        \"--backend\",\n", "        type=str,\n", "        default=\"nccl\",\n", "        help=\"backend for distributed training (gloo on cpu, gloo or nccl on gpu)\",\n", "    )\n", "\n", "    # Container environment\n", "    parser.add_argument(\"--hosts\", type=json.loads, default=json.loads(os.environ[\"SM_HOSTS\"]))\n", "    parser.add_argument(\"--current-host\", type=str, default=os.environ[\"SM_CURRENT_HOST\"])\n", "    parser.add_argument(\"--model-dir\", type=str, default=os.environ[\"SM_MODEL_DIR\"])\n", "    parser.add_argument(\"--data-dir\", type=str, default=os.environ[\"SM_CHANNEL_TRAINING\"])\n", "    parser.add_argument(\"--num-gpus\", type=int, default=os.environ[\"SM_NUM_GPUS\"])\n", "\n", "    args = parser.parse_args()\n", "\n", "    train(args)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "04ef9e20", "metadata": { "papermill": { "duration": 0.806991, 
"end_time": "2022-08-10T21:54:27.281808", "exception": false, "start_time": "2022-08-10T21:54:26.474817", "status": "completed" }, "tags": [] }, "source": [ "## Step 1: Set up the Experiment" ] }, { "attachments": {}, "cell_type": "markdown", "id": "dc161575", "metadata": { "papermill": { "duration": 0.72498, "end_time": "2022-08-10T21:54:28.880347", "exception": false, "start_time": "2022-08-10T21:54:28.155367", "status": "completed" }, "tags": [] }, "source": [ "### Create an Experiment" ] }, { "cell_type": "code", "execution_count": null, "id": "e392c92e", "metadata": { "papermill": { "duration": 0.912762, "end_time": "2022-08-10T21:54:34.077683", "exception": false, "start_time": "2022-08-10T21:54:33.164921", "status": "completed" }, "tags": [] }, "outputs": [], "source": [ "from sagemaker.pytorch import PyTorch" ] }, { "attachments": {}, "cell_type": "markdown", "id": "13d272ee", "metadata": { "papermill": { "duration": 0.804412, "end_time": "2022-08-10T21:54:37.481946", "exception": false, "start_time": "2022-08-10T21:54:36.677534", "status": "completed" }, "tags": [] }, "source": [ "If you want to run the following five training jobs in parallel, you may need to increase your resource limit. Here we run them sequentially." ] }, { "cell_type": "code", "execution_count": null, "id": "ef52c1e7-5881-4e41-a9e5-b7ceb6900d40", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "from sagemaker.experiments.run import Run\n", "\n", "experiment_name = \"distributed-train-job-experiment-final\"\n", "run_name = \"run-ddp-1\"\n", "with Run(\n", " experiment_name=experiment_name,\n", " run_name=run_name,\n", " sagemaker_session=sm_sess,\n", ") as run:\n", " est = PyTorch(\n", " entry_point=\"./mnist_ddp.py\",\n", " role=role,\n", " model_dir=False,\n", " framework_version=\"1.12\",\n", " py_version=\"py38\",\n", " instance_type=\"ml.g4dn.12xlarge\",\n", " instance_count=2,\n", " hyperparameters={\n", " \"epochs\": 10,\n", " \"hidden_channels\": 20,\n", " \"backend\": \"nccl\",\n", " \"dropout\": 0.2,\n", " \"kernel_size\": 5,\n", " \"optimizer\": \"sgd\",\n", " },\n", " keep_alive_period_in_seconds=10 * 60, # keeping the instance warm for 10mins\n", " )\n", " est.fit(\n", " inputs={\"training\": inputs},\n", " )" ] }, { "attachments": {}, "cell_type": "markdown", "id": "3072dd93", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/sagemaker-experiments|sagemaker_job_tracking|pytorch_distributed_training_experiment.ipynb)\n" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (PyTorch 1.12 Python 3.8 CPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-2:429704687514:image/pytorch-1.12-cpu-py38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" }, "papermill": { "default_parameters": {}, "duration": 1889.723815, "end_time": "2022-08-10T22:23:05.721503", "environment_variables": {}, "exception": null, "input_path": "mnist-handwritten-digits-classification-experiment.ipynb", "output_path": "/opt/ml/processing/output/mnist-handwritten-digits-classification-experiment-2022-08-10-21-39-25.ipynb", "parameters": { "kms_key": "arn:aws:kms:us-west-2:000000000000:1234abcd-12ab-34cd-56ef-1234567890ab" }, "start_time": "2022-08-10T21:51:35.997688", "version": "2.3.4" } }, "nbformat": 4, "nbformat_minor": 5 }