{
"cells": [
{
"cell_type": "markdown",
"id": "de7e4dca-3653-4bf6-80a0-d964492d1d91",
"metadata": {},
"source": [
"# Track an experiment while training a Pytorch model locally or in your notebook\n",
"\n",
"***\n",
"To execute this notebook in SageMaker Studio, you should select the `PyTorch 1.12 Python 3.8 CPU Optimizer image`.\n",
"***\n",
"\n",
"This notebook shows how you can use the SageMaker SDK to track a Machine Learning experiment using a Pytorch model trained locally.\n",
"\n",
"We introduce two concepts in this notebook -\n",
"\n",
"* *Experiment:* An experiment is a collection of runs. When you initialize a run in your training loop, you include the name of the experiment that the run belongs to. Experiment names must be unique within your AWS account. \n",
"* *Run:* A run consists of all the inputs, parameters, configurations, and results for one iteration of model training. Initialize an experiment run for tracking a training job with Run(). \n",
"\n",
"\n",
"\n",
"\n",
"You can track artifacts for experiments, including datasets, algorithms, hyperparameters and metrics. Experiments executed on SageMaker such as SageMaker training jobs are automatically tracked and any existen SageMaker experiment on your AWS account is automatically migrated to the new UI version.\n",
"\n",
"In this notebook we will demonstrate the capabilities through an MNIST handwritten digits classification example. The notebook is organized as follow:\n",
"\n",
"1. Download and prepare the MNIST dataset\n",
"2. Train a Convolutional Neural Network (CNN) Model and log the model training metrics\n",
"3. Tune the hyperparameters that configures the number of hidden channels and the optimized in the model. Track teh parameter's configuration, resulting model loss and accuracy and automatically plot a confusion matrix using the Experiments capabilities of the SageMaker SDK.\n",
"4. Analyse your model results and plot graphs comparing your model different runs generated from the tunning step 3.\n",
"\n",
"## Runtime\n",
"This notebook takes approximately 6~8 minutes to run.\n",
"\n",
"## Reference to the github repo\n",
"\n",
"[github link](https://github.com/aws/amazon-sagemaker-examples/tree/main/sagemaker-experiments)\n",
"\n",
"\n",
"## Contents\n",
"1. [Install modules](#Install-modules)\n",
"1. [Setup](#Setup)\n",
"1. [Download the dataset](#Download-the-dataset)\n",
"1. [Create experiment and log dataset information](#Create-experiment-and-log-dataset-information)\n",
"1. [Create model training functions](#Create-model-training-functions)\n",
"1. [Run first experiment](#Run-first-experiment)\n",
"1. [Run multiple experiments](#Run-multiple-experiments)"
]
},
{
"cell_type": "markdown",
"id": "1141d3f8-45ed-4a56-8651-8964446befac",
"metadata": {},
"source": [
"## Install modules\n",
"\n",
"Let's ensure we have the latest SageMaker SDK available, including the SageMaker Experiments functionality"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba6534d0-316b-4227-af84-37349d39c81b",
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"# update boto3 and sagemaker to ensure latest SDK version\n",
"%pip install --upgrade pip\n",
"%pip install --upgrade boto3\n",
"%pip install --upgrade sagemaker\n",
"%pip install torch\n",
"%pip install torchvision"
]
},
{
"cell_type": "markdown",
"id": "3368d208-aebb-4844-bf27-2b2e373ef3d2",
"metadata": {
"tags": []
},
"source": [
"## Setup\n",
"\n",
"Import required libraries and set logging and experiment configuration\n",
"\n",
"SageMaker Experiments now provides the `Run` class that allows you to create a new experiment run. You can retrieve an existent experiment run using the `load_run` function.\n",
"\n",
"You also define a unique name for the experiment that will be used to create and load the experiment later in this notebook"
]
},
{
"cell_type": "markdown",
"id": "664c8740-73d6-41ca-bcd7-5fe0041e6342",
"metadata": {},
"source": [
"[documentation on experiment api](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-create.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c42ef846-cfcc-4258-81d4-f628930a8544",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"\n",
"import torch\n",
"from IPython.display import set_matplotlib_formats\n",
"from matplotlib import pyplot as plt\n",
"\n",
"# A collection of parameters, metrics, and artifacts to create a ML model.\n",
"from sagemaker.experiments.run import Run, load_run\n",
"\n",
"# to manage interactions with the Amazon SageMaker APIs and any other AWS services needed.\n",
"from sagemaker.session import Session\n",
"\n",
"# adding unique id for a name\n",
"from sagemaker.utils import name_from_base, unique_name_from_base\n",
"from torchvision import datasets, transforms"
]
},
{
"cell_type": "markdown",
"id": "d9dc0054-d7dd-4ec8-b1e9-0b292fc7b1c0",
"metadata": {},
"source": [
"## Download the dataset\n",
"Let's now use the torchvision library to download the MNIST dataset from tensorflow and apply a transformation on each image"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85981bdd-211f-4fb6-8c66-76d5c08557f8",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"datasets.MNIST.mirrors = [\"https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/MNIST/\"]\n",
"\n",
"\n",
"train_set = datasets.MNIST(\n",
" \"mnist_data\",\n",
" train=True,\n",
" transform=transforms.Compose(\n",
" [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n",
" ),\n",
" download=True,\n",
")\n",
"\n",
"test_set = datasets.MNIST(\n",
" \"mnist_data\",\n",
" train=False,\n",
" transform=transforms.Compose(\n",
" [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]\n",
" ),\n",
" download=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1d7fc1d6-5b50-4a96-9e02-1d623072dd39",
"metadata": {},
"source": [
"View and example image from the dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "786002b6-90c3-4910-8ac2-1970f6f27953",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"print(\" --training: \", len(train_set.data), \" --testing: \", len(test_set.data))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f8cf8ed-5b35-4474-88b6-f98e0179d4d4",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"plt.imshow(train_set.data[2].numpy())"
]
},
{
"cell_type": "markdown",
"id": "cc1913d9-fe5f-4cf1-aca6-c6f6c12bd21c",
"metadata": {},
"source": [
"## Create experiment and log dataset information\n",
"\n",
"Create an experiment run to track the model training. SageMaker Experiments is a great way to organize your data science work. You can create an experiment to organize all your model runs and analyse the different model metrics with the SageMaker Experiments UI.\n",
"\n",
"Here we create an experiment run and log parameters for the size of our training and test datasets. We also log all the downloaded files as inputs to our model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3dadf8a1-77fd-42e5-9e55-3ec7081f3674",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"experiment_name = \"local-experiment-example\"\n",
"run_name = name_from_base(\"experiment-run\", short=True)\n",
"print(f\"Experiment name: {experiment_name}\\nRun name: {run_name}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "93f266e0-d73d-452c-a3ed-51b0fc48075d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# create an experiment and start a new run\n",
"with Run(experiment_name=experiment_name, run_name=run_name, sagemaker_session=Session()) as run:\n",
" run.log_parameters(\n",
" {\"num_train_samples\": len(train_set.data), \"num_test_samples\": len(test_set.data)},\n",
" )\n",
" for f in os.listdir(train_set.raw_folder):\n",
" print(\"Logging\", train_set.raw_folder + \"/\" + f)\n",
" run.log_file(train_set.raw_folder + \"/\" + f, name=f, is_output=False)"
]
},
{
"cell_type": "markdown",
"id": "259c4fca-5128-4213-8065-cb68bd50b973",
"metadata": {},
"source": [
"Checking the SageMaker Experiments UI, you can observe that a new Experiment was created with the run associated to it.\n",
"\n",
"
\n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"id": "87b3cdd3-7b79-4ef2-a2ae-d48b3e5ae393",
"metadata": {},
"source": [
"## Create model training functions\n",
"\n",
"Define your CNN architecture and training function. You can use `run.log_metric` with a defined step to log the metrics of your model for each epoch, in order to plot those metrics with SageMaker Experiments. With `run.log_confusion_matrix` you can automatically plot the confusion matrix of your model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ba41e3d-c6b7-4758-81c1-7e6fc1230a4e",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Based on https://github.com/pytorch/examples/blob/master/mnist/main.py\n",
"class Net(torch.nn.Module):\n",
" def __init__(self, hidden_channels, kernel_size, drop_out):\n",
" super(Net, self).__init__()\n",
" self.conv1 = torch.nn.Conv2d(1, hidden_channels, kernel_size=kernel_size)\n",
" self.conv2 = torch.nn.Conv2d(hidden_channels, 20, kernel_size=kernel_size)\n",
" self.conv2_drop = torch.nn.Dropout2d(p=drop_out)\n",
" self.fc1 = torch.nn.Linear(320, 50)\n",
" self.fc2 = torch.nn.Linear(50, 10)\n",
"\n",
" def forward(self, x):\n",
" x = torch.nn.functional.relu(torch.nn.functional.max_pool2d(self.conv1(x), 2))\n",
" x = torch.nn.functional.relu(\n",
" torch.nn.functional.max_pool2d(self.conv2_drop(self.conv2(x)), 2)\n",
" )\n",
" x = x.view(-1, 320)\n",
" x = torch.nn.functional.relu(self.fc1(x))\n",
" x = torch.nn.functional.dropout(x, training=self.training)\n",
" x = self.fc2(x)\n",
" return torch.nn.functional.log_softmax(x, dim=1)"
]
},
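{
"cell_type": "markdown",
"id": "f3a7c1e9-5b2d-4a8f-9c6e-1d4b7a2f8e35",
"metadata": {},
"source": [
"As a quick, optional check of the architecture, the cell below passes a random batch with the MNIST input shape through `Net` and prints the output shape, which should be `(batch_size, 10)` log-probabilities. The batch size and `hidden_channels` value here are illustrative; note that the `x.view(-1, 320)` reshape assumes `kernel_size=5` on 28x28 inputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d5e2b7a-3c1f-4e6d-8a9b-0f2c4e6d8a1b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Optional sanity check with illustrative values: a random batch shaped like MNIST images.\n",
"# With kernel_size=5 on 28x28 inputs, the feature maps flatten to 320 values,\n",
"# so the forward pass returns log-probabilities of shape (batch_size, 10).\n",
"sample_batch = torch.randn(4, 1, 28, 28)\n",
"sanity_net = Net(hidden_channels=10, kernel_size=5, drop_out=0.5)\n",
"print(sanity_net(sample_batch).shape)  # expected: torch.Size([4, 10])"
]
},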
{
"cell_type": "code",
"execution_count": null,
"id": "a15d44f7-e8bb-43d6-a433-54a5912283c2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def log_performance(model, data_loader, device, epoch, run, metric_type=\"Test\"):\n",
" \"\"\"\n",
" Compute test metrics and log to SageMkaer expriment Run\n",
" \"\"\"\n",
" model.eval()\n",
" loss = 0\n",
" correct = 0\n",
" with torch.no_grad():\n",
" for data, target in data_loader:\n",
" data, target = data.to(device), target.to(device)\n",
" output = model(data)\n",
" loss += torch.nn.functional.nll_loss(\n",
" output, target, reduction=\"sum\"\n",
" ).item() # sum up batch loss\n",
" # get the index of the max log-probability\n",
" pred = output.max(1, keepdim=True)[1]\n",
" correct += pred.eq(target.view_as(pred)).sum().item()\n",
" loss /= len(data_loader.dataset)\n",
" accuracy = 100.0 * correct / len(data_loader.dataset)\n",
" print(\"loss: \", loss)\n",
" print(\"accuracy: \", accuracy)\n",
"\n",
" # log metrics\n",
" run.log_metric(name=metric_type + \":loss\", value=loss, step=epoch)\n",
" run.log_metric(name=metric_type + \":accuracy\", value=accuracy, step=epoch)\n",
"\n",
"\n",
"def train_model(\n",
" run, train_set, test_set, data_dir=\"mnist_data\", optimizer=\"sgd\", epochs=10, hidden_channels=10\n",
"):\n",
" \"\"\"\n",
" Function that trains the CNN classifier to identify the MNIST digits.\n",
" Args:\n",
" run (sagemaker.experiments.run.Run): SageMaker Experiment run object\n",
" train_set (torchvision.datasets.mnist.MNIST): train dataset\n",
" test_set (torchvision.datasets.mnist.MNIST): test dataset\n",
" data_dir (str): local directory where the MNIST datasource is stored\n",
" optimizer (str): the optimization algorthm to use for training your CNN\n",
" available options are sgd and adam\n",
" epochs (int): number of complete pass of the training dataset through the algorithm\n",
" hidden_channels (int): number of hidden channels in your model\n",
" \"\"\"\n",
"\n",
" # log the parameters of your model\n",
" run.log_parameter(\"device\", \"cpu\")\n",
" run.log_parameters(\n",
" {\n",
" \"data_dir\": data_dir,\n",
" \"optimizer\": optimizer,\n",
" \"epochs\": epochs,\n",
" \"hidden_channels\": hidden_channels,\n",
" }\n",
" )\n",
"\n",
" # train the model on the CPU (no GPU)\n",
" device = torch.device(\"cpu\")\n",
"\n",
" # set the seed for generating random numbers\n",
" torch.manual_seed(42)\n",
"\n",
" train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)\n",
" test_loader = torch.utils.data.DataLoader(test_set, batch_size=1000, shuffle=True)\n",
" model = Net(hidden_channels, kernel_size=5, drop_out=0.5).to(device)\n",
" model = torch.nn.DataParallel(model)\n",
" momentum = 0.5\n",
" lr = 0.01\n",
" log_interval = 100\n",
" if optimizer == \"sgd\":\n",
" optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)\n",
" else:\n",
" optimizer = torch.optim.Adam(model.parameters(), lr=lr)\n",
"\n",
" for epoch in range(1, epochs + 1):\n",
" print(\"Training Epoch:\", epoch)\n",
" model.train()\n",
" for batch_idx, (data, target) in enumerate(train_loader, 1):\n",
" data, target = data.to(device), target.to(device)\n",
" optimizer.zero_grad()\n",
" output = model(data)\n",
" loss = torch.nn.functional.nll_loss(output, target)\n",
" loss.backward()\n",
" optimizer.step()\n",
"\n",
" log_performance(model, train_loader, device, epoch, run, \"Train\")\n",
" log_performance(model, test_loader, device, epoch, run, \"Test\")\n",
" # log confusion matrix\n",
" with torch.no_grad():\n",
" for data, target in test_loader:\n",
" data, target = data.to(device), target.to(device)\n",
" output = model(data)\n",
" pred = output.max(1, keepdim=True)[1]\n",
" run.log_confusion_matrix(target, pred, \"Confusion-Matrix-Test-Data\")"
]
},
{
"cell_type": "markdown",
"id": "586a8712-f4c9-4769-8e3f-299a45869980",
"metadata": {},
"source": [
"## Run first experiment\n",
"\n",
"You can load an existent run using the `load_run` function with `experiment_name` and `run_name` as parameters. \n",
"Here we train the CNN with 5 hidden channels and ADAM as optimizer. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e3e836e-2deb-4bc4-b246-062bb1a719de",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%%time\n",
"with load_run(\n",
" experiment_name=experiment_name,\n",
" run_name=run_name,\n",
" sagemaker_session=Session(),\n",
") as run:\n",
" train_model(\n",
" run=run,\n",
" train_set=train_set,\n",
" test_set=test_set,\n",
" epochs=6,\n",
" hidden_channels=3,\n",
" optimizer=\"SGD\",\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "df712f1f-c85d-4200-98b6-89a648e34cc7",
"metadata": {},
"source": [
"In the SageMaker Experiments UI, you can observe that the new model parameters are added to the run. The model training metrics are captured and can be used to plot graphs. Additionally, the confusion matrix graph is automatically plotted in the UI.\n",
"\n",
"
\n",
"
\n",
"
\n",
"
\n"
]
},
{
"cell_type": "markdown",
"id": "d8ef9a56-fff5-40a0-8d08-db766e66d7ec",
"metadata": {},
"source": [
"## Run multiple experiments\n",
"\n",
"You can now create multiple runs of your experiment using the functions created before"
]
},
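{
"cell_type": "markdown",
"id": "2b8d4f6a-7c9e-4a1b-8d3f-5e7a9c1b3d60",
"metadata": {},
"source": [
"The next cell is a minimal sketch of one way to do this: it loops over a small, illustrative grid of `hidden_channels` and optimizer values and starts one run per combination, reusing the `Run` context manager and the `train_model` function defined above. The grid and the `epochs=2` setting are examples only; larger values give better models but increase the runtime well beyond the estimate at the top of this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e1a3c5d-8f2b-4d7e-9a0c-4b6d8f0a2c71",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%%time\n",
"# Illustrative hyperparameter grid; each combination is tracked as its own run.\n",
"hidden_channels_options = [2, 10]\n",
"optimizer_options = [\"sgd\", \"adam\"]\n",
"\n",
"run_id = 0\n",
"for hidden_channels in hidden_channels_options:\n",
"    for optimizer_choice in optimizer_options:\n",
"        run_id += 1\n",
"        multi_run_name = name_from_base(f\"experiment-run-{run_id}\", short=True)\n",
"        print(f\"Starting {multi_run_name}: hidden_channels={hidden_channels}, optimizer={optimizer_choice}\")\n",
"        # each `with Run(...)` block creates a new run under the same experiment\n",
"        with Run(\n",
"            experiment_name=experiment_name,\n",
"            run_name=multi_run_name,\n",
"            sagemaker_session=Session(),\n",
"        ) as run:\n",
"            train_model(\n",
"                run=run,\n",
"                train_set=train_set,\n",
"                test_set=test_set,\n",
"                epochs=2,\n",
"                hidden_channels=hidden_channels,\n",
"                optimizer=optimizer_choice,\n",
"            )"
]
},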
{
"cell_type": "markdown",
"id": "5c978f9b-eba9-4db9-bdfa-01f6ee12210f",
"metadata": {
"tags": []
},
"source": [
"In the SageMaker Experiments UI, you can compare the different runs and analyze the metrics for those runs \n",
"\n",
"\n",
"
\n"
]
},
{
"cell_type": "markdown",
"id": "ca3963d8-1275-4915-ba8b-2d0b8c80dccc",
"metadata": {},
"source": [
"## Analysis of multiple runs"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5aa38f23-b063-4dfa-81dd-fb45a865d268",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import matplotlib as mpl\n",
"import matplotlib.pyplot as plt\n",
"import seaborn as sns\n",
"import seaborn.objects as so\n",
"from sagemaker.analytics import ExperimentAnalytics"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e82ee6e1-0b93-46e8-b677-2143541189f9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def analyze_experiment(parameter_names: str, metric_names: str, stat_name: str = \"Last\"):\n",
" re_expr = (\n",
" f\"(?:{'|'.join([f'{k}.*- {stat_name}' for k in metric_names] + parameter_names + ['DisplayName'])})\"\n",
" )\n",
"\n",
" trial_component_analytics = ExperimentAnalytics(\n",
" experiment_name=experiment_name,\n",
" parameter_names=parameter_names,\n",
" )\n",
" df = trial_component_analytics.dataframe()\n",
" # df = df[df[\"SourceArn\"].isna()]\n",
" df = df.filter(regex=re_expr)\n",
"\n",
" # join the categorical parameters\n",
" df_temp = df[parameter_names].select_dtypes(\"object\")\n",
"\n",
" cat_col_name = \"_\".join(df_temp.columns.values)\n",
"\n",
" if len(df_temp.columns) > 1:\n",
" df.loc[:, cat_col_name] = df_temp.astype(str).apply(\"_\".join, axis=1)\n",
" df = df.drop(columns=df_temp.columns.values)\n",
"\n",
" ordinal_params = df[parameter_names].select_dtypes(\"number\").columns.tolist()\n",
" df_plot = df.melt(id_vars=[\"DisplayName\"] + ordinal_params + [cat_col_name])\n",
" df_plot[[\"Dataset\", \"Metrics\"]] = (\n",
" df_plot.variable.str.split(\" - \").str[0].str.split(\":\", expand=True)\n",
" )\n",
" f = plt.Figure(\n",
" figsize=(8, 6 * len(ordinal_params)),\n",
" facecolor=\"w\",\n",
" layout=\"constrained\",\n",
" frameon=True,\n",
" )\n",
" f.suptitle(\"Experiment Analysis\")\n",
" sf = f.subfigures(1, len(ordinal_params))\n",
"\n",
" if isinstance(sf, mpl.figure.SubFigure):\n",
" sf = [sf]\n",
"\n",
" for k, p in zip(sf, ordinal_params):\n",
"\n",
" (\n",
" so.Plot(\n",
" df_plot,\n",
" y=\"value\",\n",
" x=p,\n",
" color=cat_col_name,\n",
" )\n",
" .facet(col=\"Dataset\", row=\"Metrics\")\n",
" .add(so.Dot())\n",
" .share(y=False)\n",
" .limit(y=(0, None))\n",
" .on(k)\n",
" .plot()\n",
" )\n",
" return f"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5a05115f-9f42-4c57-8a8c-f558d36da438",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"parameter_names = [\"hidden_channels\", \"optimizer\"]\n",
"metric_names = [\"accuracy\", \"loss\"]\n",
"stat_name = \"Last\"\n",
"\n",
"\n",
"analyze_experiment(parameter_names, metric_names)"
]
},
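{
"cell_type": "markdown",
"id": "8c0e2a4b-6d8f-4b1c-9e3a-7f9b1d3e5a82",
"metadata": {},
"source": [
"If you prefer working with the raw numbers, `ExperimentAnalytics` also lets you pull the parameters and metrics of all runs into a pandas DataFrame. The cell below is a small optional example; the variable names are illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f3b5d7e-9a2c-4e6f-8b0d-2c4e6a8b0d93",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Optional: inspect the raw run data of the experiment as a pandas DataFrame.\n",
"analytics = ExperimentAnalytics(experiment_name=experiment_name)\n",
"analytics_df = analytics.dataframe()\n",
"analytics_df.head()"
]
},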
{
"cell_type": "code",
"execution_count": null,
"id": "930c3a3f-c077-443f-a645-ad0e22743302",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"forced_instance_type": "ml.t3.medium",
"forced_lcc_arn": "",
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (PyTorch 1.12 Python 3.8 CPU Optimized)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/pytorch-1.12-cpu-py38"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}