{ "cells": [ { "cell_type": "markdown", "id": "94d85a49", "metadata": {}, "source": [ "# Compile and Train a Vision Transformer Model on the Caltech-256 Dataset using a Single Node " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "e5d6e9dc", "metadata": {}, "source": [ "1. [Introduction](#Introduction) \n", "2. [Development Environment and Permissions](#Development-Environment-and-Permissions)\n", " 1. [Installation](#Installation) \n", " 2. [SageMaker environment](#SageMaker-environment)\n", "3. [Working with the Caltech-256 dataset](#Working-with-the-Caltech-256-dataset) \n", "4. [SageMaker Training Job](#SageMaker-Training-Job) \n", " 1. [Training Setup](#Training-Setup) \n", " 2. [Training with Native TensorFlow](#Training-with-Native-TensorFlow) \n", " 3. [Training with Optimized TensorFlow](#Training-with-Optimized-TensorFlow) \n", "5. [Analysis](#Analysis)\n", " 1. [Savings from Training Compiler](#Savings-from-Training-Compiler)\n", " 2. [Convergence of Training](#Convergence-of-Training)\n", "6. [Clean up](#Clean-up)\n" ] }, { "cell_type": "markdown", "id": "1f0615a1", "metadata": {}, "source": [ "## SageMaker Training Compiler Overview\n", "\n", "SageMaker Training Compiler is a capability of SageMaker that makes hard-to-implement optimizations to reduce training time on GPU instances. The compiler optimizes Deep Learning (DL) models to accelerate training by more efficiently using SageMaker machine learning (ML) GPU instances. SageMaker Training Compiler is available at no additional charge within SageMaker and can help reduce total billable time as it accelerates training. \n", "\n", "SageMaker Training Compiler is integrated into the AWS Deep Learning Containers (DLCs). Using the SageMaker Training Compiler enabled AWS DLCs, you can compile and optimize training jobs on GPU instances with minimal changes to your code. Bring your deep learning models to SageMaker and enable SageMaker Training Compiler to accelerate the speed of your training job on SageMaker ML instances for accelerated computing. \n", "\n", "For more information, see [SageMaker Training Compiler](https://docs.aws.amazon.com/sagemaker/latest/dg/training-compiler.html) in the *Amazon SageMaker Developer Guide*.\n", "\n", "## Introduction\n", "\n", "In this demo, you'll use Amazon SageMaker Training Compiler to train the `Vision Transformer` model on the `Caltech-256` dataset. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. \n", "\n", "**NOTE:** You can run this demo in SageMaker Studio, SageMaker notebook instances, or your local machine with AWS CLI set up. If using SageMaker Studio or SageMaker notebook instances, make sure you choose one of the TensorFlow-based kernels, `Python 3 (TensorFlow x.y Python 3.x CPU Optimized)` or `conda_tensorflow_p39` respectively.\n", "\n", "**NOTE:** This notebook uses a `ml.p3.2xlarge` instance with a single GPU. However, it can easily be extended to multiple GPUs on a single node. If you don't have enough quota, see [Request a service quota increase for SageMaker resources](https://docs.aws.amazon.com/sagemaker/latest/dg/regions-quotas.html#service-limit-increase-request-procedure). " ] }, { "cell_type": "markdown", "id": "f204a759", "metadata": {}, "source": [ "## Development Environment \n" ] }, { "cell_type": "markdown", "id": "508ebf26", "metadata": {}, "source": [ "### Installation\n", "\n", "This example notebook requires **SageMaker Python SDK v2.115.0 or later**\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e2fab364", "metadata": {}, "outputs": [], "source": [ "!pip install \"sagemaker>=2.129\" botocore boto3 awscli matplotlib --upgrade" ] }, { "cell_type": "code", "execution_count": null, "id": "2a8e5dfc", "metadata": {}, "outputs": [], "source": [ "import botocore\n", "import boto3\n", "import sagemaker\n", "\n", "print(f\"botocore: {botocore.__version__}\")\n", "print(f\"boto3: {boto3.__version__}\")\n", "print(f\"sagemaker: {sagemaker.__version__}\")" ] }, { "cell_type": "markdown", "id": "fa0fae4b", "metadata": {}, "source": [ "### SageMaker environment " ] }, { "cell_type": "code", "execution_count": null, "id": "f257c805", "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "\n", "sess = sagemaker.Session()\n", "\n", "# SageMaker session bucket -> used for uploading data, models and logs\n", "# SageMaker will automatically create this bucket if it does not exist\n", "sagemaker_session_bucket = None\n", "if sagemaker_session_bucket is None and sess is not None:\n", " # set to default bucket if a bucket name is not given\n", " sagemaker_session_bucket = sess.default_bucket()\n", "\n", "role = sagemaker.get_execution_role()\n", "sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)\n", "\n", "print(f\"sagemaker role arn: {role}\")\n", "print(f\"sagemaker bucket: {sagemaker_session_bucket}\")\n", "print(f\"sagemaker session region: {sess.boto_region_name}\")" ] }, { "cell_type": "markdown", "id": "8ded0b2b", "metadata": {}, "source": [ "## Working with the Caltech-256 dataset\n", "\n", "We have hosted the [Caltech-256](https://authors.library.caltech.edu/7694/) dataset in S3 in us-east-1. We will transfer this dataset to your account and region for use with SageMaker Training.\n", "\n", "The dataset consists of JPEG images organized into directories with each directory representing an object category." ] }, { "cell_type": "code", "execution_count": null, "id": "dc4d8796", "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "source = \"s3://sagemaker-sample-files/datasets/image/caltech-256/256_ObjectCategories\"\n", "destn = f\"s3://{sagemaker_session_bucket}/caltech-256\"\n", "local = \"caltech-256\"\n", "\n", "os.system(f\"aws s3 sync {source} {local}\")\n", "os.system(f\"aws s3 sync {local} {destn}\")" ] }, { "cell_type": "markdown", "id": "0b1b7e0b", "metadata": {}, "source": [ "## SageMaker Training Job\n", "\n", "To create a SageMaker training job, we use a `TensorFlow` estimator. Using the estimator, you can define which training script should SageMaker use through `entry_point`, which `instance_type` to use for training, which `hyperparameters` to pass, and so on.\n", "\n", "When a SageMaker training job starts, SageMaker takes care of starting and managing all the required machine learning instances, picks up the `TensorFlow` Deep Learning Container, uploads your training script, and downloads the data from `sagemaker_session_bucket` into the container at `/opt/ml/input/data`.\n", "\n", "In the following section, you learn how to set up two versions of the SageMaker `TensorFlow` estimator, a native one without the compiler and an optimized one with the compiler." ] }, { "cell_type": "markdown", "id": "f5cbe5e6", "metadata": {}, "source": [ "### Training with Native TensorFlow\n", "\n", "The `BATCH_SIZE` in the following code cell is the maximum batch that can fit into the memory of a `ml.p3.2xlarge` instance while giving the best training speed. If you change the model, instance type, and other parameters, you need to do some experiments to find the largest batch size that will fit into GPU memory.\n", "\n", "Set `EPOCHS` to the number of times you would like to loop over the training data." ] }, { "cell_type": "code", "execution_count": null, "id": "14c81f22", "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow import TensorFlow\n", "\n", "EPOCHS = 10\n", "BATCH_SIZE = 77\n", "LEARNING_RATE = 1e-3\n", "WEIGHT_DECAY = 1e-4\n", "\n", "kwargs = dict(\n", " source_dir=\"scripts\",\n", " entry_point=\"vit.py\",\n", " model_dir=False,\n", " instance_type=\"ml.p3.2xlarge\",\n", " instance_count=1,\n", " framework_version=\"2.11\",\n", " py_version=\"py39\",\n", " debugger_hook_config=None,\n", " disable_profiler=True,\n", " max_run=60 * 60, # 60 minutes\n", " role=role,\n", " metric_definitions=[\n", " {\"Name\": \"training_loss\", \"Regex\": \"loss: ([0-9.]*?) \"},\n", " {\"Name\": \"training_accuracy\", \"Regex\": \"accuracy: ([0-9.]*?) \"},\n", " {\"Name\": \"training_latency_per_epoch\", \"Regex\": \"- ([0-9.]*?)s/epoch\"},\n", " {\"Name\": \"training_avg_latency_per_step\", \"Regex\": \"- ([0-9.]*?)ms/step\"},\n", " ],\n", ")\n", "\n", "# Configure the training job\n", "native_estimator = TensorFlow(\n", " hyperparameters={\n", " \"EPOCHS\": EPOCHS,\n", " \"BATCH_SIZE\": BATCH_SIZE,\n", " \"LEARNING_RATE\": LEARNING_RATE,\n", " \"WEIGHT_DECAY\": WEIGHT_DECAY,\n", " },\n", " base_job_name=\"native-tf210-vit\",\n", " **kwargs,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "1e40a103", "metadata": {}, "outputs": [], "source": [ "# Start training with our uploaded datasets as input\n", "native_estimator.fit(inputs=destn, wait=False)\n", "\n", "# The name of the training job.\n", "native_estimator.latest_training_job.name" ] }, { "cell_type": "markdown", "id": "8c3af6c5", "metadata": {}, "source": [ "### Training with Optimized TensorFlow\n", "\n", "Compilation through Training Compiler changes the memory footprint of the model. Most commonly, this manifests as a reduction in memory utilization and a consequent increase in the largest batch size that can fit on the GPU. But in some cases the compiler intelligently promotes caching which leads to a decrease in the largest batch size that can fit on the GPU. Note that if you want to change the batch size, you must adjust the learning rate appropriately.\n", "\n", "**Note:** We recommend you to turn the SageMaker Debugger's profiling and debugging tools off when you use compilation to avoid additional overheads." ] }, { "cell_type": "code", "execution_count": null, "id": "720b14dd", "metadata": {}, "outputs": [], "source": [ "from sagemaker.tensorflow import TensorFlow, TrainingCompilerConfig\n", "\n", "OPTIMIZED_BATCH_SIZE = 56\n", "LEARNING_RATE = LEARNING_RATE / BATCH_SIZE * OPTIMIZED_BATCH_SIZE\n", "WEIGHT_DECAY = WEIGHT_DECAY * BATCH_SIZE / OPTIMIZED_BATCH_SIZE\n", "\n", "# Configure the training job\n", "optimized_estimator = TensorFlow(\n", " hyperparameters={\n", " \"EPOCHS\": EPOCHS,\n", " \"BATCH_SIZE\": OPTIMIZED_BATCH_SIZE,\n", " \"LEARNING_RATE\": LEARNING_RATE,\n", " \"WEIGHT_DECAY\": WEIGHT_DECAY,\n", " },\n", " compiler_config=TrainingCompilerConfig(),\n", " base_job_name=\"optimized-tf210-vit\",\n", " **kwargs,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "a8f79ba0", "metadata": {}, "outputs": [], "source": [ "# Start training with our uploaded datasets as input\n", "optimized_estimator.fit(inputs=destn, wait=False)\n", "\n", "# The name of the training job.\n", "optimized_estimator.latest_training_job.name" ] }, { "cell_type": "markdown", "id": "736601a4", "metadata": {}, "source": [ "### Wait for training jobs to complete\n", "\n", "The training jobs described above typically take around 40 mins to complete" ] }, { "cell_type": "markdown", "id": "c83587ce", "metadata": {}, "source": [ "**Note:** If the estimator object is no longer available due to a kernel break or refresh, you need to directly use the training job name and manually attach the training job to a new TensorFlow estimator. For example:\n", "\n", "```python\n", "native_estimator = TensorFlow.attach(\"\")\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "0f9c7d6b", "metadata": { "scrolled": true }, "outputs": [], "source": [ "waiter = sess.sagemaker_client.get_waiter(\"training_job_completed_or_stopped\")\n", "\n", "waiter.wait(TrainingJobName=native_estimator.latest_training_job.name)\n", "waiter.wait(TrainingJobName=optimized_estimator.latest_training_job.name)" ] }, { "cell_type": "code", "execution_count": null, "id": "3dbf0c02", "metadata": {}, "outputs": [], "source": [ "native_estimator = TensorFlow.attach(native_estimator.latest_training_job.name)\n", "optimized_estimator = TensorFlow.attach(optimized_estimator.latest_training_job.name)" ] }, { "cell_type": "markdown", "id": "e36f4066", "metadata": {}, "source": [ "## Analysis\n", "\n", "Here we view the training metrics from the training jobs as a Pandas dataframe\n", "\n", "#### Native TensorFlow" ] }, { "cell_type": "code", "execution_count": null, "id": "7d1fc9bc", "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "# Extract training metrics from the estimator\n", "native_metrics = native_estimator.training_job_analytics.dataframe()\n", "\n", "# Restructure table for viewing\n", "for metric in native_metrics[\"metric_name\"].unique():\n", " native_metrics[metric] = native_metrics[native_metrics[\"metric_name\"] == metric][\"value\"]\n", "native_metrics = native_metrics.drop(columns=[\"metric_name\", \"value\"])\n", "native_metrics = native_metrics.groupby(\"timestamp\").max()\n", "native_metrics[\"epochs\"] = range(1, 11)\n", "native_metrics = native_metrics.set_index(\"epochs\")\n", "\n", "native_metrics" ] }, { "cell_type": "markdown", "id": "38a1ce25", "metadata": {}, "source": [ "#### Optimized TensorFlow" ] }, { "cell_type": "code", "execution_count": null, "id": "0fe8cffe", "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "# Extract training metrics from the estimator\n", "optimized_metrics = optimized_estimator.training_job_analytics.dataframe()\n", "\n", "# Restructure table for viewing\n", "for metric in optimized_metrics[\"metric_name\"].unique():\n", " optimized_metrics[metric] = optimized_metrics[optimized_metrics[\"metric_name\"] == metric][\n", " \"value\"\n", " ]\n", "optimized_metrics = optimized_metrics.drop(columns=[\"metric_name\", \"value\"])\n", "optimized_metrics = optimized_metrics.groupby(\"timestamp\").max()\n", "optimized_metrics[\"epochs\"] = range(1, 11)\n", "optimized_metrics = optimized_metrics.set_index(\"epochs\")\n", "\n", "optimized_metrics" ] }, { "cell_type": "markdown", "id": "8bc53257", "metadata": {}, "source": [ "### Savings from Training Compiler\n", "\n", "Let us calculate the actual savings on the training jobs above and the potential for savings for a longer training job.\n", "\n", "#### Actual Savings\n", "\n", "To get the actual savings, we use the describe_training_job API to get the billable seconds for each training job." ] }, { "cell_type": "code", "execution_count": null, "id": "d04f1f57", "metadata": {}, "outputs": [], "source": [ "# Billable seconds for the Native TensorFlow Training job\n", "\n", "details = sess.describe_training_job(job_name=native_estimator.latest_training_job.name)\n", "native_secs = details[\"BillableTimeInSeconds\"]\n", "\n", "native_secs" ] }, { "cell_type": "code", "execution_count": null, "id": "6fe00795", "metadata": {}, "outputs": [], "source": [ "# Billable seconds for the Optimized TensorFlow Training job\n", "\n", "details = sess.describe_training_job(job_name=optimized_estimator.latest_training_job.name)\n", "optimized_secs = details[\"BillableTimeInSeconds\"]\n", "\n", "optimized_secs" ] }, { "cell_type": "code", "execution_count": null, "id": "55792cff", "metadata": {}, "outputs": [], "source": [ "# Calculating percentage Savings from Training Compiler\n", "\n", "percentage = (native_secs - optimized_secs) * 100 / native_secs\n", "\n", "f\"Training Compiler yielded {percentage:.2f}% savings in training cost.\"" ] }, { "cell_type": "markdown", "id": "d790d598", "metadata": {}, "source": [ "#### Potential savings\n", "\n", "The Training Compiler works by compiling the model graph once per input shape and reusing the cached graph for subsequent steps. As a result the first few steps of training incur an increased latency owing to compilation which we refer to as the compilation overhead. This overhead is amortized over time thanks to the subsequent steps being much faster. We will demonstrate this below." ] }, { "cell_type": "code", "execution_count": null, "id": "3e7f171f", "metadata": { "scrolled": true }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "plt.plot(native_metrics[\"training_latency_per_epoch\"], label=\"native_epoch_latency\")\n", "plt.plot(optimized_metrics[\"training_latency_per_epoch\"], label=\"optimized_epoch_latency\")\n", "plt.legend()" ] }, { "cell_type": "markdown", "id": "51aec9c2", "metadata": {}, "source": [ "We calculate the potential savings below from the difference in steady state epoch latency between native TensorFlow and optimized TensorFlow" ] }, { "cell_type": "code", "execution_count": null, "id": "da8afc01", "metadata": {}, "outputs": [], "source": [ "native_steady_state_latency = native_metrics[\"training_latency_per_epoch\"].iloc[-1]\n", "\n", "native_steady_state_latency" ] }, { "cell_type": "code", "execution_count": null, "id": "e761f62a", "metadata": {}, "outputs": [], "source": [ "optimized_steady_state_latency = optimized_metrics[\"training_latency_per_epoch\"].iloc[-1]\n", "\n", "optimized_steady_state_latency" ] }, { "cell_type": "code", "execution_count": null, "id": "483d3034", "metadata": {}, "outputs": [], "source": [ "# Calculating potential percentage Savings from Training Compiler\n", "\n", "percentage = (\n", " (native_steady_state_latency - optimized_steady_state_latency)\n", " * 100\n", " / native_steady_state_latency\n", ")\n", "\n", "f\"Training Compiler can potentially yield {percentage:.2f}% savings in training cost for a longer training job.\"" ] }, { "cell_type": "markdown", "id": "04315c17", "metadata": {}, "source": [ "### Convergence of Training\n", "\n", "Training Compiler brings down total training time by intelligently choosing between memory utilization and core utilization in the GPU. This does not have any effect on the model arithmetic and consequently convergence of the model.\n", "\n", "However, since we are working with a new batch size, hyperparameters like - learning rate, learning rate schedule and weight decay might have to be scaled and tuned for the new batch size" ] }, { "cell_type": "code", "execution_count": null, "id": "63b8ab58", "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "plt.plot(native_metrics[\"training_loss\"], label=\"native_loss\")\n", "plt.plot(optimized_metrics[\"training_loss\"], label=\"optimized_loss\")\n", "plt.legend()" ] }, { "cell_type": "markdown", "id": "cab43c11", "metadata": {}, "source": [ "We can see that the model's convergence behavior is similar with and without Training Compiler. Here we have tuned the batch size specific hyperparameters - Learning Rate and Weight Decay using a linear scaling.\n", "\n", "Learning rate is directly proportional to the batch size:\n", "```python\n", "new_learning_rate = old_learning_rate * new_batch_size/old_batch_size\n", "```\n", "\n", "Weight decay is inversely proportional to the batch size:\n", "```python\n", "new_weight_decay = old_weight_decay * old_batch_size/new_batch_size\n", "```\n", "\n", "Better results can be achieved with further tuning. Check out [Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) for tuning." ] }, { "cell_type": "markdown", "id": "064131c4", "metadata": {}, "source": [ "## Clean up\n", "\n", "Stop all training jobs launched if the jobs are still running." ] }, { "cell_type": "code", "execution_count": null, "id": "2164e93a", "metadata": {}, "outputs": [], "source": [ "def stop_training_job(name):\n", " status = sess.describe_training_job(name)[\"TrainingJobStatus\"]\n", " if status == \"InProgress\":\n", " sm.stop_training_job(TrainingJobName=name)\n", "\n", "\n", "stop_training_job(native_estimator.latest_training_job.name)\n", "stop_training_job(optimized_estimator.latest_training_job.name)" ] }, { "cell_type": "markdown", "id": "86fcb9a6", "metadata": {}, "source": [ "Also, to find instructions on cleaning up resources, see [Clean Up](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html) in the *Amazon SageMaker Developer Guide*." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/sagemaker-training-compiler|tensorflow|single_gpu_single_node|vision-transformer.ipynb)\n" ] } ], "metadata": { "kernelspec": { "display_name": "conda_tensorflow2_p38", "language": "python", "name": "conda_tensorflow2_p38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" } }, "nbformat": 4, "nbformat_minor": 5 }