{ "cells": [ { "cell_type": "markdown", "id": "284f25c0", "metadata": {}, "source": [ "## BERT Pre training using Trainum\n", "\n", "This tutorial explains how to run BERT pretraining using Amazon SageMaker and AWS trainium.This example demonstrates the steps required to perform multinode/multi-accelerator training using AWS Trainium and SageMaker." ] }, { "cell_type": "markdown", "id": "38361b2b", "metadata": {}, "source": [ "We will be doing the below activites as part of this example\n", "\n", "1. Download the Wiki data needed for training and upload it to S3.\n", "2. Run the model compilation and save the result in S3. This is recommended but not mandatory for long running jobs which might need multiple restarts.\n", "3. Efficiently Train the model on multi node /multi accelerator setup." ] }, { "cell_type": "markdown", "id": "d5609609", "metadata": {}, "source": [ "## 1. Download Training Data\n", "\n", "For this example we will use data that is tokenized and sharded in prior. We can always use our own data if needed. We will use wiki corpus data that is tokenized and sharded using sequence length 128. We can also use tokenized wiki data of sequence length 512.\n", "\n", "To understand how the data is created please refer to this link -> \n", "\n", "https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#getting-the-data" ] }, { "cell_type": "code", "execution_count": null, "id": "8075185e", "metadata": {}, "outputs": [], "source": [ "# Lets begin by upgrading the SageMaker SDK to the latest version\n", "!pip install --upgrade sagemaker" ] }, { "cell_type": "code", "execution_count": null, "id": "5ae6df4c", "metadata": {}, "outputs": [], "source": [ "# inorder to use trainium we need the SDK version should be minimum 2.116.0\n", "import sagemaker\n", "\n", "assert sagemaker.__version__ >= \"2.116.0\"" ] }, { "cell_type": "markdown", "id": "3fc8a50b", "metadata": {}, "source": [ "#### Download and uncompress the training data.\n", "\n", "We will download pre created tokenized data and use it. The data is about 48 GB and will take sometime to finish downlaod." ] }, { "cell_type": "code", "execution_count": null, "id": "ed19eeae", "metadata": {}, "outputs": [], "source": [ "! aws s3 cp s3://neuron-s3/training_datasets/bert_pretrain_wikicorpus_tokenized_hdf5/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128.tar . --no-sign-request\n", "! 
{ "cell_type": "markdown", "id": "9cb94d52", "metadata": {}, "source": [ "### Initialize SageMaker and upload the training data to our S3 bucket" ] },
{ "cell_type": "code", "execution_count": null, "id": "8febb689", "metadata": {}, "outputs": [], "source": [ "%%time\n", "import os\n", "\n", "import boto3\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "from sagemaker.huggingface import HuggingFace\n", "\n", "role = (\n", "    get_execution_role()\n", ")  # provide a pre-existing role ARN as an alternative to creating a new role\n", "print(f\"SageMaker Execution Role: {role}\")\n", "\n", "client = boto3.client(\"sts\")\n", "account = client.get_caller_identity()[\"Account\"]\n", "print(f\"AWS account: {account}\")\n", "\n", "session = boto3.session.Session()\n", "region = session.region_name\n", "print(f\"AWS region: {region}\")\n", "\n", "sm_boto_client = boto3.client(\"sagemaker\")\n", "sagemaker_session = sagemaker.session.Session(boto_session=session)" ] },
{ "cell_type": "code", "execution_count": null, "id": "6c61101f", "metadata": {}, "outputs": [], "source": [ "sagemaker_session_bucket = (\n", "    None  # Provide a bucket if you don't want to use the default bucket\n", ")\n", "\n", "if sagemaker_session_bucket is None and sagemaker_session is not None:\n", "    # set to the default bucket if a bucket name is not given\n", "    sagemaker_session_bucket = sagemaker_session.default_bucket()" ] },
{ "cell_type": "markdown", "id": "0035e66f", "metadata": {}, "source": [ "#### Upload the data to our S3 bucket" ] },
{ "cell_type": "code", "execution_count": null, "id": "c0c48433", "metadata": {}, "outputs": [], "source": [ "# Upload data to S3\n", "\n", "train_path_128 = sagemaker_session.upload_data(\n", "    \"bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128\",\n", "    sagemaker_session_bucket,\n", "    \"train/wiki128\",\n", ")" ] },
{ "cell_type": "code", "execution_count": null, "id": "123ce7e5", "metadata": {}, "outputs": [], "source": [ "print(f\"The training data is available at {train_path_128}\")" ] },
{ "cell_type": "markdown", "id": "b531117d", "metadata": {}, "source": [ "## 2. Compile the model using Neuron SDK\n", "\n", "The compilation job runs the training loop for a small number of steps and creates a Neuron cache, which will be reused by the full training job.\n", "\n", "PyTorch Neuron evaluates operations lazily during execution of the training loops, which means it builds a symbolic graph in the background, and the graph is executed on hardware only when a tensor is printed, transferred to CPU, or xm.mark_step() is encountered (xm.mark_step() is implicitly called by pl.MpDeviceLoader/pl.ParallelLoader).\n", "\n", "During execution of the training loops, PyTorch Neuron can build multiple graphs depending on the number of conditional paths taken. For BERT-Large pretraining, PyTorch Neuron builds multiple unique graphs that should be compiled before running on the NeuronCores. PyTorch Neuron will compile those graphs only if they are not in the XLA in-memory cache or the persistent cache. To reduce the compilation time of these graphs, you can pre-compile them using the utility neuron_parallel_compile (provided by the libneuronxla package, a transitive dependency of torch-neuronx)."
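, "\n", "\n", "To make the lazy-execution behavior concrete, here is a minimal, illustrative sketch of an XLA training loop on Trainium. It is not the tutorial's compile.sh or bert_pretrain.py (those live in the `code` directory); it only shows where the graph is built and where it is compiled and executed, assuming a torch-neuronx environment with torch_xla available.\n", "\n", "```python\n", "# Minimal sketch of lazy graph building/execution with torch-neuronx (illustrative only)\n", "import torch\n", "import torch_xla.core.xla_model as xm\n", "\n", "device = xm.xla_device()  # a NeuronCore exposed as an XLA device\n", "model = torch.nn.Linear(128, 2).to(device)  # stand-in for the BERT model\n", "optimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n", "\n", "for _ in range(3):\n", "    x = torch.randn(16, 128).to(device)\n", "    y = torch.randint(0, 2, (16,)).to(device)\n", "    loss = torch.nn.functional.cross_entropy(model(x), y)  # ops are only recorded into a graph here\n", "    loss.backward()\n", "    optimizer.step()\n", "    xm.mark_step()  # the graph is cut here, compiled (or loaded from the cache), then executed\n", "    print(loss.item())  # printing/moving to CPU also forces execution\n", "```\n", "\n", "The compile.sh entry point used by the estimator below lives in the `code` directory of this example and is expected to wrap the training launch with neuron_parallel_compile so that these graphs are compiled ahead of time and written to the persistent cache."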
] }, { "cell_type": "code", "execution_count": null, "id": "890280a3", "metadata": {}, "outputs": [], "source": [ "instance_type = \"ml.trn1.32xlarge\"\n", "instance_count = 1" ] }, { "cell_type": "code", "execution_count": null, "id": "b30da912", "metadata": {}, "outputs": [], "source": [ "hyperparameters = {\n", " \"batch_size\": 16,\n", " \"grad_accum_usteps\": 32,\n", " \"data_dir\": \"/opt/ml/input/data/training/\", # this is the path where sagemaker will copy the data into from s3\n", " \"output_dir\": \"/opt/ml/model\",\n", "}" ] }, { "cell_type": "code", "execution_count": null, "id": "972c7eea", "metadata": {}, "outputs": [], "source": [ "checkpoint_s3 = \"s3://\" + sagemaker_session_bucket + \"/trainium/bert/cache\"" ] }, { "cell_type": "code", "execution_count": null, "id": "47a75439", "metadata": {}, "outputs": [], "source": [ "from sagemaker.pytorch import PyTorch\n", "\n", "smp_estimator = PyTorch(\n", " entry_point=\"compile.sh\",\n", " source_dir=\"code\",\n", " role=role,\n", " instance_type=instance_type,\n", " volume_size=1024,\n", " instance_count=instance_count,\n", " sagemaker_session=sagemaker_session,\n", " framework_version=\"1.11.0\",\n", " py_version=\"py38\",\n", " hyperparameters=hyperparameters,\n", " checkpoint_local_path=\"/opt/ml/checkpoints\",\n", " checkpoint_s3_uri=checkpoint_s3,\n", " debugger_hook_config=False,\n", " disable_profiler=True,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "5d3a2070", "metadata": {}, "outputs": [], "source": [ "smp_estimator.fit(inputs={\"training\": train_path_128}, logs=True)" ] }, { "cell_type": "markdown", "id": "e3ea5100", "metadata": {}, "source": [ "This pre step performs a fast trial run of the training script to build graphs and then do parallel compilations on those graphs using multiple processes of Neuron Compiler before populating the on-disk persistent cache with compiled graphs. This helps make the actual training run faster because the compiled graphs will loaded from the persistent cache." ] }, { "cell_type": "markdown", "id": "e434c306", "metadata": {}, "source": [ "## 3. Train the model" ] }, { "cell_type": "markdown", "id": "e47c7d39", "metadata": {}, "source": [ "After running the pre-compilation step, continue with the actual pretraining by running the following set of commands to launch 32 data parallel distributed training workers on trn1.32xlarge. SageMaker pytorch Estimator provides an option to support torchrun which makes sure to run a separate process for each neuron core available in the training cluster.\n", "\n", "We will pass the compiled model as an input channel. This will be used during the training process rather than recompiling the model. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "2de6f8b8", "metadata": {}, "outputs": [], "source": [ "hyperparameters = {\n", " \"batch_size\": 16,\n", " \"grad_accum_usteps\": 32,\n", " \"data_dir\": \"/opt/ml/input/data/training/\", # this is the path where sagemaker will copy the data into from s3\n", " \"output_dir\": \"/opt/ml/model\",\n", " \"cache_dir\": \"/opt/ml/input/data/cache/\", # the compiled model will be copied to this path.\n", " \"max_steps\": 200,\n", "}" ] }, { "cell_type": "code", "execution_count": null, "id": "72326b8b", "metadata": {}, "outputs": [], "source": [ "from sagemaker.pytorch import PyTorch\n", "\n", "smp_estimator = PyTorch(\n", " entry_point=\"bert_pretrain.py\",\n", " source_dir=\"code\",\n", " role=role,\n", " instance_type=instance_type,\n", " volume_size=512,\n", " instance_count=instance_count,\n", " sagemaker_session=sagemaker_session,\n", " framework_version=\"1.11.0\",\n", " py_version=\"py38\",\n", " hyperparameters=hyperparameters,\n", " debugger_hook_config=False,\n", " disable_profiler=True,\n", " distribution={\"torch_distributed\": {\"enabled\": True}},\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "39994978", "metadata": {}, "outputs": [], "source": [ "smp_estimator.fit(\n", " inputs={\"training\": train_path_128, \"cache\": checkpoint_s3}, logs=True\n", ")" ] }, { "cell_type": "markdown", "id": "486fc0fa", "metadata": {}, "source": [ "Congrats!!! we successfully trained a BERT model using AWS Trainium and Amazon SageMaker." ] }, { "cell_type": "code", "execution_count": null, "id": "3bdae61a", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.9 (default, Apr 13 2022, 08:48:06) \n[Clang 13.1.6 (clang-1316.0.21.2.5)]" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 5 }