{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Regression with Amazon SageMaker XGBoost algorithm\n", "_**Distributed training for regression with Amazon SageMaker XGBoost script mode**_\n", "\n", "---\n", "\n", "## Contents\n", "1. [Introduction](#Introduction)\n", "2. [Setup](#Setup)\n", " 1. [Fetching the dataset](#Fetching-the-dataset)\n", " 2. [Data Ingestion](#Data-ingestion)\n", "3. [Training the XGBoost model](#Training-the-XGBoost-model)\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "This notebook demonstrates the use of Amazon SageMaker XGBoost to train and host a regression model. [XGBoost (eXtreme Gradient Boosting)](https://xgboost.readthedocs.io) is a popular and efficient machine learning algorithm used for regression and classification tasks on tabular datasets. It implements a technique know as gradient boosting on trees, and performs remarkably well in machine learning competitions, and gets a lot of attention from customers. \n", "\n", "We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In this libsvm converted version, the nominal feature (Male/Female/Infant) has been converted into a real valued feature as required by XGBoost. Age of abalone is to be predicted from eight physical measurements. \n", "\n", "---\n", "## Setup\n", "\n", "\n", "This notebook was created and tested on an ml.t3.medium Studio notebook. This notebook was run with the Python 3 (Data Science) kernel.\n", "\n", "Let's start by specifying:\n", "1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.\n", "1. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# ensure sagemaker version >= 2.00.0\n", "!pip show sagemaker" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "isConfigCell": true, "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "import os\n", "import boto3\n", "import re\n", "import sagemaker\n", "\n", "# Get a SageMaker-compatible role used by this Notebook Instance.\n", "role = sagemaker.get_execution_role()\n", "region = boto3.Session().region_name\n", "\n", "### update below values appropriately ###\n", "bucket = sagemaker.Session().default_bucket()\n", "prefix = \"sagemaker/DEMO-xgboost-dist-script\"\n", "####\n", "\n", "print(region)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# put a string here so you know which jobs are yours, no puncutation or spaces\n", "your_user_string = 'your-name'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Fetching the dataset\n", "\n", "Following methods split the data into train/test/validation datasets and upload files to S3." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "import io\n", "import boto3\n", "import random\n", "\n", "random.seed(42)\n", "\n", "\n", "def data_split(\n", " FILE_DATA,\n", " DATA_DIR,\n", " FILE_TRAIN_BASE,\n", " FILE_TRAIN_1,\n", " FILE_VALIDATION,\n", " FILE_TEST,\n", " PERCENT_TRAIN_0,\n", " PERCENT_TRAIN_1,\n", " PERCENT_VALIDATION,\n", " PERCENT_TEST,\n", "):\n", " data = [l for l in open(FILE_DATA, \"r\")]\n", " train_file_0 = open(DATA_DIR + \"/\" + FILE_TRAIN_0, \"w\")\n", " train_file_1 = open(DATA_DIR + \"/\" + FILE_TRAIN_1, \"w\")\n", " valid_file = open(DATA_DIR + \"/\" + FILE_VALIDATION, \"w\")\n", " tests_file = open(DATA_DIR + \"/\" + FILE_TEST, \"w\")\n", "\n", " num_of_data = len(data)\n", " num_train_0 = int((PERCENT_TRAIN_0 / 100.0) * num_of_data)\n", " num_train_1 = int((PERCENT_TRAIN_1 / 100.0) * num_of_data)\n", " num_valid = int((PERCENT_VALIDATION / 100.0) * num_of_data)\n", " num_tests = int((PERCENT_TEST / 100.0) * num_of_data)\n", "\n", " data_fractions = [num_train_0, num_train_1, num_valid, num_tests]\n", " split_data = [[], [], [], []]\n", "\n", " rand_data_ind = 0\n", "\n", " for split_ind, fraction in enumerate(data_fractions):\n", " for i in range(fraction):\n", " rand_data_ind = random.randint(0, len(data) - 1)\n", " split_data[split_ind].append(data[rand_data_ind])\n", " data.pop(rand_data_ind)\n", "\n", " for l in split_data[0]:\n", " train_file_0.write(l)\n", "\n", " for l in split_data[1]:\n", " train_file_1.write(l)\n", "\n", " for l in split_data[2]:\n", " valid_file.write(l)\n", "\n", " for l in split_data[3]:\n", " tests_file.write(l)\n", "\n", " train_file_0.close()\n", " train_file_1.close()\n", " valid_file.close()\n", " tests_file.close()\n", "\n", "\n", "def write_to_s3(fobj, bucket, key):\n", " return (\n", " boto3.Session(region_name=region)\n", " .resource(\"s3\")\n", " .Bucket(bucket)\n", " .Object(key)\n", " .upload_fileobj(fobj)\n", " )\n", "\n", "\n", "def upload_to_s3(bucket, channel, filename):\n", " fobj = open(filename, \"rb\")\n", " key = prefix + \"/\" + channel\n", " url = \"s3://{}/{}/{}\".format(bucket, key, filename)\n", " print(\"Writing to {}\".format(url))\n", " write_to_s3(fobj, bucket, key)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data ingestion\n", "\n", "Next, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "import boto3\n", "\n", "# Load the dataset\n", "FILE_DATA = \"abalone\"\n", "s3 = boto3.client(\"s3\")\n", "s3.download_file(\n", " f\"sagemaker-sample-files\", \"datasets/tabular/uci_abalone/abalone.libsvm\", FILE_DATA\n", ")\n", "\n", "# split the downloaded data into train/test/validation files\n", "FILE_TRAIN_0 = \"abalone.train_0\"\n", "FILE_TRAIN_1 = \"abalone.train_1\"\n", "FILE_VALIDATION = \"abalone.validation\"\n", "FILE_TEST = \"abalone.test\"\n", "PERCENT_TRAIN_0 = 35\n", "PERCENT_TRAIN_1 = 35\n", "PERCENT_VALIDATION = 15\n", "PERCENT_TEST = 15\n", "\n", "DATA_DIR = \"data\"\n", "\n", "if not os.path.exists(DATA_DIR):\n", " os.mkdir(DATA_DIR)\n", "\n", "data_split(\n", " FILE_DATA,\n", " DATA_DIR,\n", " FILE_TRAIN_0,\n", " FILE_TRAIN_1,\n", " FILE_VALIDATION,\n", " FILE_TEST,\n", " PERCENT_TRAIN_0,\n", " PERCENT_TRAIN_1,\n", " PERCENT_VALIDATION,\n", " PERCENT_TEST,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# upload the files to the S3 bucket\n", "upload_to_s3(bucket, \"train/train_0.libsvm\", DATA_DIR + \"/\" + FILE_TRAIN_0)\n", "upload_to_s3(bucket, \"train/train_1.libsvm\", DATA_DIR + \"/\" + FILE_TRAIN_1)\n", "upload_to_s3(bucket, \"validation/validation.libsvm\", DATA_DIR + \"/\" + FILE_VALIDATION)\n", "upload_to_s3(bucket, \"test/test.libsvm\", DATA_DIR + \"/\" + FILE_TEST)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a XGBoost script to train with \n", "\n", "SageMaker can now run an XGboost script using the XGBoost estimator. When executed on SageMaker a number of helpful environment variables are available to access properties of the training environment, such as:\n", "\n", "- `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to. Any artifacts saved in this folder are uploaded to S3 for model hosting after the training job completes.\n", "- `SM_OUTPUT_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts.\n", "\n", "Supposing two input channels, 'train' and 'validation', were used in the call to the XGBoost estimator's fit() method, the following environment variables will be set, following the format `SM_CHANNEL_[channel_name]`:\n", "\n", "`SM_CHANNEL_TRAIN`: A string representing the path to the directory containing data in the 'train' channel\n", "`SM_CHANNEL_VALIDATION`: Same as above, but for the 'validation' channel.\n", "\n", "A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to model_dir so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an argparse.ArgumentParser instance. 
For example, the script that we will run in this notebook is provided as the accompanying file (`abalone.py`) and also shown below:\n", "\n", "```python\n", "\n", "import argparse\n", "import json\n", "import logging\n", "import os\n", "import pandas as pd\n", "import pickle as pkl\n", "\n", "from sagemaker_containers import entry_point\n", "from sagemaker_xgboost_container.data_utils import get_dmatrix\n", "from sagemaker_xgboost_container import distributed\n", "\n", "import xgboost as xgb\n", "\n", "\n", "def _xgb_train(params, dtrain, evals, num_boost_round, model_dir, is_master):\n", " \"\"\"Run xgb train on arguments given with rabit initialized.\n", "\n", " This is our rabit execution function.\n", "\n", " :param args_dict: Argument dictionary used to run xgb.train().\n", " :param is_master: True if current node is master host in distributed training, or is running single node training job. Note that rabit_run will include this argument.\n", " \"\"\"\n", " booster = xgb.train(params=params, dtrain=dtrain, evals=evals, num_boost_round=num_boost_round)\n", "\n", " if is_master:\n", " model_location = model_dir + '/xgboost-model'\n", " pkl.dump(booster, open(model_location, 'wb'))\n", " logging.info(\"Stored trained model at {}\".format(model_location))\n", "\n", "\n", "if __name__ == '__main__':\n", " parser = argparse.ArgumentParser()\n", "\n", " # Hyperparameters are described here. In this simple example we are just including one hyperparameter.\n", " parser.add_argument('--max_depth', type=int,)\n", " parser.add_argument('--eta', type=float)\n", " parser.add_argument('--gamma', type=int)\n", " parser.add_argument('--min_child_weight', type=int)\n", " parser.add_argument('--subsample', type=float)\n", " parser.add_argument('--verbose', type=int)\n", " parser.add_argument('--objective', type=str)\n", " parser.add_argument('--num_round', type=int)\n", "\n", " # Sagemaker specific arguments. 
Defaults are set in the environment variables.\n", " parser.add_argument('--output_data_dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])\n", " parser.add_argument('--model_dir', type=str, default=os.environ['SM_MODEL_DIR'])\n", " parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])\n", " parser.add_argument('--validation', type=str, default=os.environ['SM_CHANNEL_VALIDATION'])\n", " parser.add_argument('--sm_hosts', type=str, default=os.environ['SM_HOSTS'])\n", " parser.add_argument('--sm_current_host', type=str, default=os.environ['SM_CURRENT_HOST'])\n", "\n", " args, _ = parser.parse_known_args()\n", "\n", " # Get SageMaker host information from runtime environment variables\n", " sm_hosts = json.loads(os.environ['SM_HOSTS'])\n", " sm_current_host = args.sm_current_host\n", "\n", " dtrain = get_dmatrix(args.train, 'libsvm')\n", " dval = get_dmatrix(args.validation, 'libsvm')\n", " watchlist = [(dtrain, 'train'), (dval, 'validation')] if dval is not None else [(dtrain, 'train')]\n", "\n", " train_hp = {\n", " 'max_depth': args.max_depth,\n", " 'eta': args.eta,\n", " 'gamma': args.gamma,\n", " 'min_child_weight': args.min_child_weight,\n", " 'subsample': args.subsample,\n", " 'verbose': args.verbose,\n", " 'objective': args.objective}\n", "\n", " xgb_train_args = dict(\n", " params=train_hp,\n", " dtrain=dtrain,\n", " evals=watchlist,\n", " num_boost_round=args.num_round,\n", " model_dir=args.model_dir)\n", "\n", " if len(sm_hosts) > 1:\n", " # Wait until all hosts are able to find each other\n", " entry_point._wait_hostname_resolution()\n", "\n", " # Execute training function after initializing rabit.\n", " distributed.rabit_run(\n", " exec_fun=_xgb_train,\n", " args=xgb_train_args,\n", " include_in_training=(dtrain is not None),\n", " hosts=sm_hosts,\n", " current_host=sm_current_host,\n", " update_rabit_args=True\n", " )\n", " else:\n", " # If single node training, call training method directly.\n", " if dtrain:\n", " xgb_train_args['is_master'] = True\n", " _xgb_train(**xgb_train_args)\n", " else:\n", " raise ValueError(\"Training channel must have data to train model.\")\n", "\n", "\n", "def model_fn(model_dir):\n", " \"\"\"Deserialized and return fitted model.\n", "\n", " Note that this should have the same name as the serialized model in the _xgb_train method\n", " \"\"\"\n", " model_file = 'xgboost-model'\n", " booster = pkl.load(open(os.path.join(model_dir, model_file), 'rb'))\n", " return booster\n", "```\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because the container imports your training script, always put your training code in a main guard `(if __name__=='__main__':)` so that the container does not inadvertently run your training code at the wrong point in execution.\n", "\n", "For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training the XGBoost model\n", "\n", "After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between few minutes.\n", "\n", "To run our training script on SageMaker, we construct a sagemaker.xgboost.estimator.XGBoost estimator, which accepts several constructor arguments:\n", "\n", "* __entry_point__: The path to the Python script SageMaker runs for training and prediction.\n", "* __role__: Role ARN\n", "* __instance_type__ *(optional)*: The type of SageMaker instances for training. 
__Note__: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.\n", "* __sagemaker_session__ *(optional)*: The session used to train on Sagemaker.\n", "* __hyperparameters__ *(optional)*: A dictionary passed to the train function as hyperparameters." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "hyperparams = {\n", " \"max_depth\": \"5\",\n", " \"eta\": \"0.2\",\n", " \"gamma\": \"4\",\n", " \"min_child_weight\": \"6\",\n", " \"subsample\": \"0.7\",\n", " \"verbose\": \"1\",\n", " \"objective\": \"reg:linear\",\n", " \"num_round\": \"50\",\n", "}\n", "\n", "instance_type = \"ml.m5.2xlarge\"\n", "output_path = \"s3://{}/{}/{}/output\".format(bucket, prefix, \"abalone-dist-xgb\")\n", "content_type = \"libsvm\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Open Source distributed script mode\n", "from sagemaker.session import TrainingInput, Session\n", "from sagemaker.xgboost.estimator import XGBoost\n", "\n", "boto_session = boto3.Session(region_name=region)\n", "session = Session(boto_session=boto_session)\n", "script_path = \"abalone.py\"\n", "\n", "xgb_script_mode_estimator = XGBoost(\n", " entry_point=script_path,\n", " base_job_name=\"{}-xgboost\".format(your_user_string),\n", " framework_version=\"0.90-1\", # Note: framework_version is mandatory\n", " hyperparameters=hyperparams,\n", " role=role,\n", " instance_count=2,\n", " instance_type=instance_type,\n", " output_path=output_path,\n", ")\n", "\n", "train_input = TrainingInput(\n", " \"s3://{}/{}/{}/\".format(bucket, prefix, \"train\"), content_type=content_type\n", ")\n", "validation_input = TrainingInput(\n", " \"s3://{}/{}/{}/\".format(bucket, prefix, \"validation\"), content_type=content_type\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Train XGBoost Estimator on abalone data \n", "\n", "\n", "Training is as simple as calling `fit` on the Estimator. This will start a SageMaker Training job that will download the data, invoke the entry point code (in the provided script file), and save any model artifacts that the script creates." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "xgb_script_mode_estimator.fit({\"train\": train_input, \"validation\": validation_input})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }