{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Develop, Train, Optimize and Deploy Scikit-Learn Random Forest\n", "\n", "> *This notebook should work well with the `Python 3 (Data Science 2.0)` kernel on SageMaker Studio, or `conda_python3` on classic SageMaker Notebook Instances*\n", "\n", "In this notebook we show how to use Amazon SageMaker to develop, train, tune and deploy a Random Forest model based using the popular ML framework [Scikit-Learn](https://scikit-learn.org/stable/index.html).\n", "\n", "The example uses the *California Housing dataset* (provided by Scikit-Learn) - more details of which can be found [here](https://inria.github.io/scikit-learn-mooc/python_scripts/datasets_california_housing.html).\n", "\n", "To understand the code, you might also find it useful to refer to:\n", "\n", "* The guide on [Using Scikit-Learn with the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/using_sklearn.html)\n", "* The API doc for [Scikit-Learn classes in the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/sagemaker.sklearn.html)\n", "* The [SageMaker reference for Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#client) (The general AWS SDK for Python, including low-level bindings for SageMaker as well as many other AWS services)\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We start by importing dependencies and seting up basic configurations like the current AWS Region and target S3 bucket:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Python Built-Ins:\n", "import os\n", "\n", "# External Dependencies:\n", "import boto3 # General-purpose AWS SDK for Python\n", "import numpy as np # Tools for working with numeric arrays\n", "import pandas as pd # Tools for warking with data tables (dataframes)\n", "import sagemaker # High-level SDK for Amazon SageMaker in particular\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.datasets import fetch_california_housing\n", "\n", "sm_boto3 = boto3.client(\"sagemaker\")\n", "sess = sagemaker.Session()\n", "region = sess.boto_session.region_name\n", "bucket = sess.default_bucket() # this could also be a hard-coded bucket name\n", "\n", "print(f\"Using bucket {bucket}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prepare data\n", "\n", "Next, we'll load our raw example dataset from SKLearn and prepare it into the format the training job will use: A separate CSV for training and for validation/test." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = fetch_california_housing()\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(\n", " data.data, data.target, test_size=0.25, random_state=42\n", ")\n", "\n", "trainX = pd.DataFrame(X_train, columns=data.feature_names)\n", "trainX[\"target\"] = y_train\n", "\n", "testX = pd.DataFrame(X_test, columns=data.feature_names)\n", "testX[\"target\"] = y_test\n", "\n", "trainX.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create directories\n", "os.makedirs(\"data\", exist_ok=True)\n", "os.makedirs(\"src\", exist_ok=True)\n", "os.makedirs(\"model\", exist_ok=True)\n", "\n", "# save data as csv\n", "trainX.to_csv(\"data/california_train.csv\")\n", "testX.to_csv(\"data/california_test.csv\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a training script\n", "\n", "The SageMaker Scikit-Learn [Framework Container](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-docker-containers-scikit-learn-spark.html) provides the basic runtime, and we as users specify the actual training steps to run as a script file (or even a folder of several, perhaps including a `requirements.txt`).\n", "\n", "The below code initializes a [`src/main.py`](src/main.py) file from here in the notebook. You can also create Python scripts and other files from the launcher or the File menu.\n", "\n", "In this example, the same file will be used at training time (run as as script), and at inference time (imported as a [module](https://docs.python.org/3/tutorial/modules.html)) - So below we:\n", "\n", "- Define some specific **inference functions** to override default behavior (e.g. `model_fn()`), and\n", "- Enclose the **training entry point** in an `if __name__ == '__main__'` [guard clause](https://docs.python.org/3/library/__main__.html) so it only executes when the module is run as a script.\n", "\n", "You can find detailed guidance in the documentation on [Preparing a Scikit-Learn training script](https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html#prepare-a-scikit-learn-training-script) (for training) and the [SageMaker Scikit-Learn model server](https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html#sagemaker-scikit-learn-model-server) (for inference)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%writefile src/main.py\n", "# Python Built-Ins:\n", "import argparse\n", "import os\n", "\n", "# External Dependencies:\n", "import joblib\n", "import numpy as np\n", "import pandas as pd\n", "from sklearn.ensemble import RandomForestRegressor\n", "\n", "\n", "# ---- INFERENCE FUNCTIONS ----\n", "def model_fn(model_dir):\n", " clf = joblib.load(os.path.join(model_dir, \"model.joblib\"))\n", " return clf\n", "\n", "\n", "if __name__ == \"__main__\":\n", " # ---- TRAINING ENTRY POINT ----\n", " \n", " # Arguments like data location and hyper-parameters are passed from SageMaker to your script\n", " # via command line arguments and/or environment variables. 
You can use Python's built-in\n", "    # argparse module to parse them:\n", "    print(\"Parsing training arguments\")\n", "    parser = argparse.ArgumentParser()\n", "\n", "    # RandomForest hyperparameters\n", "    parser.add_argument(\"--n_estimators\", type=int, default=10)\n", "    parser.add_argument(\"--min_samples_leaf\", type=int, default=3)\n", "\n", "    # Data, model, and output directories\n", "    parser.add_argument(\"--model_dir\", type=str, default=os.environ.get(\"SM_MODEL_DIR\"))\n", "    parser.add_argument(\"--train_dir\", type=str, default=os.environ.get(\"SM_CHANNEL_TRAIN\"))\n", "    parser.add_argument(\"--test_dir\", type=str, default=os.environ.get(\"SM_CHANNEL_TEST\"))\n", "    parser.add_argument(\"--train_file\", type=str, default=\"train.csv\")\n", "    parser.add_argument(\"--test_file\", type=str, default=\"test.csv\")\n", "    parser.add_argument(\"--features\", type=str)  # Explicitly name which features to use\n", "    parser.add_argument(\"--target_variable\", type=str)  # Name the column to be used as target\n", "\n", "    args, _ = parser.parse_known_args()\n", "\n", "    # -- DATA PREPARATION --\n", "    # Load the data from the local folder(s) SageMaker pointed us to:\n", "    print(\"Reading data\")\n", "    train_df = pd.read_csv(os.path.join(args.train_dir, args.train_file))\n", "    test_df = pd.read_csv(os.path.join(args.test_dir, args.test_file))\n", "\n", "    print(\"Building training and testing datasets\")\n", "    X_train = train_df[args.features.split()]\n", "    X_test = test_df[args.features.split()]\n", "    y_train = train_df[args.target_variable]\n", "    y_test = test_df[args.target_variable]\n", "\n", "    # -- MODEL TRAINING --\n", "    print(\"Training model\")\n", "    model = RandomForestRegressor(\n", "        n_estimators=args.n_estimators,\n", "        min_samples_leaf=args.min_samples_leaf,\n", "        n_jobs=-1)\n", "\n", "    model.fit(X_train, y_train)\n", "\n", "    # -- MODEL EVALUATION --\n", "    print(\"Testing model\")\n", "    abs_err = np.abs(model.predict(X_test) - y_test)\n", "    # Output metrics to the console (in this case, percentile absolute errors):\n", "    for q in [10, 50, 90]:\n", "        print(f\"AE-at-{q}th-percentile: {np.percentile(a=abs_err, q=q)}\")\n", "\n", "    # -- SAVE THE MODEL --\n", "    # ...To the specific folder SageMaker pointed us to:\n", "    path = os.path.join(args.model_dir, \"model.joblib\")\n", "    joblib.dump(model, path)\n", "    print(f\"Model saved at {path}\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Local training\n", "\n", "Since configuration is by command line arguments, we can test our training script locally before running it in a SageMaker training job.\n", "\n", "> ⚠️ **Note:** This is good for quick, functional tests of your script against small sample datasets... But once you're confident your script *functionally* works, you probably want to move your experiments to reproducible, trackable SageMaker training jobs. Be aware that the libraries in your notebook kernel may not exactly match the container image you configure for the training job later."
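] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, you can check which Scikit-Learn version this kernel is running and compare it against the `framework_version` we'll request for the SageMaker container later on (a quick, optional sanity check - matching versions still wouldn't guarantee identical environments):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sanity check: the SageMaker training container configured later in this\n", "# notebook requests framework_version=\"1.0-1\", which may differ from this kernel:\n", "import sklearn\n", "\n", "print(\"Local scikit-learn version:\", sklearn.__version__)"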
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!python src/main.py \\\n", " --n_estimators 100 \\\n", " --min_samples_leaf 3 \\\n", " --model_dir model/ \\\n", " --train_dir data/ \\\n", " --test_dir data/ \\\n", " --train_file california_train.csv \\\n", " --test_file california_test.csv \\\n", " --features 'MedInc HouseAge AveRooms AveBedrms Population AveOccup Latitude Longitude' \\\n", " --target_variable target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SageMaker Training\n", "\n", "To run your script in a training job, first we need to upload the data somewhere SageMaker can access it: Typically this will be [Amazon S3](https://aws.amazon.com/s3/).\n", "\n", "### Creating data input \"channels\" (copy to S3)\n", "\n", "Note that the number and naming of multiple data \"channels\" for SageMaker is up to you: You don't need to have exactly 2, and they don't need to be called \"train\" and \"test\"." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_data_s3uri = sess.upload_data(\n", " path=\"data/california_train.csv\", # Local source\n", " bucket=bucket,\n", " key_prefix=\"sm101/sklearn\", # Destination path in S3 bucket\n", ")\n", "\n", "test_data_s3uri = sess.upload_data(\n", " path=\"data/california_test.csv\", # Local source\n", " bucket=bucket,\n", " key_prefix=\"sm101/sklearn\", # Destination path in S3 bucket\n", ")\n", "\n", "print(\"Train set URI:\", train_data_s3uri)\n", "print(\"Test set URI:\", test_data_s3uri)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Launching a training job with the Python SDK\n", "\n", "With the data uploaded and script prepared, you're ready to configure your SageMaker training job:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We use the Estimator from the SageMaker Python SDK\n", "from sagemaker.sklearn.estimator import SKLearn\n", "\n", "sklearn_estimator = SKLearn(\n", " entry_point=\"main.py\",\n", " source_dir=\"src\", # To upload the whole folder - or instead set entry_point=\"src/main.py\"\n", " role=sagemaker.get_execution_role(), # Use same IAM role as notebook is currently using\n", " instance_count=1,\n", " instance_type=\"ml.m5.large\",\n", " framework_version=\"1.0-1\",\n", " base_job_name=\"rf-scikit\",\n", " metric_definitions=[\n", " # SageMaker can extract metrics from your console logs via Regular Expressions:\n", " {\"Name\": \"median-AE\", \"Regex\": \"AE-at-50th-percentile: ([0-9.]+).*$\"},\n", " ],\n", " hyperparameters={\n", " \"n_estimators\": 100,\n", " \"min_samples_leaf\": 3,\n", " \"features\": \"MedInc HouseAge AveRooms AveBedrms Population AveOccup Latitude Longitude\",\n", " \"target_variable\": \"target\",\n", " # SageMaker data channels are always folders. 
Even if you point to a particular object\n", "        # S3 URI, you'll need to either properly support loading folder inputs in your script, or\n", "        # use extra configuration parameters to identify specific filename(s):\n", "        \"train_file\": \"california_train.csv\",\n", "        \"test_file\": \"california_test.csv\",\n", "    },\n", "    # Optional settings to run with SageMaker Managed Spot:\n", "    max_run=20*60,  # Maximum allowed active runtime (in seconds)\n", "    use_spot_instances=True,  # Use spot instances to reduce cost\n", "    max_wait=30*60,  # Maximum clock time (including spot delays)\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sklearn_estimator.fit({\"train\": train_data_s3uri, \"test\": test_data_s3uri}, wait=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember that the training job we ran was very \"light\", because the dataset is so small. As a result, running locally on the notebook was faster than running on SageMaker: SageMaker takes longer because it has to provision the training infrastructure first, and since this example job is not very resource-intensive, that provisioning overhead outweighs the training itself.\n", "\n", "In a real situation with large datasets, running on SageMaker can considerably speed up execution - and help us optimize costs, by keeping this interactive notebook environment modest and spinning up more powerful training resources on-demand.\n", "\n", "Note that this training job *did not run here on the notebook itself*. You'll be able to see the history in the [AWS Console for SageMaker - Training Jobs tab](https://console.aws.amazon.com/sagemaker/home?#/jobs) and also the [SageMaker Studio Experiments and Trials UI](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-view-compare.html).\n", "\n", "> ℹ️ **Tip:** There's **no need to re-run** a training job if your notebook kernel restarts or the estimator state is lost for some other reason... You can just *attach* to a previous training job by name - for example:\n", ">\n", "> ```python\n", "> estimator = SKLearn.attach(\"rf-scikit-2025-01-01-00-00-00-000\")\n", "> ```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy to a real-time endpoint\n", "\n", "### Deploy with Python SDK\n", "\n", "It's possible to deploy a trained `Estimator` to a SageMaker endpoint for real-time inference in one line of code, with `Estimator.deploy(...)` - which implicitly creates a SageMaker [Model](https://console.aws.amazon.com/sagemaker/home?#/models), [Endpoint Configuration](https://console.aws.amazon.com/sagemaker/home?#/endpointConfig), and [Endpoint](https://console.aws.amazon.com/sagemaker/home?#/endpoints).\n", "\n", "For more fine-grained control though, you can choose to create a `Model` object through the SageMaker Python SDK - referencing the `model.tar.gz` produced on Amazon S3 by the training job.
This would allow us to, for example:\n", "\n", "- Modify environment variables or the Python files used between training and inference\n", "- Import a model trained outside SageMaker that's been packaged to a compatible `model.tar.gz` on Amazon S3\n", "\n", "We'll demonstrate the longer route here:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sklearn_estimator.latest_training_job.wait(logs=\"None\")  # Wait until the job is finished\n", "\n", "# describe() here is equivalent to the low-level boto3 SageMaker describe_training_job\n", "job_desc = sklearn_estimator.latest_training_job.describe()\n", "model_s3uri = job_desc[\"ModelArtifacts\"][\"S3ModelArtifacts\"]\n", "\n", "print(\"Model artifact saved at:\", model_s3uri)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.sklearn.model import SKLearnModel\n", "\n", "model = SKLearnModel(\n", "    model_data=model_s3uri,\n", "    framework_version=\"1.0-1\",\n", "    py_version=\"py3\",\n", "    role=sagemaker.get_execution_role(),\n", "    entry_point=\"src/main.py\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor = model.deploy(\n", "    instance_type=\"ml.c5.large\",\n", "    initial_instance_count=1,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Real-time inference\n", "\n", "The [Predictor](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html) class from the SageMaker Python SDK provides a Python wrapper around the endpoint, which also handles (configurable) de/serialization of the request and response.\n", "\n", "Alternatively, for clients that cannot use the SageMaker Python SDK (for example non-Python clients, or Python environments where the PyPI [sagemaker](https://pypi.org/project/sagemaker/) package can't be installed for some reason), the general AWS SDKs can be used to call the lower-level [SageMaker InvokeEndpoint API](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The SKLearnPredictor does the serialization from pandas for us\n", "print(predictor.predict(testX[data.feature_names]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Delete endpoint\n", "\n", "While training job infrastructure is started on-demand and terminated as soon as the job stops, endpoints are live until we turn them off. Delete unused endpoints to prevent ongoing costs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint(delete_endpoint_config=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## (Optional) Batch inference\n", "\n", "Above we saw how you can deploy your trained model to a real-time API, but what if you want to process a whole batch of data at once? There's no need to manually orchestrate sending this data through an endpoint: You can use [SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html).\n", "\n", "Like with training, your input data for a batch transform job needs to be accessible to SageMaker (i.e. uploaded to Amazon S3), and the results will be written back to S3.
The compute infrastructure spun up for the job will be released as soon as the data is processed.\n", "\n", "Unlike with training, the input data in S3 needs to match the format your model expects for *inference*. This means we'll need to remove `target`, any unused features, and also column headers (although we could have instead overridden [input_fn](https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html#process-input) to make our model handle more input shapes)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "testX[data.feature_names].to_csv(\"data/transform_input.csv\", header=False, index=False)\n", "\n", "transform_input_s3uri = sess.upload_data(\n", " path=\"data/transform_input.csv\", # Local source\n", " bucket=bucket,\n", " key_prefix=\"sm101/sklearn\", # Destination path in S3 bucket\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the input data uploaded, you're ready to run a transform job using the `model` from before:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "transformer = model.transformer(\n", " instance_count=1,\n", " instance_type=\"ml.m5.xlarge\",\n", " # Input Parameters:\n", " strategy=\"MultiRecord\", # Batch multiple records per request to the endpoint\n", " max_payload=2, # Max 2MB payload per request\n", " max_concurrent_transforms=2, # 2 concurrent request threads per instance\n", " # Output Parameters:\n", " output_path=f\"s3://{bucket}/sm101/sklearn-transforms\",\n", " accept=\"text/csv\", # Request CSV output format\n", " assemble_with=\"Line\", # Records in CSV output are newline-separated\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "transformer.transform(\n", " transform_input_s3uri,\n", " content_type=\"text/csv\", # Input files are CSV format\n", " split_type=\"Line\", # Interpret each line of the CSV as a separate record\n", " join_source=\"Input\", # Bring input features through to the output file\n", " wait=True, # Keep this notebook blocked until the job completes\n", " logs=True, # Stream logs to the notebook\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For each input object in S3, Batch Transform will generate a similar object under the output folder with `.out` appended to the file name. 
In our simple example, there was just one input CSV, so there will be one `.csv.out` result file:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "job_desc = sm_boto3.describe_transform_job(TransformJobName=transformer.latest_transform_job.name)\n", "output_s3uri = job_desc[\"TransformOutput\"][\"S3OutputPath\"]\n", "\n", "# pd.read_csv() needs the URI of a specific object rather than the output folder, and our Batch\n", "# Transform results have a .csv.out extension instead of .csv: So manually build the file path:\n", "!echo \"Output folder contents:\" && aws s3 ls {output_s3uri}/\n", "\n", "input_filename = transform_input_s3uri.rpartition(\"/\")[2]\n", "output_file_s3uri = f\"{output_s3uri}/{input_filename}.out\"\n", "\n", "print(f\"\\nReading {output_file_s3uri} from S3\")\n", "pd.read_csv(output_file_s3uri, names=data.feature_names + [\"prediction\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusions\n", "\n", "In this notebook, we saw an example of:\n", "\n", "- Running your own Scikit-Learn-based model training script as a SageMaker training job, with configurable parameters and logged accuracy metrics\n", "- Deploying the trained model to a real-time inference API\n", "- Using the model for batch inference\n", "\n", "SageMaker took care of the model serving stack for us with no boilerplate code required: Just define [override functions](https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html#sagemaker-scikit-learn-model-server) if needed (like `input_fn` and `model_fn`) to customize the default behaviour. At training time, our script read parameters from the command line arguments and environment variables provided through SageMaker - and loaded data from a local folder, because the download from S3 is taken care of by SageMaker too.\n", "\n", "By using the SageMaker APIs (instead of just working locally in the notebook), we can improve the traceability and reproducibility of experiments; optimize our compute resource usage; and accelerate the path from trained model to production deployment." ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 2.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:ap-southeast-1:492261229750:image/sagemaker-data-science-38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 4 }