{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Direct Marketing with Amazon SageMaker Autopilot\n", "---\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Introduction](#Introduction)\n", "1. [Prerequisites](#Prerequisites)\n", "1. [Downloading the dataset](#Downloading)\n", "1. [Upload the dataset to Amazon S3](#Uploading)\n", "1. [Setting up the SageMaker Autopilot Job](#Settingup)\n", "1. [Launching the SageMaker Autopilot Job](#Launching)\n", "1. [Tracking SageMaker Autopilot Job Progress](#Tracking)\n", "1. [Results](#Results)\n", "1. [Cleanup](#Cleanup)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "Amazon SageMaker Autopilot is an automated machine learning (commonly referred to as AutoML) solution for tabular datasets. You can use SageMaker Autopilot in different ways: on autopilot (hence the name) or with human guidance, without code through SageMaker Studio, or using the AWS SDKs. As a first glimpse, this notebook uses the AWS SDKs to create and deploy a machine learning model.\n", "\n", "A typical introductory task in machine learning (the \"Hello World\" equivalent) is one that uses a dataset to predict whether a customer will enroll in a term deposit at a bank, after one or more phone calls. For more information about the task and the dataset used, see [Bank Marketing Data Set](https://archive.ics.uci.edu/ml/datasets/bank+marketing).\n", "\n", "Direct marketing, through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to target only the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem. You can imagine that this task would readily translate to marketing lead prioritization in your own organization.\n", "\n", "This notebook demonstrates how you can use Autopilot on this dataset to get the most accurate ML pipeline by exploring a number of potential options, or \"candidates\". Each candidate generated by Autopilot consists of two steps: the first performs automated feature engineering on the dataset, and the second trains and tunes an algorithm to produce a model. When you deploy this model, it follows similar steps: feature engineering followed by inference, to decide whether the lead is worth pursuing. The notebook contains instructions on how to train the model as well as how to deploy it to perform batch predictions on a set of leads. Where possible, it uses the Amazon SageMaker Python SDK, a high-level SDK, to simplify the way you interact with Amazon SageMaker.\n", "\n", "Other examples demonstrate how to customize models in various ways. For instance, models deployed to devices typically have memory constraints that need to be satisfied in addition to accuracy targets. Other use cases have real-time deployment requirements and latency constraints. For now, keep it simple." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\n", "\n", "Before you start the tasks in this tutorial, you need the following:\n", "\n", "- An Amazon Simple Storage Service (Amazon S3) bucket and prefix to use for training and model data. The bucket should be within the same Region as Amazon SageMaker training.\n",
"The code below uses the session's default bucket, creating it if it does not already exist.\n", "- An IAM role that gives Autopilot access to your data. See the Amazon SageMaker documentation for more information on IAM roles: https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam.html" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# cell 01\n", "import sagemaker\n", "import boto3\n", "from sagemaker import get_execution_role\n", "\n", "region = boto3.Session().region_name\n", "\n", "session = sagemaker.Session()\n", "bucket = session.default_bucket()\n", "prefix = 'sagemaker/autopilot-dm'\n", "\n", "role = get_execution_role()\n", "\n", "sm = boto3.Session().client(service_name='sagemaker', region_name=region)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Downloading the dataset\n", "Download the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data S3 bucket.\n", "\n", "\\[Moro et al., 2014\\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2021-02-02 06:21:51-- https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip\n", "Resolving sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)... 52.218.241.41\n", "Connecting to sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)|52.218.241.41|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 432828 (423K) [application/zip]\n", "Saving to: ‘bank-additional.zip’\n", "\n", "bank-additional.zip 100%[===================>] 422.68K 881KB/s in 0.5s \n", "\n", "2021-02-02 06:21:52 (881 KB/s) - ‘bank-additional.zip’ saved [432828/432828]\n", "\n", "Collecting package metadata (current_repodata.json): done\n", "Solving environment: done\n", "\n", "\n", "==> WARNING: A newer version of conda exists. 
<==\n", " current version: 4.8.2\n", " latest version: 4.9.2\n", "\n", "Please update conda by running\n", "\n", " $ conda update -n base -c defaults conda\n", "\n", "\n", "\n", "## Package Plan ##\n", "\n", " environment location: /opt/conda\n", "\n", " added / updated specs:\n", " - unzip\n", "\n", "\n", "The following packages will be downloaded:\n", "\n", " package | build\n", " ---------------------------|-----------------\n", " conda-4.9.2 | py37h89c1867_0 3.0 MB conda-forge\n", " python_abi-3.7 | 1_cp37m 4 KB conda-forge\n", " unzip-6.0 | h516909a_2 141 KB conda-forge\n", " ------------------------------------------------------------\n", " Total: 3.2 MB\n", "\n", "The following NEW packages will be INSTALLED:\n", "\n", " python_abi conda-forge/linux-64::python_abi-3.7-1_cp37m\n", " unzip conda-forge/linux-64::unzip-6.0-h516909a_2\n", "\n", "The following packages will be UPDATED:\n", "\n", " conda pkgs/main::conda-4.8.2-py37_0 --> conda-forge::conda-4.9.2-py37h89c1867_0\n", "\n", "\n", "\n", "Downloading and Extracting Packages\n", "python_abi-3.7 | 4 KB | ##################################### | 100% \n", "conda-4.9.2 | 3.0 MB | ##################################### | 100% \n", "unzip-6.0 | 141 KB | ##################################### | 100% \n", "Preparing transaction: done\n", "Verifying transaction: done\n", "Executing transaction: done\n", "Archive: bank-additional.zip\n", " creating: bank-additional/\n", " inflating: bank-additional/bank-additional-names.txt \n", " inflating: bank-additional/bank-additional.csv \n", " inflating: bank-additional/bank-additional-full.csv \n" ] } ], "source": [ "# cell 02\n", "!wget -N https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip\n", "!conda install -y -c conda-forge unzip\n", "!unzip -o bank-additional.zip\n", "\n", "local_data_path = './bank-additional/bank-additional-full.csv'\n" ] }, { "cell_type": "markdown", "metadata": { "toc-hr-collapsed": true }, "source": [ "## Upload the dataset to Amazon S3\n", "\n", "Before you run Autopilot on the dataset, first perform a check of the dataset to make sure that it has no obvious errors. The Autopilot process can take a long time, and it's generally a good practice to inspect the dataset before you start a job. This particular dataset is small, so you can inspect it in the notebook instance itself. If you have a larger dataset that will not fit in the notebook instance's memory, inspect the dataset offline using a big data analytics tool like Apache Spark. [Deequ](https://github.com/awslabs/deequ) is a library built on top of Apache Spark that can be helpful for performing checks on large datasets. Autopilot can handle datasets of up to 5 GB.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Read the data into a pandas data frame and take a look." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": {
"text/plain": [ " age job marital education default housing loan \\\n", "0 56 housemaid married basic.4y no no no \n", "1 57 services married high.school unknown no no \n", "2 37 services married high.school no yes no \n", "3 40 admin. married basic.6y no no no \n", "4 56 services married high.school no no yes \n", "... ... ... ... ... ... ... ... \n", "41183 73 retired married professional.course no yes no \n", "41184 46 blue-collar married professional.course no no no \n", "41185 56 retired married university.degree no yes no \n", "41186 44 technician married professional.course no no no \n", "41187 74 retired married professional.course no yes no \n", "\n", " contact month day_of_week duration campaign pdays previous \\\n", "0 telephone may mon 261 1 999 0 \n", "1 telephone may mon 149 1 999 0 \n", "2 telephone may mon 226 1 999 0 \n", "3 telephone may mon 151 1 999 0 \n", "4 telephone may mon 307 1 999 0 \n", "... ... ... ... ... ... ... ... \n", "41183 cellular nov fri 334 1 999 0 \n", "41184 cellular nov fri 383 1 999 0 \n", "41185 cellular nov fri 189 2 999 0 \n", "41186 cellular nov fri 442 1 999 0 \n", "41187 cellular nov fri 239 3 999 1 \n", "\n", " poutcome emp.var.rate cons.price.idx cons.conf.idx euribor3m \\\n", "0 nonexistent 1.1 93.994 -36.4 4.857 \n", "1 nonexistent 1.1 93.994 -36.4 4.857 \n", "2 nonexistent 1.1 93.994 -36.4 4.857 \n", "3 nonexistent 1.1 93.994 -36.4 4.857 \n", "4 nonexistent 1.1 93.994 -36.4 4.857 \n", "... ... ... ... ... ... \n", "41183 nonexistent -1.1 94.767 -50.8 1.028 \n", "41184 nonexistent -1.1 94.767 -50.8 1.028 \n", "41185 nonexistent -1.1 94.767 -50.8 1.028 \n", "41186 nonexistent -1.1 94.767 -50.8 1.028 \n", "41187 failure -1.1 94.767 -50.8 1.028 \n", "\n", " nr.employed y \n", "0 5191.0 no \n", "1 5191.0 no \n", "2 5191.0 no \n", "3 5191.0 no \n", "4 5191.0 no \n", "... ... ... \n", "41183 4963.6 yes \n", "41184 4963.6 no \n", "41185 4963.6 no \n", "41186 4963.6 yes \n", "41187 4963.6 no \n", "\n", "[41188 rows x 21 columns]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# cell 03\n", "import pandas as pd\n", "\n", "data = pd.read_csv(local_data_path)\n", "pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns\n", "pd.set_option('display.max_rows', 10) # Keep the output on one page\n", "data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that there are 20 features to help predict the target column 'y'.\n", "\n", "Amazon SageMaker Autopilot takes care of preprocessing your data for you. You do not need to perform conventional data preprocessing techniques such as handling missing values, converting categorical features to numeric features, scaling data, and handling more complicated data types.\n", "\n", "Moreover, splitting the dataset into training and validation splits is not necessary. Autopilot takes care of this for you. You may, however, want to split out a test set. That's next, although you use it for batch inference at the end instead of testing the model.\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Reserve some data for calling batch inference on the model\n", "\n", "Divide the data into training and testing splits. The training split is used by SageMaker Autopilot. The testing split is reserved for performing inference with the suggested model.\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# cell 04\n", "train_data = data.sample(frac=0.8, random_state=200)\n", "\n", "test_data = data.drop(train_data.index)\n", "\n", "test_data_no_target = test_data.drop(columns=['y'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Upload the dataset to Amazon S3\n", "Copy the files to Amazon Simple Storage Service (Amazon S3) in CSV format for Amazon SageMaker training to use." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train data uploaded to: s3://sagemaker-eu-west-1-802173394839/sagemaker/autopilot-dm/train/train_data.csv\n", "Test data uploaded to: s3://sagemaker-eu-west-1-802173394839/sagemaker/autopilot-dm/test/test_data.csv\n" ] } ], "source": [ "# cell 05\n", "train_file = 'train_data.csv'\n", "train_data.to_csv(train_file, index=False, header=True)\n", "train_data_s3_path = session.upload_data(path=train_file, key_prefix=prefix + \"/train\")\n", "print('Train data uploaded to: ' + train_data_s3_path)\n", "\n", "test_file = 'test_data.csv'\n", "test_data_no_target.to_csv(test_file, index=False, header=False)\n", "test_data_s3_path = session.upload_data(path=test_file, key_prefix=prefix + \"/test\")\n", "print('Test data uploaded to: ' + test_data_s3_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setting up the SageMaker Autopilot Job\n", "\n", "After uploading the dataset to Amazon S3, you can invoke Autopilot to find the best ML pipeline to train a model on this dataset. \n", "\n", "The required inputs for invoking an Autopilot job are:\n", "* Amazon S3 location for the input dataset and for all output artifacts\n", "* Name of the column of the dataset you want to predict (`y` in this case) \n", "* An IAM role\n", "\n", "Currently Autopilot supports only tabular datasets in CSV format. Either all files should have a header row, or the first file of the dataset, when sorted in alphabetical/lexical order, is expected to have a header row." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# cell 06\n", "input_data_config = [{\n", "    'DataSource': {\n", "        'S3DataSource': {\n", "            'S3DataType': 'S3Prefix',\n", "            'S3Uri': 's3://{}/{}/train'.format(bucket, prefix)\n", "        }\n", "    },\n", "    'TargetAttributeName': 'y'\n", "}]\n", "\n", "output_data_config = {\n", "    'S3OutputPath': 's3://{}/{}/output'.format(bucket, prefix)\n", "}\n", "\n", "autoMLJobConfig = {\n", "    'CompletionCriteria': {\n", "        'MaxCandidates': 5\n", "    }\n", "}\n", "\n", "autoMLJobObjective = {\n", "    \"MetricName\": \"Accuracy\"\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also specify the type of problem you want to solve with your dataset (`Regression, MulticlassClassification, BinaryClassification`). If you are not sure, SageMaker Autopilot infers the problem type based on statistics of the target column (the column you want to predict). \n", "\n", "You can limit the running time of a SageMaker Autopilot job by providing either the maximum number of pipeline evaluations (one pipeline evaluation is called a `Candidate` because it generates a candidate model) or the total time allocated for the overall Autopilot job. Under default settings, this job takes about four hours to run. This varies between runs because of the nature of the exploratory process Autopilot uses to find optimal training parameters." ] },
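{ "cell_type": "markdown", "metadata": {}, "source": [ "As an illustration (not used by the job launched below), `CompletionCriteria` also accepts time-based limits alongside `MaxCandidates`. A hedged sketch of such a configuration:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative alternative: cap the total job runtime and each training job's runtime\n", "autoMLJobConfigWithTimeLimits = {\n", "    'CompletionCriteria': {\n", "        'MaxCandidates': 5,\n", "        'MaxRuntimePerTrainingJobInSeconds': 1800,\n", "        'MaxAutoMLJobRuntimeInSeconds': 14400\n", "    }\n", "}" ] },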
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Launching the SageMaker Autopilot Job\n", "\n", "You can now launch the Autopilot job by calling the [`create_auto_ml_job` API](https://docs.aws.amazon.com/cli/latest/reference/sagemaker/create-auto-ml-job.html)." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AutoMLJobName: automl-banking-02-06-22-52\n" ] }, { "data": { "text/plain": [ "{'AutoMLJobArn': 'arn:aws:sagemaker:eu-west-1:802173394839:automl-job/automl-banking-02-06-22-52',\n", " 'ResponseMetadata': {'RequestId': '77a5451d-4cd5-48af-8039-54362cebc5c3',\n", " 'HTTPStatusCode': 200,\n", " 'HTTPHeaders': {'x-amzn-requestid': '77a5451d-4cd5-48af-8039-54362cebc5c3',\n", " 'content-type': 'application/x-amz-json-1.1',\n", " 'content-length': '97',\n", " 'date': 'Tue, 02 Feb 2021 06:22:53 GMT'},\n", " 'RetryAttempts': 0}}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# cell 07\n", "from time import gmtime, strftime, sleep\n", "timestamp_suffix = strftime('%d-%H-%M-%S', gmtime())\n", "\n", "auto_ml_job_name = 'automl-banking-' + timestamp_suffix\n", "print('AutoMLJobName: ' + auto_ml_job_name)\n", "\n", "sm.create_auto_ml_job(AutoMLJobName=auto_ml_job_name,\n", "                      InputDataConfig=input_data_config,\n", "                      OutputDataConfig=output_data_config,\n", "                      AutoMLJobConfig=autoMLJobConfig,\n", "                      AutoMLJobObjective=autoMLJobObjective,\n", "                      ProblemType=\"BinaryClassification\",\n", "                      RoleArn=role)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Tracking SageMaker Autopilot job progress\n", "A SageMaker Autopilot job consists of the following high-level steps:\n", "* Analyzing Data, where the dataset is analyzed and Autopilot comes up with a list of ML pipelines that should be tried out on the dataset. The dataset is also split into train and validation sets.\n", "* Feature Engineering, where Autopilot performs feature transformation on individual features of the dataset as well as at an aggregate level.\n", "* Model Tuning, where the top-performing pipeline is selected along with the optimal hyperparameters for the training algorithm (the last stage of the pipeline).
" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "JobStatus - Secondary Status\n", "------------------------------\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - AnalyzingData\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - FeatureEngineering\n", "InProgress - ModelTuning\n", "InProgress - ModelTuning\n", "InProgress - ModelTuning\n", "InProgress - ModelTuning\n", "InProgress - ModelTuning\n", "InProgress - ModelTuning\n", "InProgress - ModelTuning\n", "InProgress - ModelTuning\n", "Completed - MaxCandidatesReached\n" ] } ], "source": [ "# cell 08\n", "print ('JobStatus - Secondary Status')\n", "print('------------------------------')\n", "\n", "\n", "describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)\n", "print (describe_response['AutoMLJobStatus'] + \" - \" + describe_response['AutoMLJobSecondaryStatus'])\n", "job_run_status = describe_response['AutoMLJobStatus']\n", " \n", "while job_run_status not in ('Failed', 'Completed', 'Stopped'):\n", " describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)\n", " job_run_status = describe_response['AutoMLJobStatus']\n", " \n", " print (describe_response['AutoMLJobStatus'] + \" - \" + describe_response['AutoMLJobSecondaryStatus'])\n", " sleep(30)" ] }, { "cell_type": "markdown", "metadata": { "toc-hr-collapsed": true }, "source": [ "## Results\n", "\n", "Now use the describe_auto_ml_job API to look up the best candidate selected by the SageMaker Autopilot job. 
" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'CandidateName': 'tuning-job-1-9cfcb4d2ad9f4bdeaf-002-f841b3aa', 'FinalAutoMLJobObjectiveMetric': {'MetricName': 'validation:accuracy', 'Value': 0.9154599905014038}, 'ObjectiveStatus': 'Succeeded', 'CandidateSteps': [{'CandidateStepType': 'AWS::SageMaker::ProcessingJob', 'CandidateStepArn': 'arn:aws:sagemaker:eu-west-1:802173394839:processing-job/db-1-ec9b4a56e2e7407f8dc35485712d9d6d6e4d6f8bfb8e4fb2a300c77dda', 'CandidateStepName': 'db-1-ec9b4a56e2e7407f8dc35485712d9d6d6e4d6f8bfb8e4fb2a300c77dda'}, {'CandidateStepType': 'AWS::SageMaker::TrainingJob', 'CandidateStepArn': 'arn:aws:sagemaker:eu-west-1:802173394839:training-job/automl-ban-dpp0-1-49de0063e27b44d0b838d7397a938b5bf82e5427e2ed4', 'CandidateStepName': 'automl-ban-dpp0-1-49de0063e27b44d0b838d7397a938b5bf82e5427e2ed4'}, {'CandidateStepType': 'AWS::SageMaker::TransformJob', 'CandidateStepArn': 'arn:aws:sagemaker:eu-west-1:802173394839:transform-job/automl-ban-dpp0-csv-1-1df5e23f26e44b27993acc8d644a8c07629d627e8', 'CandidateStepName': 'automl-ban-dpp0-csv-1-1df5e23f26e44b27993acc8d644a8c07629d627e8'}, {'CandidateStepType': 'AWS::SageMaker::TrainingJob', 'CandidateStepArn': 'arn:aws:sagemaker:eu-west-1:802173394839:training-job/tuning-job-1-9cfcb4d2ad9f4bdeaf-002-f841b3aa', 'CandidateStepName': 'tuning-job-1-9cfcb4d2ad9f4bdeaf-002-f841b3aa'}], 'CandidateStatus': 'Completed', 'InferenceContainers': [{'Image': '141502667606.dkr.ecr.eu-west-1.amazonaws.com/sagemaker-sklearn-automl:0.2-1-cpu-py3', 'ModelDataUrl': 's3://sagemaker-eu-west-1-802173394839/sagemaker/autopilot-dm/output/automl-banking-02-06-22-52/data-processor-models/automl-ban-dpp0-1-49de0063e27b44d0b838d7397a938b5bf82e5427e2ed4/output/model.tar.gz', 'Environment': {'AUTOML_TRANSFORM_MODE': 'feature-transform', 'SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT': 'application/x-recordio-protobuf', 'SAGEMAKER_PROGRAM': 'sagemaker_serve', 'SAGEMAKER_SUBMIT_DIRECTORY': '/opt/ml/model/code'}}, {'Image': '141502667606.dkr.ecr.eu-west-1.amazonaws.com/sagemaker-xgboost:1.0-1-cpu-py3', 'ModelDataUrl': 's3://sagemaker-eu-west-1-802173394839/sagemaker/autopilot-dm/output/automl-banking-02-06-22-52/tuning/automl-ban-dpp0-xgb/tuning-job-1-9cfcb4d2ad9f4bdeaf-002-f841b3aa/output/model.tar.gz', 'Environment': {'MAX_CONTENT_LENGTH': '20971520', 'SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT': 'text/csv', 'SAGEMAKER_INFERENCE_OUTPUT': 'predicted_label', 'SAGEMAKER_INFERENCE_SUPPORTED': 'predicted_label,probability,probabilities'}}, {'Image': '141502667606.dkr.ecr.eu-west-1.amazonaws.com/sagemaker-sklearn-automl:0.2-1-cpu-py3', 'ModelDataUrl': 's3://sagemaker-eu-west-1-802173394839/sagemaker/autopilot-dm/output/automl-banking-02-06-22-52/data-processor-models/automl-ban-dpp0-1-49de0063e27b44d0b838d7397a938b5bf82e5427e2ed4/output/model.tar.gz', 'Environment': {'AUTOML_TRANSFORM_MODE': 'inverse-label-transform', 'SAGEMAKER_DEFAULT_INVOCATIONS_ACCEPT': 'text/csv', 'SAGEMAKER_INFERENCE_INPUT': 'predicted_label', 'SAGEMAKER_INFERENCE_OUTPUT': 'predicted_label', 'SAGEMAKER_INFERENCE_SUPPORTED': 'predicted_label,probability,labels,probabilities', 'SAGEMAKER_PROGRAM': 'sagemaker_serve', 'SAGEMAKER_SUBMIT_DIRECTORY': '/opt/ml/model/code'}}], 'CreationTime': datetime.datetime(2021, 2, 2, 6, 42, 29, tzinfo=tzlocal()), 'EndTime': datetime.datetime(2021, 2, 2, 6, 43, 57, tzinfo=tzlocal()), 'LastModifiedTime': datetime.datetime(2021, 2, 2, 6, 44, 51, 367000, tzinfo=tzlocal())}\n", "\n", "\n", 
"CandidateName: tuning-job-1-9cfcb4d2ad9f4bdeaf-002-f841b3aa\n", "FinalAutoMLJobObjectiveMetricName: validation:accuracy\n", "FinalAutoMLJobObjectiveMetricValue: 0.9154599905014038\n" ] } ], "source": [ "# cell 09\n", "best_candidate = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['BestCandidate']\n", "best_candidate_name = best_candidate['CandidateName']\n", "print(best_candidate)\n", "print('\\n')\n", "print(\"CandidateName: \" + best_candidate_name)\n", "print(\"FinalAutoMLJobObjectiveMetricName: \" + best_candidate['FinalAutoMLJobObjectiveMetric']['MetricName'])\n", "print(\"FinalAutoMLJobObjectiveMetricValue: \" + str(best_candidate['FinalAutoMLJobObjectiveMetric']['Value']))" ] }, { "cell_type": "markdown", "metadata": { "toc-hr-collapsed": false }, "source": [ "### Perform batch inference using the best candidate\n", "\n", "Now that you have successfully completed the SageMaker Autopilot job on the dataset, create a model from any of the candidates by using [Inference Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html). " ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model ARN corresponding to the best candidate is : arn:aws:sagemaker:eu-west-1:802173394839:model/automl-banking-model-02-06-22-52\n" ] } ], "source": [ "# cell 10\n", "model_name = 'automl-banking-model-' + timestamp_suffix\n", "\n", "model = sm.create_model(Containers=best_candidate['InferenceContainers'],\n", " ModelName=model_name,\n", " ExecutionRoleArn=role)\n", "\n", "print('Model ARN corresponding to the best candidate is : {}'.format(model['ModelArn']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can use batch inference by using Amazon SageMaker batch transform. The same model can also be deployed to perform online inference using Amazon SageMaker hosting." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "{'TransformJobArn': 'arn:aws:sagemaker:eu-west-1:802173394839:transform-job/automl-banking-transform-02-06-22-52',\n", " 'ResponseMetadata': {'RequestId': '4a398e95-488f-4340-b05b-a4d76088de79',\n", " 'HTTPStatusCode': 200,\n", " 'HTTPHeaders': {'x-amzn-requestid': '4a398e95-488f-4340-b05b-a4d76088de79',\n", " 'content-type': 'application/x-amz-json-1.1',\n", " 'content-length': '113',\n", " 'date': 'Tue, 02 Feb 2021 06:47:22 GMT'},\n", " 'RetryAttempts': 0}}" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# cell 11\n", "transform_job_name = 'automl-banking-transform-' + timestamp_suffix\n", "\n", "transform_input = {\n", " 'DataSource': {\n", " 'S3DataSource': {\n", " 'S3DataType': 'S3Prefix',\n", " 'S3Uri': test_data_s3_path\n", " }\n", " },\n", " 'ContentType': 'text/csv',\n", " 'CompressionType': 'None',\n", " 'SplitType': 'Line'\n", " }\n", "\n", "transform_output = {\n", " 'S3OutputPath': 's3://{}/{}/inference-results'.format(bucket,prefix),\n", " }\n", "\n", "transform_resources = {\n", " 'InstanceType': 'ml.m5.4xlarge',\n", " 'InstanceCount': 1\n", " }\n", "\n", "sm.create_transform_job(TransformJobName = transform_job_name,\n", " ModelName = model_name,\n", " TransformInput = transform_input,\n", " TransformOutput = transform_output,\n", " TransformResources = transform_resources\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Watch the transform job for completion." 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Watch the transform job for completion." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "JobStatus\n", "----------\n", "InProgress\n", "InProgress\n", "InProgress\n", "InProgress\n", "InProgress\n", "InProgress\n", "InProgress\n", "InProgress\n", "InProgress\n", "InProgress\n", "Completed\n" ] } ], "source": [ "# cell 12\n", "print('JobStatus')\n", "print('----------')\n", "\n", "\n", "describe_response = sm.describe_transform_job(TransformJobName=transform_job_name)\n", "job_run_status = describe_response['TransformJobStatus']\n", "print(job_run_status)\n", "\n", "while job_run_status not in ('Failed', 'Completed', 'Stopped'):\n", "    describe_response = sm.describe_transform_job(TransformJobName=transform_job_name)\n", "    job_run_status = describe_response['TransformJobStatus']\n", "    print(job_run_status)\n", "    sleep(30)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's view the results of the transform job:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": {
"text/plain": [ " no\n", "0 no\n", "1 no\n", "2 no\n", "3 no\n", "4 no\n", "... ...\n", "8232 yes\n", "8233 yes\n", "8234 no\n", "8235 yes\n", "8236 yes\n", "\n", "[8237 rows x 1 columns]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# cell 13\n", "s3_output_key = '{}/inference-results/test_data.csv.out'.format(prefix)\n", "local_inference_results_path = 'inference_results.csv'\n", "\n", "s3 = boto3.resource('s3')\n", "inference_results_bucket = s3.Bucket(session.default_bucket())\n", "\n", "inference_results_bucket.download_file(s3_output_key, local_inference_results_path)\n", "\n", "# The output file has no header row, so read_csv shows the first prediction as the column header\n", "data = pd.read_csv(local_inference_results_path, sep=';')\n", "pd.set_option('display.max_rows', 10) # Keep the output on one page\n", "data" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### View other candidates explored by SageMaker Autopilot\n", "You can view all the candidates (pipeline evaluations with different hyperparameter combinations) that were explored by SageMaker Autopilot and sort them by their final performance metric." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1 tuning-job-1-9cfcb4d2ad9f4bdeaf-002-f841b3aa 0.9154599905014038\n", "2 tuning-job-1-9cfcb4d2ad9f4bdeaf-001-f8483e5d 0.9142500162124634\n", "3 tuning-job-1-9cfcb4d2ad9f4bdeaf-005-9e897a9f 0.9080299735069275\n", "4 tuning-job-1-9cfcb4d2ad9f4bdeaf-004-b2eee8f9 0.9072697162628174\n", "5 tuning-job-1-9cfcb4d2ad9f4bdeaf-003-7f2b193e 0.9045400023460388\n" ] } ], "source": [ "# cell 14\n", "candidates = sm.list_candidates_for_auto_ml_job(AutoMLJobName=auto_ml_job_name, SortBy='FinalObjectiveMetricValue')['Candidates']\n", "index = 1\n", "for candidate in candidates:\n", "    print(str(index) + \" \" + candidate['CandidateName'] + \" \" + str(candidate['FinalAutoMLJobObjectiveMetric']['Value']))\n", "    index += 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Candidate Generation Notebook\n", " \n", "SageMaker Autopilot also auto-generates a Candidate Definitions notebook. This notebook can be used to interactively step through the various steps taken by SageMaker Autopilot to arrive at the best candidate. It can also be used to override various runtime parameters like parallelism, hardware used, algorithms explored, feature extraction scripts, and more.\n", " \n", "The notebook can be downloaded from the following Amazon S3 location:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'s3://sagemaker-eu-west-1-802173394839/sagemaker/autopilot-dm/output/automl-banking-02-06-22-52/sagemaker-automl-candidates/pr-1-5a84605f24d74422b0acacccf5f537dbbcd2886960fe47bcaad3dacf77/notebooks/SageMakerAutopilotCandidateDefinitionNotebook.ipynb'" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# cell 15\n", "sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']['CandidateDefinitionNotebookLocation']\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Exploration Notebook\n", "SageMaker Autopilot also auto-generates a Data Exploration notebook, which can be downloaded from the following Amazon S3 location:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'s3://sagemaker-eu-west-1-802173394839/sagemaker/autopilot-dm/output/automl-banking-02-06-22-52/sagemaker-automl-candidates/pr-1-5a84605f24d74422b0acacccf5f537dbbcd2886960fe47bcaad3dacf77/notebooks/SageMakerAutopilotDataExplorationNotebook.ipynb'" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# cell 16\n", "sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']['DataExplorationNotebookLocation']\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cleanup\n", "\n", "The Autopilot job creates many underlying artifacts such as dataset splits, preprocessing scripts, and preprocessed data. This code, when uncommented, deletes them. The operation also deletes all the generated models and the auto-generated notebooks." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# cell 17\n", "#s3 = boto3.resource('s3')\n", "#bucket = s3.Bucket(bucket)\n", "\n", "#job_outputs_prefix = '{}/output/{}'.format(prefix, auto_ml_job_name)\n", "#bucket.objects.filter(Prefix=job_outputs_prefix).delete()" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:470317259841:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }