{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Direct Marketing with Amazon SageMaker Autopilot\n",
"\n",
"This notebook works well with the `Python 3 (Data Science 2.0)` kernel on SageMaker Studio.\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {
"heading_collapsed": "true",
"tags": []
},
"source": [
"## Contents\n",
"\n",
"1. [Introduction](#Introduction)\n",
"1. [Prerequisites](#Prerequisites)\n",
"1. [Downloading the dataset](#Downloading)\n",
"1. [Upload the dataset to Amazon S3](#Uploading)\n",
"1. [Setting up the SageMaker Autopilot Job](#Settingup)\n",
"1. [Launching the SageMaker Autopilot Job](#Launching)\n",
"1. [Tracking Sagemaker Autopilot Job Progress](#Tracking)\n",
"1. [Results](#Results)\n",
"1. [Cleanup](#Cleanup)"
]
},
{
"cell_type": "markdown",
"metadata": {
"heading_collapsed": "true",
"tags": []
},
"source": [
"## Introduction\n",
"\n",
"[Amazon SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot/) is an automated machine learning (commonly referred to as AutoML) solution for tabular datasets. You can use SageMaker Autopilot in different ways:\n",
"\n",
"- On autopilot (hence the name) or with human guidance\n",
"- Without code (through the SageMaker Studio UI), or using the AWS SDKs.\n",
"\n",
"This notebook, as a first glimpse, will use the AWS SDKs to simply create and deploy a machine learning model.\n",
"\n",
"A typical introductory task in machine learning (the \"Hello World\" equivalent) is one that uses a dataset to predict whether a customer will enroll for a term deposit at a bank, after one or more phone calls. For more information about the task and the dataset used, see [Bank Marketing Data Set](https://archive.ics.uci.edu/ml/datasets/bank+marketing).\n",
"\n",
"Direct marketing, through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to only target the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem. You can imagine that this task would readily translate to marketing lead prioritization in your own organization.\n",
"\n",
"This notebook demonstrates how you can use Autopilot on this dataset to get the most accurate ML pipeline through exploring a number of potential options, or \"candidates\". Each candidate generated by Autopilot consists of two steps. The first step performs automated feature engineering on the dataset and the second step trains and tunes an algorithm to produce a model. When you deploy this model, it follows similar steps. Feature engineering followed by inference, to decide whether the lead is worth pursuing or not. The notebook contains instructions on how to train the model as well as to deploy the model to perform batch predictions on a set of leads. Where it is possible, use the Amazon SageMaker Python SDK, a high level SDK, to simplify the way you interact with Amazon SageMaker.\n",
"\n",
"Other examples demonstrate how to customize models in various ways. For instance, models deployed to devices typically have memory constraints that need to be satisfied as well as accuracy. Other use cases have real-time deployment requirements and latency constraints. For now, keep it simple."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"Before getting started, we'll check & upgrade a few installed library versions to avoid some past incompatibilities (see [numpy#18355](https://github.com/numpy/numpy/issues/18355) and [aiobotocore#905](https://github.com/aio-libs/aiobotocore/issues/905)).\n",
"\n",
"> ⚠️ **Note:** If you have any other notebooks running in the same Studio \"app\" (same kernel and instance type) that have already `import`ed these libraries into memory, you might see unexpected errors and need to restart those notebook kernels after this install."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install \"pandas>=1.0.5\" \"s3fs>=2022.01.0\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With dependencies installed, this next code cell will:\n",
"\n",
"- **configure** the S3 bucket and folder where data should be stored (to keep our environment tidy)\n",
"- **connect** to SageMaker with the open-source [Sagemaker Python SDK](https://sagemaker.readthedocs.io/en/stable/), `sagemaker`\n",
"\n",
"Note that while [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) is the general-purpose AWS SDK for Python, `sagemaker` provides some powerful, higher-level interfaces designed specifically for ML workflows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sagemaker\n",
"\n",
"session = sagemaker.Session()\n",
"\n",
"bucket = session.default_bucket()\n",
"prefix = \"sm101/autopilot-dm\"\n",
"\n",
"sm = session.sagemaker_client\n",
"\n",
"role = sagemaker.get_execution_role()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Downloading the dataset\n",
"\n",
"In this example we'll use the **direct marketing dataset** as per:\n",
"\n",
"> *\\[Moro et al., 2014\\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014*\n",
"\n",
"Here, we'll download [the dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the SageMaker sample data S3 bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!wget -P data/ -N https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip\n",
"\n",
"import zipfile\n",
"\n",
"with zipfile.ZipFile(\"data/bank-additional.zip\", \"r\") as zip_ref:\n",
" print(\"Unzipping...\")\n",
" zip_ref.extractall(\"data\")\n",
"print(\"Done\")\n",
"\n",
"local_data_path = \"./data/bank-additional/bank-additional-full.csv\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload the dataset to Amazon S3\n",
"\n",
"Before you run Autopilot on the dataset, first perform a check of the dataset to make sure that it has no obvious errors. The Autopilot process can take long time, and it's generally a good practice to inspect the dataset before you start a job. This particular dataset is small, so you can inspect it in the notebook instance itself. If you have a larger dataset that will not fit in a notebook instance memory, inspect the dataset offline using a big data analytics tool like Apache Spark. [Deequ](https://github.com/awslabs/deequ) is a library built on top of Apache Spark that can be helpful for performing checks on large datasets. Autopilot is capable of handling datasets up to 5 GB.\n",
"\n",
"Read the data into a Pandas data frame and take a look."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"data = pd.read_csv(local_data_path)\n",
"with pd.option_context(\"display.max_columns\", 500):\n",
" # Make sure we can see all of the columns\n",
" display(data)"
]
},
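{
"cell_type": "markdown",
"metadata": {},
"source": [
"Beyond eyeballing the table, a quick automated sanity check is also worthwhile before launching a long-running job. The cell below is a minimal sketch of such a check (our addition, not an Autopilot requirement), using pandas to report missing values, duplicate rows, and the balance of the target column `y`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Rows, columns:\", data.shape)\n",
"\n",
"# Any missing values? (Autopilot can handle them, but it's good to know)\n",
"missing = data.isnull().sum()\n",
"print(\"\\nMissing values per column:\")\n",
"print(missing[missing > 0] if missing.any() else \"(none)\")\n",
"\n",
"print(\"\\nDuplicate rows:\", data.duplicated().sum())\n",
"\n",
"# Class balance of the target column:\n",
"print(\"\\nTarget balance:\")\n",
"print(data[\"y\"].value_counts(normalize=True))"
]
},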
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that there are 20 features to help predict the target column 'y'.\n",
"\n",
"Amazon SageMaker Autopilot takes care of preprocessing your data for you. You do not need to perform conventional data preprocssing techniques such as handling missing values, converting categorical features to numeric features, scaling data, and handling more complicated data types.\n",
"\n",
"Moreover, splitting the dataset into training and validation splits is not necessary. Autopilot takes care of this for you. You may, however, want to split out a test set. That's next, although you use it for batch inference at the end instead of testing the model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Reserve some data for calling batch inference on the model\n",
"\n",
"Divide the data into training and testing splits. The training split is used by SageMaker Autopilot. The testing split is reserved to perform inference using the suggested model.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_data = data.sample(frac=0.8, random_state=200)\n",
"\n",
"test_data = data.drop(train_data.index)\n",
"\n",
"test_data_no_target = test_data.drop(columns=[\"y\"])"
]
},
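{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick optional check (our addition) that the random split preserved the class balance, we can compare the target distribution in the two splits:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"Train rows: {len(train_data)}, test rows: {len(test_data)}\")\n",
"\n",
"# Side-by-side class proportions in the two splits:\n",
"pd.concat(\n",
"    [\n",
"        train_data[\"y\"].value_counts(normalize=True).rename(\"train\"),\n",
"        test_data[\"y\"].value_counts(normalize=True).rename(\"test\"),\n",
"    ],\n",
"    axis=1,\n",
")"
]
},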
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Upload the dataset to Amazon S3\n",
"Copy the files to Amazon Simple Storage Service (Amazon S3) in CSV format for Amazon SageMaker training to use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_file = \"data/train_data.csv\"\n",
"train_data.to_csv(train_file, index=False, header=True)\n",
"train_data_s3_path = session.upload_data(path=train_file, key_prefix=prefix + \"/train\")\n",
"print(\"Train data uploaded to: \" + train_data_s3_path)\n",
"\n",
"test_file = \"data/test_data.csv\"\n",
"test_data_no_target.to_csv(test_file, index=False, header=False)\n",
"test_data_s3_path = session.upload_data(path=test_file, key_prefix=prefix + \"/test\")\n",
"print(\"Test data uploaded to: \" + test_data_s3_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setting up the SageMaker Autopilot Job\n",
"\n",
"After uploading the dataset to Amazon S3, you can invoke Autopilot to find the best ML pipeline to train a model on this dataset. \n",
"\n",
"The required inputs for invoking a Autopilot job are:\n",
"* Amazon S3 location for input dataset and for all output artifacts\n",
"* Name of the column of the dataset you want to predict (`y` in this case) \n",
"* An IAM role\n",
"\n",
"Currently Autopilot supports only tabular datasets in CSV format. Either all files should have a header row, or the first file of the dataset, when sorted in alphabetical/lexical order, is expected to have a header row."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"from time import sleep\n",
"\n",
"from sagemaker.automl import automl\n",
"\n",
"timestamp_suffix = f\"{datetime.now():%Y-%m-%d-%H-%M-%S}\"\n",
"auto_ml_job_name = f\"automl-banking-{timestamp_suffix}\"\n",
"print(f\"AutoMLJobName: {auto_ml_job_name}\")\n",
"\n",
"automl_job = automl.AutoML(\n",
" role=role,\n",
" target_attribute_name=\"y\",\n",
" output_path=f\"s3://{bucket}/{prefix}/automl-output\",\n",
" problem_type=\"BinaryClassification\",\n",
" max_candidates=10, # (We've set this low to prioritize demo speed over accuracy)\n",
" job_objective={\"MetricName\": \"F1\"},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Specifying the type of problem you want to solve with your dataset (`Regression, MulticlassClassification, BinaryClassification`) is **optional**. In case you are not sure, SageMaker Autopilot will infer the problem type based on statistics of the target column (the column you want to predict). \n",
"\n",
"You have the option to limit the running time of a SageMaker Autopilot job by providing either the maximum number of pipeline evaluations or candidates (one pipeline evaluation is called a `Candidate` because it generates a candidate model) or providing the total time allocated for the overall Autopilot job. Under default settings, this job takes about four hours to run. This varies between runs because of the nature of the exploratory process Autopilot uses to find optimal training parameters."
]
},
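{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a time-budgeted configuration might look like the sketch below. The two extra parameters are from the [SageMaker Python SDK `AutoML` class](https://sagemaker.readthedocs.io/en/stable/api/training/automl.html); constructing the object is free (nothing runs until `fit()` is called), and we don't use this variant in the rest of the notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch: bound the job by time instead of (or as well as) by candidate count\n",
"time_budgeted_job = automl.AutoML(\n",
"    role=role,\n",
"    target_attribute_name=\"y\",\n",
"    output_path=f\"s3://{bucket}/{prefix}/automl-output\",\n",
"    problem_type=\"BinaryClassification\",  # optional - Autopilot can infer it\n",
"    job_objective={\"MetricName\": \"F1\"},\n",
"    max_runtime_per_training_job_in_seconds=600,  # cap each candidate's training\n",
"    total_job_runtime_in_seconds=3600,  # cap the whole Autopilot job at 1 hour\n",
")"
]
},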
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Launching the SageMaker Autopilot Job\n",
"\n",
"You can now launch the Autopilot job by calling the `fit()` method as described in the [SageMaker Python SDK AutoML doc](https://sagemaker.readthedocs.io/en/stable/api/training/automl.html#sagemaker.automl.automl.AutoML.fit)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_job.fit(inputs=train_data_s3_path, wait=False, logs=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Or if you need (e.g. after notebook restart), you can instead 'attach' to previous jobs by name:\n",
"# automl_job = automl.AutoML.attach(\"automl-2022-02-15-13-51-51-239\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tracking SageMaker Autopilot job progress\n",
"SageMaker Autopilot job consists of the following high-level steps : \n",
"* Analyzing Data, where the dataset is analyzed and Autopilot comes up with a list of ML pipelines that should be tried out on the dataset. The dataset is also split into train and validation sets.\n",
"* Feature Engineering, where Autopilot performs feature transformation on individual features of the dataset as well as at an aggregate level.\n",
"* Model Tuning, where the top performing pipeline is selected along with the optimal hyperparameters for the training algorithm (the last stage of the pipeline). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"JobStatus - Secondary Status\\n----------------------------\")\n",
"\n",
"while True:\n",
" sleep(60)\n",
" describe_response = automl_job.describe_auto_ml_job()\n",
" print(\"{AutoMLJobStatus} - {AutoMLJobSecondaryStatus}\".format(**describe_response))\n",
" if describe_response[\"AutoMLJobStatus\"] in (\"Failed\", \"Completed\", \"Stopped\"):\n",
" break"
]
},
{
"cell_type": "markdown",
"metadata": {
"toc-hr-collapsed": true
},
"source": [
"## Results\n",
"\n",
"The Autopilot job is completed, and we now have a set of models with their associated performance metric.\n",
"Let's consider the top 5."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"candidates_list = automl_job.list_candidates(max_results=10, sort_by=\"FinalObjectiveMetricValue\")\n",
"\n",
"models = pd.json_normalize(candidates_list)[\n",
" [\n",
" \"CandidateName\",\n",
" \"FinalAutoMLJobObjectiveMetric.Value\",\n",
" \"FinalAutoMLJobObjectiveMetric.MetricName\",\n",
" ]\n",
"].rename(\n",
" columns={\n",
" \"FinalAutoMLJobObjectiveMetric.Value\": \"metric_value\",\n",
" \"FinalAutoMLJobObjectiveMetric.MetricName\": \"metric_name\",\n",
" \"CandidateName\": \"candidate_name\",\n",
" }\n",
")\n",
"\n",
"models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_job.best_candidate()"
]
},
{
"cell_type": "markdown",
"metadata": {
"toc-hr-collapsed": false
},
"source": [
"### Perform batch inference using the best candidate\n",
"\n",
"Now that you have successfully completed the SageMaker Autopilot job on the dataset, create a model from any of the candidates by using [Inference Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html). \n",
"\n",
"For classification problem types, the inference containers generated by SageMaker Autopilot allow you to select the response content for predictions. Valid inference response content are defined below for binary classification and multiclass classification problem types.\n",
"\n",
"- `predicted_label` - predicted class\n",
"- `probability` - In binary classification, the probability that the result is predicted as the second or True class in the target column. In multiclass classification, the probability of the winning class.\n",
"- `labels` - list of all possible classes\n",
"- `probabilities` - list of all probabilities for all classes (order corresponds with `labels`)\n",
"\n",
"By default the inference contianers are configured to generate the `predicted_label` only.\n",
"\n",
"In this binary classification example we'll request both `predicted_label` and `probability` - demonstrating how this additional \"confidence\" output from the model can be used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"automl-banking-model-\" + timestamp_suffix\n",
"inference_response_keys = [\"predicted_label\", \"probability\"]\n",
"model = automl_job.create_model(\n",
" name=model_name,\n",
" candidate=automl_job.best_candidate(),\n",
" inference_response_keys=inference_response_keys,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use batch inference by using Amazon SageMaker batch transform. The same model can also be deployed to perform online inference using Amazon SageMaker hosting."
]
},
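{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, an online deployment would look roughly like the commented-out sketch below (our illustration; it's commented out because a live endpoint incurs cost until deleted). The rest of the notebook continues with batch transform."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from sagemaker.predictor import Predictor\n",
"# from sagemaker.serializers import CSVSerializer\n",
"\n",
"# predictor = automl_job.deploy(\n",
"#     initial_instance_count=1,\n",
"#     instance_type=\"ml.m5.xlarge\",\n",
"#     candidate=automl_job.best_candidate(),\n",
"#     inference_response_keys=inference_response_keys,\n",
"#     predictor_cls=Predictor,  # so deploy() returns a usable Predictor\n",
"#     serializer=CSVSerializer(),\n",
"# )\n",
"\n",
"# # Send one CSV row (no header, no target column) per request:\n",
"# print(predictor.predict(test_data_no_target.iloc[[0]].to_csv(index=False, header=False)))\n",
"\n",
"# predictor.delete_endpoint()  # remember to clean up!"
]
},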
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"output_path = f\"s3://{bucket}/{prefix}/inference-results/\"\n",
"\n",
"transformer = model.transformer(\n",
" instance_count=1,\n",
" instance_type=\"ml.m5.2xlarge\",\n",
" assemble_with=\"Line\",\n",
" output_path=output_path,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now start the transform job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"transformer_job = transformer.transform(\n",
" data=test_data_s3_path,\n",
" data_type=\"S3Prefix\",\n",
" content_type=\"text/csv\",\n",
" split_type=\"Line\",\n",
" wait=False,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Watch the transform job for completion."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"JobStatus\\n----------\")\n",
"\n",
"while True:\n",
" sleep(30)\n",
" describe_response = sm.describe_transform_job(\n",
" TransformJobName=transformer._current_job_name,\n",
" )\n",
" job_run_status = describe_response[\"TransformJobStatus\"]\n",
" print(job_run_status)\n",
" if job_run_status in (\"Failed\", \"Completed\", \"Stopped\"):\n",
" break"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's view the results of the transform job:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_data_preds = pd.read_csv(\n",
" transformer.output_path + \"test_data.csv.out\",\n",
" header=None,\n",
" names=inference_response_keys,\n",
")\n",
"\n",
"test_data_preds"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Additional metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use the result of the transform job to evaluate additional metrics on the test dataset, using the [scikit-learn](https://scikit-learn.org/stable/index.html) library.\n",
"\n",
"Common metrics for classification problems are [AUC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) and [AP](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"from sklearn.metrics import (\n",
" ConfusionMatrixDisplay,\n",
" average_precision_score,\n",
" classification_report,\n",
" confusion_matrix,\n",
" f1_score,\n",
" precision_recall_curve,\n",
" roc_auc_score,\n",
" roc_curve,\n",
")\n",
"\n",
"labels = test_data[\"y\"]\n",
"AUC = roc_auc_score(labels == \"yes\", test_data_preds.probability)\n",
"AP = average_precision_score(labels, test_data_preds.probability, pos_label=\"yes\")\n",
"\n",
"print(f\"AUC: {AUC:.3f}\\nAP {AP:.3f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also generate a classification report and a [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(classification_report(labels == \"yes\", test_data_preds.predicted_label == \"yes\"))\n",
"cm = confusion_matrix(labels, test_data_preds.predicted_label, labels=[\"yes\", \"no\"])\n",
"ConfusionMatrixDisplay(cm, display_labels=[\"yes\", \"no\"]).plot(\n",
" include_values=[\"yes\", \"no\"], cmap=plt.cm.Blues, values_format=\"d\"\n",
");"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And present the model performance using [ROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) and Precision-Recall curves."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"f, [ax0, ax1] = plt.subplots(1, 2, figsize=(16, 9))\n",
"\n",
"fpr, tpr, _ = roc_curve(labels == \"yes\", test_data_preds.probability)\n",
"ax0.step(fpr, tpr, where=\"post\")\n",
"ax0.fill_between(fpr, tpr, step=\"post\", alpha=0.2, color=\"b\")\n",
"ax0.plot([0, 1], [0, 1], linestyle=\"--\")\n",
"ax0.set_xlabel(\"False Positive Rate\")\n",
"ax0.set_ylabel(\"True Positive Rate\")\n",
"ax0.set_title(\"ROC (Receiver Operating Characteristic)\")\n",
"\n",
"precision, recall, _ = precision_recall_curve(labels == \"yes\", test_data_preds.probability)\n",
"ax1.step(recall, precision, where=\"post\")\n",
"ax1.fill_between(recall, precision, step=\"post\", alpha=0.2, color=\"b\")\n",
"ax1.set_xlabel(\"Recall\")\n",
"ax1.set_ylabel(\"Precision\")\n",
"ax1.set_title(\"Precision-Recall Curve\")\n",
"\n",
"for ax in [ax0, ax1]:\n",
" ax.set_xlim([0, 1])\n",
" ax.set_ylim([0, 1])\n",
" ax.set_aspect(1)\n",
" ax.grid()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exploration and Modelling Notebooks\n",
"\n",
"As well as the results and candidate models themselves, Autopilot generates other artifacts including:\n",
"\n",
"- A **Data Exploration Notebook**: produced during the analysis phase of the job, that helps you identify problems in your dataset.\n",
"- A **Candidate Definitions Notebook**: interactively stepping through the steps taken by Autopilot to define and train candidates, and select the best one.\n",
"- **Supporting Python code**: Including the actual code used for the different pre-processing steps.\n",
"\n",
"To get a good overview of the available assets, we'll download not just the notebooks but the whole output folder:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_job_desc = automl_job.describe_auto_ml_job()\n",
"\n",
"automl_output_s3uri = automl_job_desc[\"OutputDataConfig\"][\"S3OutputPath\"]\n",
"print(f\"Autopilot output:\\n{automl_output_s3uri}\")\n",
"\n",
"print(f\"Downloading to autopilot_output/...\")\n",
"sagemaker.s3.S3Downloader.download(automl_output_s3uri, \"autopilot_output/\")\n",
"print(\"Done\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From this download we can view not just the notebooks, but also other assets like the generated Python code they link to, and pre-processed datasets. Explore the notebooks linked below, but also check out the other contents in the `autopilot_output` folder!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Markdown\n",
"\n",
"candidate_notebook_s3uri = automl_job_desc[\"AutoMLJobArtifacts\"][\n",
" \"CandidateDefinitionNotebookLocation\"\n",
"]\n",
"candidate_notebook_path = \"autopilot_output\" + candidate_notebook_s3uri[len(automl_output_s3uri) :]\n",
"\n",
"print(f\"Candidate definition notebook:\\n{candidate_notebook_s3uri}\")\n",
"print(f\"\\nDownloaded at:\")\n",
"display(Markdown(f\"[{candidate_notebook_path}]({candidate_notebook_path})\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataexp_notebook_s3uri = automl_job_desc[\"AutoMLJobArtifacts\"][\"DataExplorationNotebookLocation\"]\n",
"dataexp_notebook_path = \"autopilot_output\" + dataexp_notebook_s3uri[len(automl_output_s3uri) :]\n",
"\n",
"print(f\"Data exploration notebook:\\n{dataexp_notebook_s3uri}\")\n",
"print(f\"\\nDownloaded at:\")\n",
"display(Markdown(f\"[{dataexp_notebook_path}]({dataexp_notebook_path})\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Best Model Explainability Artifacts\n",
"SageMaker AutoPilot uses SageMaker Clarify to generate an explainability report for the best candidate.\n",
"\n",
"These Clarify artifacts are also available from S3, and were already included in our download above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"explainability_s3uri = automl_job.best_candidate()[\"CandidateProperties\"][\n",
" \"CandidateArtifactLocations\"\n",
"][\"Explainability\"]\n",
"explainability_path = \"autopilot_output\" + explainability_s3uri[len(automl_output_s3uri) :]\n",
"\n",
"print(f\"Explainability artifacts:\\n{explainability_s3uri}\")\n",
"print(f\"\\nDownloaded to folder:\")\n",
"print(f\"{explainability_path}\")"
]
},
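{
"cell_type": "markdown",
"metadata": {},
"source": [
"The exact file layout varies between Autopilot/Clarify versions, but the folder typically contains an `analysis.json` with aggregated feature-importance (SHAP) results. The defensive sketch below (our addition) peeks at that file if it's present, and just lists the folder otherwise:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import os\n",
"\n",
"# Peek at the Clarify analysis file, if it exists in the downloaded folder:\n",
"analysis_file = os.path.join(explainability_path, \"analysis.json\")\n",
"if os.path.isfile(analysis_file):\n",
"    with open(analysis_file) as f:\n",
"        analysis = json.load(f)\n",
"    print(\"Top-level keys:\", list(analysis.keys()))\n",
"    # Where available, 'explanations' holds the global SHAP values:\n",
"    print(json.dumps(analysis.get(\"explanations\", {}), indent=2)[:2000])\n",
"else:\n",
"    print(\"No analysis.json found at\", analysis_file)\n",
"    if os.path.isdir(explainability_path):\n",
"        print(\"Folder contents:\", os.listdir(explainability_path))"
]
},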
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cleanup\n",
"\n",
"The Autopilot job creates many underlying artifacts such as dataset splits, preprocessing scripts, or preprocessed data, etc. This code, when un-commented, deletes them. This operation deletes all the generated models and the auto-generated notebooks as well. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import boto3\n",
"\n",
"# s3 = boto3.resource('s3')\n",
"# bucket = s3.Bucket(bucket)\n",
"\n",
"# job_outputs_prefix = '{}/output/{}'.format(prefix, auto_ml_job_name)\n",
"# bucket.objects.filter(Prefix=job_outputs_prefix).delete()"
]
}
],
"metadata": {
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (Data Science 2.0)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:ap-southeast-1:492261229750:image/sagemaker-data-science-38"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}