{ "cells": [ { "cell_type": "markdown", "id": "a5a5f00d", "metadata": {}, "source": [ "# Tabular regression with Amazon SageMaker AutoGluon-Tabular algorithm" ] }, { "attachments": {}, "cell_type": "markdown", "id": "75a84a95", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "e4a988c0", "metadata": {}, "source": [ "---\n", "This notebook demonstrates the use of Amazon SageMaker [AutoGluon-Tabular](https://auto.gluon.ai/stable/tutorials/tabular_prediction/index.html) algorithm to train and host a tabular regression model. Tabular regression is the task of analyzing the relationship between predictor variables and a response variable in a structured or relational data.\n", "\n", "In this notebook, we demonstrate two use cases of tabular regression models:\n", "\n", "* How to train a tabular model on an example dataset to do regression.\n", "* How to use the trained tabular model to perform inference, i.e., predicting new samples.\n", "\n", "Note: This notebook was tested in Amazon SageMaker Studio on ml.t3.medium instance with Python 3 (Data Science) kernel.\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "20d6932f", "metadata": {}, "source": [ "1. [Set Up](#1.-Set-Up)\n", "2. [Train A Tabular Model on Abalone Dataset](#2.-Train-a-Tabular-Model-on-Abalone-Dataset)\n", " * [Retrieve Training Artifacts](#2.1.-Retrieve-Training-Artifacts)\n", " * [Set Training Parameters](#2.2.-Set-Training-Parameters)\n", " * [Start Training](#2.3.-Start-Training)\n", "3. [Deploy and Run Inference on the Trained Tabular Model](#3.-Deploy-and-Run-Inference-on-the-Trained-Tabular-Model)\n", "4. [Evaluate the Prediction Results Returned from the Endpoint](#4.-Evaluate-the-Prediction-Results-Returned-from-the-Endpoint)" ] }, { "cell_type": "markdown", "id": "b48f84d1", "metadata": {}, "source": [ "## 1. Set Up" ] }, { "cell_type": "markdown", "id": "e64a82bf", "metadata": {}, "source": [ "---\n", "Before executing the notebook, there are some initial steps required for setup. This notebook requires latest version of sagemaker and ipywidgets.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "8f634c19", "metadata": {}, "outputs": [], "source": [ "!pip install sagemaker ipywidgets --upgrade --quiet" ] }, { "cell_type": "markdown", "id": "c640693a", "metadata": {}, "source": [ "\n", "---\n", "To train and host on Amazon SageMaker, we need to setup and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook instance as the AWS account role with SageMaker access. It has necessary permissions, including access to your data in S3.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "ca80c47f", "metadata": {}, "outputs": [], "source": [ "import sagemaker, boto3, json\n", "from sagemaker import get_execution_role\n", "\n", "aws_role = get_execution_role()\n", "aws_region = boto3.Session().region_name\n", "sess = sagemaker.Session()" ] }, { "cell_type": "markdown", "id": "588d7703", "metadata": {}, "source": [ "## 2. 
,
{ "cell_type": "markdown", "id": "588d7703", "metadata": {}, "source": [ "## 2. Train a Tabular Model on Abalone Dataset\n", "\n", "---\n", "\n", "In this demonstration, we will train a tabular algorithm on the [Abalone](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) dataset. The dataset contains eight physical measurements per example, such as length, diameter, and height, which are used to predict the age of abalone. Among the eight physical measurements (features), one is categorical and seven are numerical. The Abalone dataset is downloaded from [LIBSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html).\n", "\n", "Below is a table of the first 5 examples in the Abalone dataset.\n", "\n", "| Target | Feature_0 | Feature_1 | Feature_2 | Feature_3 | Feature_4 | Feature_5 | Feature_6 | Feature_7 |\n", "|:------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|\n", "| 11 | 1 | 0.585 | 0.455 | 0.150 | 0.9870 | 0.4355 | 0.2075 | 0.3100 |\n", "| 5 | 3 | 0.325 | 0.245 | 0.075 | 0.1495 | 0.0605 | 0.0330 | 0.0450 |\n", "| 9 | 3 | 0.580 | 0.420 | 0.140 | 0.7010 | 0.3285 | 0.1020 | 0.2255 |\n", "| 12 | 2 | 0.480 | 0.380 | 0.145 | 0.5900 | 0.2320 | 0.1410 | 0.2300 |\n", "| 11 | 2 | 0.440 | 0.355 | 0.115 | 0.4150 | 0.1585 | 0.0925 | 0.1310 |\n", "\n", "If you want to bring your own dataset, below are the instructions on how the training data should be formatted as input to the model (a code sketch follows the citations below).\n", "\n", "An S3 path should contain two sub-directories, 'train/' and 'validation/' (the latter is optional). Each sub-directory contains a 'data.csv' file (the Abalone dataset used in this example has been prepared and saved in the `training_dataset_s3_path` shown below).\n", "\n", "* The 'data.csv' files under the sub-directories 'train/' and 'validation/' are for training and validation, respectively. The validation data is used to compute a validation score at the end of each training iteration or epoch. Early stopping is applied when the validation score stops improving. If validation data is not provided, a fraction of the training data is randomly sampled to serve as the validation data. The fraction value is selected based on the number of rows in the training data. Default values range from 0.2 at 2,500 rows to 0.01 at 250,000 rows. For details, see the [AutoGluon-Tabular Documentation](https://auto.gluon.ai/stable/api/autogluon.predictor.html#autogluon.tabular.TabularPredictor.fit).\n", "* The first column of 'data.csv' should contain the target variable; the remaining columns should contain the predictor variables (features).\n", "* All categorical and numeric features, as well as the target, can be kept in their original format.\n", "\n", "\n", "Citations:\n", "\n", "- Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science\n" ] }
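,
{ "cell_type": "markdown", "id": "2f3a4b5c", "metadata": {}, "source": [ "---\n", "For illustration, below is a minimal sketch of preparing your own dataset in the layout described above. It assumes a local file named `my_data.csv` whose first column is the target; the file name and the S3 prefix `my-tabular-data` are placeholders, so replace them with your own.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "3a4b5c6d", "metadata": {}, "outputs": [], "source": [ "# A minimal sketch (not used by the rest of this demo): upload your own CSV\n", "# into the expected 'train/' layout. 'my_data.csv' and 'my-tabular-data' are\n", "# placeholder names.\n", "import pandas as pd\n", "\n", "# my_data = pd.read_csv(\"my_data.csv\")  # target in the first column, features after it\n", "# my_data.to_csv(\"data.csv\", header=False, index=False)  # drop the header row and index\n", "\n", "# my_bucket = sess.default_bucket()  # or any S3 bucket you own\n", "# my_prefix = \"my-tabular-data\"\n", "# boto3.client(\"s3\").upload_file(\"data.csv\", my_bucket, f\"{my_prefix}/train/data.csv\")\n", "\n", "# The resulting S3 path, f\"s3://{my_bucket}/{my_prefix}\", can then replace\n", "# training_dataset_s3_path in the estimator's fit() call below." ] }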
,
{ "cell_type": "markdown", "id": "411468dc", "metadata": {}, "source": [ "### 2.1. Retrieve Training Artifacts\n", "\n", "---\n", "\n", "Here, we retrieve the training docker container, the training algorithm source, and the pre-trained model artifacts. Note that `model_version=\"*\"` fetches the latest model.\n", "\n", "For the training algorithm, we have one choice in this demonstration.\n", "* [AutoGluon-Tabular](https://auto.gluon.ai/stable/tutorials/tabular_prediction/index.html): To use this algorithm, specify `train_model_id` as `autogluon-regression-ensemble` in the cell below.\n", "\n", "Note. [LightGBM](https://lightgbm.readthedocs.io/en/latest/) (`train_model_id: lightgbm-regression-model`), [CatBoost](https://catboost.ai/en/docs/) (`train_model_id: catboost-regression-model`), [XGBoost](https://xgboost.readthedocs.io/en/latest/) (`train_model_id: xgboost-regression-model`), [Linear Learner](https://scikit-learn.org/stable/modules/linear_model.html) (`train_model_id: sklearn-regression-linear`), and [TabTransformer](https://arxiv.org/abs/2012.06678) (`train_model_id: pytorch-tabtransformerregression-ensemble`) are the other choices in the tabular regression category. Since they have different input-format requirements, please check the separate notebooks `lightgbm_catboost_tabular/Amazon_Tabular_Regression_LightGBM_CatBoost.ipynb`, `xgboost_linear_learner_tabular/Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb`, and `tabtransformer_tabular/Amazon_Tabular_Regression_TabTransformer.ipynb` for details.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "72ff6773", "metadata": {}, "outputs": [], "source": [ "from sagemaker import image_uris, model_uris, script_uris\n", "\n", "train_model_id, train_model_version, train_scope = \"autogluon-regression-ensemble\", \"*\", \"training\"\n", "\n", "training_instance_type = \"ml.p3.2xlarge\"\n", "\n", "# Retrieve the docker image\n", "train_image_uri = image_uris.retrieve(\n", "    region=None,\n", "    framework=None,\n", "    model_id=train_model_id,\n", "    model_version=train_model_version,\n", "    image_scope=train_scope,\n", "    instance_type=training_instance_type,\n", ")\n", "# Retrieve the training script\n", "train_source_uri = script_uris.retrieve(\n", "    model_id=train_model_id, model_version=train_model_version, script_scope=train_scope\n", ")\n", "# Retrieve the pre-trained model tarball. For tabular algorithms, however, the\n", "# pre-trained model tarball is a dummy placeholder and \"fine-tuning\" means training from scratch.\n", "train_model_uri = model_uris.retrieve(\n", "    model_id=train_model_id, model_version=train_model_version, model_scope=train_scope\n", ")" ] },
{ "cell_type": "markdown", "id": "bcfd1a43", "metadata": {}, "source": [ "### 2.2. Set Training Parameters\n", "\n", "---\n", "Now that we are done with all the setup that is needed, we are ready to train our tabular algorithm. To begin, let us create a [`sagemaker.estimator.Estimator`](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) object. This estimator will launch the training job.\n", "\n", "There are two kinds of parameters that need to be set for training. The first kind comprises the parameters of the training job itself: (i) the training data path, i.e., the S3 folder in which the input data is stored; (ii) the output path, i.e., the S3 folder in which the training output is stored; and (iii) the training instance type, i.e., the type of machine on which to run the training.\n", "\n", "The second kind comprises the algorithm-specific training hyperparameters.\n", "\n", "---" ] }
\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "8e2c3c20", "metadata": {}, "outputs": [], "source": [ "# Sample training data is available in this bucket\n", "training_data_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n", "training_data_prefix = \"training-datasets/tabular_regress/\"\n", "\n", "training_dataset_s3_path = f\"s3://{training_data_bucket}/{training_data_prefix}\"\n", "\n", "output_bucket = sess.default_bucket()\n", "output_prefix = \"jumpstart-example-tabular-training\"\n", "\n", "s3_output_location = f\"s3://{output_bucket}/{output_prefix}/output\"" ] }, { "cell_type": "markdown", "id": "7ed97c2c", "metadata": {}, "source": [ "---\n", "For algorithm specific hyper-parameters, we start by fetching python dictionary of the training hyper-parameters that the algorithm accepts with their default values. This can then be overridden to custom values.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "540f822c", "metadata": {}, "outputs": [], "source": [ "from sagemaker import hyperparameters\n", "\n", "# Retrieve the default hyper-parameters for training the model\n", "hyperparameters = hyperparameters.retrieve_default(\n", " model_id=train_model_id, model_version=train_model_version\n", ")\n", "\n", "# [Optional] Override default hyperparameters with custom values\n", "hyperparameters[\"auto_stack\"] = \"True\"\n", "print(hyperparameters)" ] }, { "cell_type": "markdown", "id": "b765ed27", "metadata": {}, "source": [ "### 2.3. Start Training" ] }, { "cell_type": "markdown", "id": "87a052d8", "metadata": {}, "source": [ "---\n", "We start by creating the estimator object with all the required assets and then launch the training job.\n", "Note. We do not use hyperparameter tuning for AutoGluon models because [AutoGluon](https://arxiv.org/abs/2003.06505) succeeds by ensembling multiple models and stacking them in multiple layers rather than focusing on model/hyperparameter selection.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "feb39f11", "metadata": {}, "outputs": [], "source": [ "from sagemaker.estimator import Estimator\n", "from sagemaker.utils import name_from_base\n", "\n", "training_job_name = name_from_base(f\"jumpstart-example-{train_model_id}-training\")\n", "\n", "# Create SageMaker Estimator instance\n", "tabular_estimator = Estimator(\n", " role=aws_role,\n", " image_uri=train_image_uri,\n", " source_dir=train_source_uri,\n", " model_uri=train_model_uri,\n", " entry_point=\"transfer_learning.py\",\n", " instance_count=1,\n", " instance_type=training_instance_type,\n", " max_run=360000,\n", " hyperparameters=hyperparameters,\n", " output_path=s3_output_location,\n", ")\n", "\n", "# Launch a SageMaker Training job by passing s3 path of the training data\n", "tabular_estimator.fit({\"training\": training_dataset_s3_path}, logs=True, job_name=training_job_name)" ] }, { "cell_type": "markdown", "id": "5fa5c235", "metadata": {}, "source": [ "## 3. Deploy and Run Inference on the Trained Tabular Model\n", "\n", "---\n", "\n", "In this section, you learn how to query an existing endpoint and make predictions of the examples you input. 
,
{ "cell_type": "markdown", "id": "5fa5c235", "metadata": {}, "source": [ "## 3. Deploy and Run Inference on the Trained Tabular Model\n", "\n", "---\n", "\n", "In this section, you learn how to query an existing endpoint to make predictions on the examples you provide as input. For each example, the model outputs a numerical value that estimates the corresponding target value.\n", "\n", "We start by retrieving the inference artifacts and deploying the `tabular_estimator` that we trained.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "cfcbb4a3", "metadata": {}, "outputs": [], "source": [ "inference_instance_type = \"ml.m5.2xlarge\"\n", "\n", "# Retrieve the inference docker container uri\n", "deploy_image_uri = image_uris.retrieve(\n", "    region=None,\n", "    framework=None,\n", "    image_scope=\"inference\",\n", "    model_id=train_model_id,\n", "    model_version=train_model_version,\n", "    instance_type=inference_instance_type,\n", ")\n", "# Retrieve the inference script uri\n", "deploy_source_uri = script_uris.retrieve(\n", "    model_id=train_model_id, model_version=train_model_version, script_scope=\"inference\"\n", ")\n", "\n", "endpoint_name = name_from_base(f\"jumpstart-example-{train_model_id}-\")\n", "\n", "# Use the estimator from the previous step to deploy to a SageMaker endpoint\n", "predictor = tabular_estimator.deploy(\n", "    initial_instance_count=1,\n", "    instance_type=inference_instance_type,\n", "    entry_point=\"inference.py\",\n", "    image_uri=deploy_image_uri,\n", "    source_dir=deploy_source_uri,\n", "    endpoint_name=endpoint_name,\n", ")" ] },
{ "cell_type": "markdown", "id": "eb448fa6", "metadata": {}, "source": [ "---\n", "Next, we download the hold-out Abalone test data from the S3 bucket for inference.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "5279b308", "metadata": {}, "outputs": [], "source": [ "jumpstart_assets_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n", "test_data_prefix = \"training-datasets/tabular_regress/test\"\n", "test_data_file_name = \"data.csv\"\n", "\n", "boto3.client(\"s3\").download_file(\n", "    jumpstart_assets_bucket, f\"{test_data_prefix}/{test_data_file_name}\", test_data_file_name\n", ")" ] }
,
{ "cell_type": "markdown", "id": "2b3bfd84", "metadata": {}, "source": [ "---\n", "Next, we read the Abalone test data into a pandas DataFrame and prepare the ground truth targets and the predictor features to send to the endpoint.\n", "\n", "Below we show the first 5 examples in the Abalone test set. All of the test examples, with features ```Feature_1``` through ```Feature_8```, are sent to the deployed model to obtain predictions that estimate the ground truth ```Target``` column.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "a301a885", "metadata": {}, "outputs": [], "source": [ "newline, bold, unbold = \"\\n\", \"\\033[1m\", \"\\033[0m\"\n", "\n", "import numpy as np\n", "import pandas as pd\n", "from sklearn.metrics import mean_absolute_error\n", "from sklearn.metrics import mean_squared_error\n", "from sklearn.metrics import r2_score\n", "import matplotlib.pyplot as plt\n", "\n", "# Read the data\n", "test_data = pd.read_csv(test_data_file_name, header=None)\n", "test_data.columns = [\"Target\"] + [f\"Feature_{i}\" for i in range(1, test_data.shape[1])]\n", "\n", "num_examples, num_columns = test_data.shape\n", "print(\n", "    f\"{bold}The test dataset contains {num_examples} examples and {num_columns} columns.{unbold}\\n\"\n", ")\n", "\n", "# Prepare the ground truth targets and the predictor features to send to the endpoint\n", "ground_truth_label, features = test_data.iloc[:, :1], test_data.iloc[:, 1:]\n", "\n", "print(\n", "    f\"{bold}The first 5 observations of the test data: {unbold}\"\n", ")  # Feature_1 is the categorical variable; the remaining features are numeric\n", "test_data.head(5)" ] },
{ "cell_type": "markdown", "id": "cde0cd66", "metadata": {}, "source": [ "---\n", "The following code queries the endpoint you have created to get the prediction for each test example. The `query_endpoint()` function sends the CSV-encoded examples to the endpoint, and the `parse_response()` function parses the response into an array-like of shape (num_examples,).\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "6b311948", "metadata": {}, "outputs": [], "source": [ "content_type = \"text/csv\"\n", "\n", "\n", "def query_endpoint(encoded_tabular_data):\n", "    client = boto3.client(\"runtime.sagemaker\")\n", "    response = client.invoke_endpoint(\n", "        EndpointName=endpoint_name, ContentType=content_type, Body=encoded_tabular_data\n", "    )\n", "    return response\n", "\n", "\n", "def parse_response(query_response):\n", "    predictions = json.loads(query_response[\"Body\"].read())\n", "    return np.array(predictions[\"prediction\"])\n", "\n", "\n", "query_response = query_endpoint(features.to_csv(header=False, index=False).encode(\"utf-8\"))\n", "model_predictions = parse_response(query_response)" ] }
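,
{ "cell_type": "markdown", "id": "6d7e8f9a", "metadata": {}, "source": [ "---\n", "Real-time endpoints cap the request payload size (about 6 MB per invocation), so sending the whole test set in a single request only works for small datasets like this one. For larger datasets, a sketch like the following queries the endpoint in batches; the batch size here is an arbitrary illustrative choice.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "7e8f9a0b", "metadata": {}, "outputs": [], "source": [ "# A minimal sketch: query the endpoint in batches so each request stays small.\n", "# The batch size is an arbitrary illustrative choice.\n", "batch_size = 1000\n", "batched_predictions = []\n", "for start in range(0, len(features), batch_size):\n", "    batch = features.iloc[start : start + batch_size]\n", "    response = query_endpoint(batch.to_csv(header=False, index=False).encode(\"utf-8\"))\n", "    batched_predictions.append(parse_response(response))\n", "batched_predictions = np.concatenate(batched_predictions)\n", "\n", "# The batched predictions line up with the single-request predictions above\n", "assert batched_predictions.shape == model_predictions.shape" ] }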
,
{ "cell_type": "markdown", "id": "a675cb65", "metadata": {}, "source": [ "## 4. Evaluate the Prediction Results Returned from the Endpoint\n", "\n", "---\n", "We evaluate the prediction results returned from the endpoint in the following two ways.\n", "\n", "* Visualize the prediction results with a residual plot that compares the model predictions against the ground truth targets.\n", "\n", "* Measure the quality of the predictions quantitatively.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "f0629bab", "metadata": {}, "outputs": [], "source": [ "# Visualization: a residual plot comparing the model predictions against the ground truth targets.\n", "# For each example, the residual is the ground truth target minus the prediction.\n", "# The points in the residual plot are randomly dispersed around the horizontal axis y = 0,\n", "# which indicates that the fitted regression model is appropriate for the Abalone data\n", "\n", "residuals = ground_truth_label.values[:, 0] - model_predictions\n", "plt.scatter(model_predictions, residuals, color=\"blue\", s=40)\n", "plt.hlines(y=0, xmin=4, xmax=18)\n", "plt.xlabel(\"Predicted Values\", fontsize=18)\n", "plt.ylabel(\"Residuals\", fontsize=18)\n", "plt.show()" ] },
{ "cell_type": "code", "execution_count": null, "id": "9f523a52", "metadata": {}, "outputs": [], "source": [ "# Evaluate the model predictions quantitatively\n", "eval_r2_score = r2_score(ground_truth_label.values, model_predictions)\n", "eval_mse_score = mean_squared_error(ground_truth_label.values, model_predictions)\n", "eval_mae_score = mean_absolute_error(ground_truth_label.values, model_predictions)\n", "print(\n", "    f\"{bold}Evaluation result on test data{unbold}:{newline}\"\n", "    f\"{bold}{r2_score.__name__}{unbold}: {eval_r2_score}{newline}\"\n", "    f\"{bold}{mean_squared_error.__name__}{unbold}: {eval_mse_score}{newline}\"\n", "    f\"{bold}{mean_absolute_error.__name__}{unbold}: {eval_mae_score}{newline}\"\n", ")" ] },
{ "cell_type": "markdown", "id": "c07e16bb", "metadata": {}, "source": [ "---\n", "Next, we delete the endpoint corresponding to the trained model.\n", "\n", "---" ] },
{ "cell_type": "code", "execution_count": null, "id": "0b18b7e1", "metadata": {}, "outputs": [], "source": [ "# Delete the SageMaker endpoint and the attached resources\n", "predictor.delete_model()\n", "predictor.delete_endpoint()" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "3e7408cd", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2, which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Regression_AutoGluon.ipynb)\n" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 5 }