{ "cells": [ { "cell_type": "markdown", "id": "8c2218f9", "metadata": {}, "source": [ "# Tabular classification with Amazon SageMaker AutoGluon-Tabular algorithm" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f893dccd", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "ef998835", "metadata": {}, "source": [ "---\n", "This notebook demonstrates the use of Amazon SageMaker [AutoGluon-Tabular](https://auto.gluon.ai/stable/tutorials/tabular_prediction/index.html) algorithm to train and host a tabular binary classification model. Tabular classification is the task of assigning a class to an example of structured or relational data. The Amazon SageMaker API for tabular classification can be used for classification of an example in two classes (binary classification) or more than two classes (multi-class classification).\n", "\n", "\n", "In this notebook, we demonstrate two use cases of tabular classification models:\n", "\n", "* How to train a tabular model on an example dataset to do binary classification.\n", "* How to use the trained tabular model to perform inference, i.e., classifying new samples.\n", "\n", "Note: This notebook was tested in Amazon SageMaker Studio on ml.t3.medium instance with Python 3 (Data Science) kernel.\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "fa8b34fe", "metadata": {}, "source": [ "1. [Set Up](#1.-Set-Up)\n", "2. [Train A Tabular Model on Adult Dataset](#2.-Train-a-Tabular-Model-on-Adult-Dataset)\n", " * [Retrieve Training Artifacts](#2.1.-Retrieve-Training-Artifacts)\n", " * [Set Training Parameters](#2.2.-Set-Training-Parameters)\n", " * [Start Training](#2.3.-Start-Training)\n", "3. [Deploy and Run Inference on the Trained Tabular Model](#3.-Deploy-and-Run-Inference-on-the-Trained-Tabular-Model)\n", "4. [Evaluate the Prediction Results Returned from the Endpoint](#4.-Evaluate-the-Prediction-Results-Returned-from-the-Endpoint)" ] }, { "cell_type": "markdown", "id": "d4eaf3c2", "metadata": {}, "source": [ "## 1. Set Up\n", "\n", "---\n", "Before executing the notebook, there are some initial steps required for setup. This notebook requires latest version of sagemaker and ipywidgets.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "003e4758", "metadata": {}, "outputs": [], "source": [ "!pip install sagemaker ipywidgets --upgrade --quiet" ] }, { "cell_type": "markdown", "id": "6f5f8ae5", "metadata": {}, "source": [ "\n", "---\n", "To train and host on Amazon SageMaker, we need to setup and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook instance as the AWS account role with SageMaker access. 
It has the necessary permissions, including access to your data in S3.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "ade4907f", "metadata": {}, "outputs": [], "source": [ "import sagemaker, boto3, json\n", "from sagemaker import get_execution_role\n", "\n", "aws_role = get_execution_role()\n", "aws_region = boto3.Session().region_name\n", "sess = sagemaker.Session()" ] }, { "cell_type": "markdown", "id": "782a079f", "metadata": {}, "source": [ "## 2. Train a Tabular Model on Adult Dataset\n", "\n", "---\n", "In this demonstration, we will train a tabular algorithm on the [Adult](https://archive.ics.uci.edu/ml/datasets/adult) dataset. The dataset contains examples of census data used to predict whether a person makes over 50K a year or not. The Adult dataset is downloaded from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/adult).\n", "\n", "Below is the table of the first 5 examples in the Adult dataset.\n", "\n", "| Target | Feature_0 | Feature_1 | Feature_2 | Feature_3 | Feature_4 | ... | Feature_10 | Feature_11 | Feature_12 | Feature_13 |\n", "|:------:|:---------:|:---------:|:---------:|:------------:|:---------:|:----:|:----------:|:----------:|:----------:|:----------------:|\n", "| 0 | 25 | Private | 226802 | 11th | 7 | ... | 0 | 0 | 40 | United-States |\n", "| 0 | 38 | Private | 89814 | HS-grad | 9 | ... | 0 | 0 | 50 | United-States |\n", "| 1 | 28 | Local-gov | 336951 | Assoc-acdm | 12 | ... | 0 | 0 | 40 | United-States |\n", "| 1 | 44 | Private | 160323 | Some-college | 10 | ... | 7688 | 0 | 40 | United-States |\n", "| 0 | 18 | ? | 103497 | Some-college | 10 | ... | 0 | 0 | 30 | United-States |\n", "\n", "\n", "If you want to bring your own dataset, below are the instructions on how the training data should be formatted as input to the model.\n", "\n", "An S3 path should contain two sub-directories, 'train/' and 'validation/' (optional). Each sub-directory contains a 'data.csv' file (the Adult dataset used in this example has been prepared and saved in `training_dataset_s3_path`, shown below).\n", "\n", "* The 'data.csv' files under the sub-directories 'train/' and 'validation/' are for training and validation, respectively. The validation data is used to compute a validation score at the end of each training iteration or epoch. Early stopping is applied when the validation score stops improving. If the validation data is not provided, a fraction of the training data is randomly sampled to serve as the validation data. The fraction value is selected based on the number of rows in the training data. Default values range from 0.2 at 2,500 rows to 0.01 at 250,000 rows. For details, see the [AutoGluon-Tabular Documentation](https://auto.gluon.ai/stable/api/autogluon.predictor.html#autogluon.tabular.TabularPredictor.fit).\n", "* The first column of 'data.csv' should contain the target variable. The remaining columns should contain the predictor variables (features).\n", "* All categorical and numeric features, as well as the target, can be kept in their original formats.\n", "\n", "A minimal sketch of preparing and uploading a dataset in this format is shown after this cell.\n", "\n", "Citations:\n", "\n", "- Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science." ] },
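{ "cell_type": "markdown", "id": "1a2b3c4d", "metadata": {}, "source": [ "---\n", "Optional: the following is a minimal, illustrative sketch of preparing and uploading your own dataset in the required format. It is not needed for this demonstration; the DataFrame `my_df`, its column names, the local path, and the S3 key prefix are all hypothetical placeholders.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "2b3c4d5e", "metadata": {}, "outputs": [], "source": [ "# Sketch only: `my_df` is a hypothetical stand-in for your own pandas DataFrame,\n", "# with the target values in a column named \"target\".\n", "import os\n", "\n", "import pandas as pd\n", "\n", "my_df = pd.DataFrame(\n", " {\"target\": [0, 1, 0], \"age\": [25, 38, 28], \"workclass\": [\"Private\", \"Private\", \"Local-gov\"]}\n", ")\n", "\n", "# Move the target to the first column, as the algorithm expects.\n", "my_df = my_df[[\"target\"] + [c for c in my_df.columns if c != \"target\"]]\n", "\n", "# Write 'train/data.csv' (no header or index, matching the example dataset) locally.\n", "os.makedirs(\"my_dataset/train\", exist_ok=True)\n", "my_df.to_csv(\"my_dataset/train/data.csv\", header=False, index=False)\n", "\n", "# Upload the directory to the session's default S3 bucket; upload_data returns the S3 URI.\n", "custom_dataset_s3_path = sess.upload_data(path=\"my_dataset\", key_prefix=\"my-custom-tabular-dataset\")\n", "print(custom_dataset_s3_path)" ] },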
{ "cell_type": "markdown", "id": "bcd6302d", "metadata": {}, "source": [ "### 2.1. Retrieve Training Artifacts\n", "\n", "---\n", "\n", "Here, we retrieve the training Docker container, the training script, and the pre-trained model artifacts for the tabular algorithm. Note that `model_version=\"*\"` fetches the latest version of the model.\n", "\n", "For the training algorithm, we have one choice in this demonstration.\n", "* [AutoGluon-Tabular](https://auto.gluon.ai/stable/tutorials/tabular_prediction/index.html): To use this algorithm, specify `train_model_id` as `autogluon-classification-ensemble` in the cell below.\n", "\n", "Note: [LightGBM](https://lightgbm.readthedocs.io/en/latest/) (`train_model_id: lightgbm-classification-model`), [CatBoost](https://catboost.ai/en/docs/) (`train_model_id: catboost-classification-model`), [XGBoost](https://xgboost.readthedocs.io/en/latest/) (`train_model_id: xgboost-classification-model`), [Linear Learner](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression) (`train_model_id: sklearn-classification-linear`), and [TabTransformer](https://arxiv.org/abs/2012.06678) (`train_model_id: pytorch-tabtransformerclassification-ensemble`) are the other choices in the tabular classification category. Since they have different input-format requirements, please check the separate notebooks `lightgbm_catboost_tabular/Amazon_Tabular_Classification_LightGBM_CatBoost.ipynb`, `xgboost_linear_learner_tabular/Amazon_Tabular_Classification_XGBoost_LinearLearner.ipynb`, and `tabtransformer_tabular/Amazon_Tabular_Classification_TabTransformer.ipynb` for details. A sketch of programmatically listing the available model IDs follows this cell.\n", "\n", "\n", "---" ] },
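{ "cell_type": "markdown", "id": "3c4d5e6f", "metadata": {}, "source": [ "---\n", "Optional: below is a small sketch of discovering model IDs programmatically. It assumes a recent `sagemaker` SDK version that exposes `sagemaker.jumpstart.notebook_utils.list_jumpstart_models`; the substring match on `\"classification\"` is a convenience heuristic, not an official taxonomy.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "4d5e6f7a", "metadata": {}, "outputs": [], "source": [ "from sagemaker.jumpstart.notebook_utils import list_jumpstart_models\n", "\n", "# List all JumpStart model IDs, then keep those whose ID mentions classification.\n", "all_model_ids = list_jumpstart_models()\n", "print([m for m in all_model_ids if \"classification\" in m])" ] },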
\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "3a60a885", "metadata": {}, "outputs": [], "source": [ "# Sample training data is available in this bucket\n", "training_data_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n", "training_data_prefix = \"training-datasets/tabular_binary/\"\n", "\n", "training_dataset_s3_path = f\"s3://{training_data_bucket}/{training_data_prefix}\"\n", "\n", "output_bucket = sess.default_bucket()\n", "output_prefix = \"jumpstart-example-tabular-training\"\n", "\n", "s3_output_location = f\"s3://{output_bucket}/{output_prefix}/output\"" ] }, { "cell_type": "markdown", "id": "ba07adda", "metadata": {}, "source": [ "---\n", "For algorithm specific hyper-parameters, we start by fetching python dictionary of the training hyper-parameters that the algorithm accepts with their default values. This can then be overridden to custom values.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "e426b631", "metadata": {}, "outputs": [], "source": [ "from sagemaker import hyperparameters\n", "\n", "# Retrieve the default hyper-parameters for fine-tuning the model\n", "hyperparameters = hyperparameters.retrieve_default(\n", " model_id=train_model_id, model_version=train_model_version\n", ")\n", "\n", "# [Optional] Override default hyperparameters with custom values\n", "hyperparameters[\"auto_stack\"] = \"True\"\n", "print(hyperparameters)" ] }, { "cell_type": "markdown", "id": "e6d81994", "metadata": {}, "source": [ "### 2.3. Start Training" ] }, { "cell_type": "markdown", "id": "d00983ba", "metadata": {}, "source": [ "---\n", "We start by creating the estimator object with all the required assets and then launch the training job.\n", "Note. We do not use hyperparameter tuning for AutoGluon models because [AutoGluon](https://arxiv.org/abs/2003.06505) succeeds by ensembling multiple models and stacking them in multiple layers rather than focusing on model/hyperparameter selection.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "69bb3c7d", "metadata": {}, "outputs": [], "source": [ "from sagemaker.estimator import Estimator\n", "from sagemaker.utils import name_from_base\n", "\n", "training_job_name = name_from_base(f\"jumpstart-example-{train_model_id}-training\")\n", "\n", "# Create SageMaker Estimator instance\n", "tabular_estimator = Estimator(\n", " role=aws_role,\n", " image_uri=train_image_uri,\n", " source_dir=train_source_uri,\n", " model_uri=train_model_uri,\n", " entry_point=\"transfer_learning.py\",\n", " instance_count=1,\n", " instance_type=training_instance_type,\n", " max_run=360000,\n", " hyperparameters=hyperparameters,\n", " output_path=s3_output_location,\n", ")\n", "\n", "\n", "# Launch a SageMaker Training job by passing s3 path of the training data\n", "tabular_estimator.fit({\"training\": training_dataset_s3_path}, logs=True, job_name=training_job_name)" ] }, { "cell_type": "markdown", "id": "2fb57b46", "metadata": {}, "source": [ "## 3. Deploy and Run Inference on the Trained Tabular Model\n", "\n", "---\n", "\n", "In this section, you learn how to query an existing endpoint and make predictions of the examples you input. For each example, the model will output the probability of the sample for each class in the model. \n", "Next, the predicted class label is obtained by taking the class label with the maximum probability over others. 
{ "cell_type": "markdown", "id": "2fb57b46", "metadata": {}, "source": [ "## 3. Deploy and Run Inference on the Trained Tabular Model\n", "\n", "---\n", "\n", "In this section, you learn how to query an existing endpoint and make predictions for the examples you input. For each example, the model outputs the probability of the example belonging to each class in the model. \n", "Next, the predicted class label is obtained by taking the class label with the maximum probability. Throughout the notebook, the examples are taken from the [Adult](https://archive.ics.uci.edu/ml/datasets/adult) test set.\n", "The dataset contains examples of census data used to predict whether a person makes over 50K a year or not.\n", "\n", "\n", "We start by retrieving the inference artifacts and deploying the `tabular_estimator` that we trained. An optional endpoint status check is sketched after the deployment cell.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "08f4cb6d", "metadata": {}, "outputs": [], "source": [ "inference_instance_type = \"ml.m5.2xlarge\"\n", "\n", "# Retrieve the inference docker container uri\n", "deploy_image_uri = image_uris.retrieve(\n", " region=None,\n", " framework=None,\n", " image_scope=\"inference\",\n", " model_id=train_model_id,\n", " model_version=train_model_version,\n", " instance_type=inference_instance_type,\n", ")\n", "# Retrieve the inference script uri\n", "deploy_source_uri = script_uris.retrieve(\n", " model_id=train_model_id, model_version=train_model_version, script_scope=\"inference\"\n", ")\n", "\n", "endpoint_name = name_from_base(f\"jumpstart-example-{train_model_id}-\")\n", "\n", "# Use the estimator from the previous step to deploy to a SageMaker endpoint\n", "predictor = tabular_estimator.deploy(\n", " initial_instance_count=1,\n", " instance_type=inference_instance_type,\n", " entry_point=\"inference.py\",\n", " image_uri=deploy_image_uri,\n", " source_dir=deploy_source_uri,\n", " endpoint_name=endpoint_name,\n", ")" ] },
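{ "cell_type": "markdown", "id": "7a8b9c0d", "metadata": {}, "source": [ "---\n", "Optional: `deploy()` above blocks until the endpoint is in service, so this check is redundant here; it is a small sketch of verifying endpoint status with the low-level boto3 client, which is useful if you return to an endpoint created earlier.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "8b9c0d1e", "metadata": {}, "outputs": [], "source": [ "# Sketch: confirm the endpoint is 'InService' before sending requests.\n", "sm_client = boto3.client(\"sagemaker\")\n", "endpoint_status = sm_client.describe_endpoint(EndpointName=endpoint_name)[\"EndpointStatus\"]\n", "print(f\"Endpoint {endpoint_name} status: {endpoint_status}\")" ] },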
\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "33f97980", "metadata": {}, "outputs": [], "source": [ "newline, bold, unbold = \"\\n\", \"\\033[1m\", \"\\033[0m\"\n", "\n", "import numpy as np\n", "import pandas as pd\n", "from sklearn.metrics import accuracy_score\n", "from sklearn.metrics import f1_score\n", "from sklearn.metrics import confusion_matrix\n", "import matplotlib.pyplot as plt\n", "\n", "# read the data\n", "test_data = pd.read_csv(test_data_file_name, header=None)\n", "test_data.columns = [\"Target\"] + [f\"Feature_{i}\" for i in range(1, test_data.shape[1])]\n", "\n", "num_examples, num_columns = test_data.shape\n", "print(\n", " f\"{bold}The test dataset contains {num_examples} examples and {num_columns} columns.{unbold}\\n\"\n", ")\n", "\n", "# prepare the ground truth target and predicting features to send into the endpoint.\n", "ground_truth_label, features = test_data.iloc[:, :1], test_data.iloc[:, 1:]\n", "\n", "print(f\"{bold}The first 5 observations of the data: {unbold} \\n\")\n", "test_data.head(5)" ] }, { "cell_type": "markdown", "id": "003a2d1c", "metadata": {}, "source": [ "---\n", "The following code queries the endpoint you have created to get the prediction for each test example. \n", "The `query_endpoint()` function returns an array-like of shape (num_examples, num_classes), where each row indicates\n", "the probability of the example for each class in the model. The num_classes is 2 in above test data.\n", "Next, the predicted class label is obtained by taking the class label with the maximum probability over others for each example. \n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "cbc8fd83", "metadata": {}, "outputs": [], "source": [ "content_type = \"text/csv\"\n", "\n", "\n", "def query_endpoint(encoded_tabular_data):\n", " # endpoint_name = endpoint_name\n", " client = boto3.client(\"runtime.sagemaker\")\n", " response = client.invoke_endpoint(\n", " EndpointName=endpoint_name, ContentType=content_type, Body=encoded_tabular_data\n", " )\n", " return response\n", "\n", "\n", "def parse_response(query_response):\n", " model_predictions = json.loads(query_response[\"Body\"].read())\n", " predicted_probabilities = model_predictions[\"probabilities\"]\n", " return np.array(predicted_probabilities)\n", "\n", "\n", "# split the test data into smaller size of batches to query the endpoint due to the large size of test data.\n", "batch_size = 1500\n", "predict_prob = []\n", "for i in np.arange(0, num_examples, step=batch_size):\n", " query_response_batch = query_endpoint(\n", " features.iloc[i : (i + batch_size), :].to_csv(header=False, index=False).encode(\"utf-8\")\n", " )\n", " predict_prob_batch = parse_response(query_response_batch) # prediction probability per batch\n", " predict_prob.append(predict_prob_batch)\n", "\n", "\n", "predict_prob = np.concatenate(predict_prob, axis=0)\n", "predict_label = np.argmax(predict_prob, axis=1)" ] }, { "cell_type": "markdown", "id": "d0224fb1", "metadata": {}, "source": [ "## 4. 
{ "cell_type": "markdown", "id": "d0224fb1", "metadata": {}, "source": [ "## 4. Evaluate the Prediction Results Returned from the Endpoint\n", "\n", "---\n", "We evaluate the prediction results returned from the endpoint in the following two ways.\n", "\n", "* Visualize the prediction results by plotting the confusion matrix.\n", "\n", "* Measure the prediction results quantitatively.\n", "\n", "Since the endpoint also returns class probabilities, a sketch of a probability-based metric (ROC AUC) follows the metrics cell.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "03d331de", "metadata": {}, "outputs": [], "source": [ "# Visualize the prediction results by plotting the confusion matrix.\n", "conf_matrix = confusion_matrix(y_true=ground_truth_label.values, y_pred=predict_label)\n", "fig, ax = plt.subplots(figsize=(7.5, 7.5))\n", "ax.matshow(conf_matrix, cmap=plt.cm.Blues, alpha=0.3)\n", "for i in range(conf_matrix.shape[0]):\n", " for j in range(conf_matrix.shape[1]):\n", " ax.text(x=j, y=i, s=conf_matrix[i, j], va=\"center\", ha=\"center\", size=\"xx-large\")\n", "\n", "plt.xlabel(\"Predictions\", fontsize=18)\n", "plt.ylabel(\"Actuals\", fontsize=18)\n", "plt.title(\"Confusion Matrix\", fontsize=18)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "9d2d2d65", "metadata": {}, "outputs": [], "source": [ "# Measure the prediction results quantitatively.\n", "eval_accuracy = accuracy_score(ground_truth_label.values, predict_label)\n", "eval_f1 = f1_score(ground_truth_label.values, predict_label)\n", "\n", "print(\n", " f\"{bold}Evaluation result on test data{unbold}:{newline}\"\n", " f\"{bold}Accuracy{unbold}: {eval_accuracy}{newline}\"\n", " f\"{bold}F1{unbold}: {eval_f1}{newline}\"\n", ")" ] },
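{ "cell_type": "markdown", "id": "1e2f3a4b", "metadata": {}, "source": [ "---\n", "Optionally (an illustrative addition), because the endpoint returns class probabilities, we can also compute a threshold-independent metric such as ROC AUC. This sketch assumes that, as in the binary case above, the second column of `predict_prob` holds the probability of the positive class (label 1).\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "2f3a4b5c", "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import roc_auc_score\n", "\n", "# ROC AUC is computed from the predicted probability of the positive class.\n", "eval_auc = roc_auc_score(ground_truth_label.values.ravel(), predict_prob[:, 1])\n", "print(f\"{bold}ROC AUC{unbold}: {eval_auc}\")" ] },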
{ "cell_type": "markdown", "id": "e36f57c5", "metadata": {}, "source": [ "---\n", "Next, we delete the endpoint corresponding to the trained model.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "7ff6901e", "metadata": {}, "outputs": [], "source": [ "# Delete the SageMaker endpoint and the attached resources\n", "predictor.delete_model()\n", "predictor.delete_endpoint()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "48f7d719", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2, which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|autogluon_tabular|Amazon_Tabular_Classification_AutoGluon.ipynb)\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 5 }