{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Use Your Own Inference Code with Amazon SageMaker XGBoost Algorithm\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "---" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "_**Customized inference for computing SHAP values with Amazon SageMaker XGBoost script mode**_\n", "\n", "---\n", "\n", "## Contents\n", "1. [Introduction](#Introduction)\n", "2. [Setup](#Setup)\n", "3. [Train the XGBoost Model](#Train-the-XGBoost-model)\n", " 1. [Train with XGBoost Estimator and SageMaker Training](#Train-with-XBoost-estimator-and-sagemaker-training)\n", " 2. [Train with Automatic Model Tuning (HPO)](#Train-with-automatic-model-tuning)\n", "4. [Deploying the XGBoost endpoint](#Deploying-the-XGBoost-endpoint)\n", "\n", "---" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "This notebook shows how you can configure the SageMaker XGBoost model server by defining the following three functions in the Python source file you pass to the XGBoost constructor in the SageMaker Python SDK:\n", "- `input_fn`: Takes request data and deserializes the data into an object for prediction,\n", "- `predict_fn`: Takes the deserialized request object and performs inference against the loaded model, and\n", "- `output_fn`: Takes the result of prediction and serializes this according to the response content type.\n", "We will write a customized inference script that is designed to illustrate how [SHAP](https://github.com/slundberg/shap) values enable the interpretion of XGBoost models.\n", "\n", "We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In this libsvm converted version, the nominal feature (Male/Female/Infant) has been converted into a real valued feature as required by XGBoost. Age of abalone is to be predicted from eight physical measurements.\n", "\n", "This notebook uses the Abalone dataset to deploy a model server that returns SHAP values, which enable us to create model explanation such as the following plots that show each features contributing to push the model output from the base value.\n", "\n", "\n", " \n", " \n", "
\"Drawing\"/ \"Drawing\"/
\n", "\n", "---\n", "\n", "## Setup\n", "\n", "This notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.\n", "\n", "Let's start by specifying:\n", "1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.\n", "2. The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regex with a the appropriate full IAM role arn string(s)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install --upgrade sagemaker" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "import io\n", "import os\n", "import boto3\n", "import sagemaker\n", "import time\n", "\n", "role = sagemaker.get_execution_role()\n", "region = boto3.Session().region_name\n", "\n", "# S3 bucket for saving code and model artifacts.\n", "# Feel free to specify a different bucket here if you wish.\n", "bucket = sagemaker.Session().default_bucket()\n", "prefix = \"sagemaker/DEMO-xgboost-inference-script-mode\"" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Fetching the dataset\n", "\n", "The following methods download the Abalone dataset and upload files to S3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "s3 = boto3.client(\"s3\")\n", "# Load the dataset\n", "FILE_DATA = \"abalone\"\n", "s3.download_file(\n", " f\"sagemaker-example-files-prod-{region}\",\n", " f\"datasets/tabular/uci_abalone/abalone.libsvm\",\n", " FILE_DATA,\n", ")\n", "sagemaker.Session().upload_data(FILE_DATA, bucket=bucket, key_prefix=prefix + \"/train\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Train the XGBoost model\n", "\n", "SageMaker can now run an XGboost script using the XGBoost estimator. A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to `model_dir` so that it can be hosted later. In this notebook, we use the same training script [abalone.py](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/xgboost_abalone/abalone.py) from [Regression with Amazon SageMaker XGBoost algorithm](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/xgboost_abalone/xgboost_abalone_dist_script_mode.ipynb). 
Refer to [Regression with Amazon SageMaker XGBoost algorithm](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/xgboost_abalone/xgboost_abalone_dist_script_mode.ipynb) for details on the training script.\n", "\n", "After setting training parameters, we kick off training and poll for status until training is completed, which in this example takes a few minutes.\n", "\n", "To run our training script on SageMaker, we construct a `sagemaker.xgboost.estimator.XGBoost` estimator, which accepts several constructor arguments:\n", "\n", "* __entry_point__: The path to the Python script SageMaker runs for training and prediction.\n", "* __role__: The IAM role ARN.\n", "* __framework_version__: The SageMaker XGBoost version you want to use for executing your model training code, e.g., `1.0-1`, `1.2-2`, `1.3-1`, `1.5-1`, or `1.7-1`.\n", "* __instance_type__ *(optional)*: The type of SageMaker instance used for training, e.g., `ml.c5.xlarge`.\n", "* __sagemaker_session__ *(optional)*: The session used to train on SageMaker.\n", "* __hyperparameters__ *(optional)*: A dictionary passed to the training script as hyperparameters." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.inputs import TrainingInput\n", "from sagemaker.xgboost.estimator import XGBoost\n", "\n", "\n", "job_name = \"DEMO-xgboost-inference-script-mode-\" + time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "print(\"Training job\", job_name)\n", "\n", "hyperparameters = {\n", "    \"max_depth\": \"5\",\n", "    \"eta\": \"0.2\",\n", "    \"gamma\": \"4\",\n", "    \"min_child_weight\": \"6\",\n", "    \"subsample\": \"0.7\",\n", "    \"objective\": \"reg:squarederror\",\n", "    \"num_round\": \"50\",\n", "    \"verbosity\": \"2\",\n", "}\n", "\n", "instance_type = \"ml.c5.xlarge\"\n", "\n", "xgb_script_mode_estimator = XGBoost(\n", "    entry_point=\"abalone.py\",\n", "    hyperparameters=hyperparameters,\n", "    role=role,\n", "    instance_count=1,\n", "    instance_type=instance_type,\n", "    framework_version=\"1.7-1\",\n", "    output_path=\"s3://{}/{}/{}/output\".format(bucket, prefix, job_name),\n", ")\n", "\n", "content_type = \"text/libsvm\"\n", "train_input = TrainingInput(\n", "    \"s3://{}/{}/{}/\".format(bucket, prefix, \"train\"), content_type=content_type\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Train with XGBoost Estimator on Abalone Data\n", "\n", "Training is as simple as calling `fit` on the estimator. This starts a SageMaker Training job that downloads the data, invokes the entry point code (in the provided script file), and saves any model artifacts that the script creates. In this case, the script requires a `train` and a `validation` channel. Since we only created a `train` channel, we re-use it for validation. 
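\n\nFor reference, the model-saving step in a training script like `abalone.py` typically looks like the short sketch below (a paraphrase for illustration, not the exact file contents). The key detail is that the `inference.py` script used later in this notebook expects the trained booster to be pickled under the file name `xgboost-model`:\n\n```python\nimport os\nimport pickle as pkl\n\n\ndef save_model(booster, model_dir):\n    # Persist the trained booster where SageMaker expects model artifacts;\n    # the file name must match what model_fn in inference.py loads.\n    with open(os.path.join(model_dir, \"xgboost-model\"), \"wb\") as f:\n        pkl.dump(booster, f)\n```\n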
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "xgb_script_mode_estimator.fit({\"train\": train_input, \"validation\": train_input}, job_name=job_name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Train with Automatic Model Tuning ([HPO](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html)) \n", "***\n", "Instead of maunally configuring your hyper parameter values and training with SageMaker Training, you could also train with Amazon SageMaker Automatic Model Tuning. AMT, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. We will use a [HyperparameterTuner](https://sagemaker.readthedocs.io/en/stable/api/training/tuner.html) object to interact with Amazon SageMaker hyperparameter tuning APIs.\n", " \n", "The code sample below shows you how to use the HyperParameterTuner.\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.tuner import ContinuousParameter, IntegerParameter\n", "from sagemaker.utils import name_from_base\n", "from sagemaker.tuner import HyperparameterTuner\n", "\n", "\n", "# You can select from the hyperparameters supported by the model, and configure ranges of values to be searched for training the optimal model.(https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-ranges.html)\n", "hyperparameter_ranges = {\n", " \"max_depth\": IntegerParameter(0, 10, scaling_type=\"Auto\"),\n", " \"num_round\": IntegerParameter(1, 4000, scaling_type=\"Auto\"),\n", " \"alpha\": ContinuousParameter(0, 2, scaling_type=\"Auto\"),\n", " \"subsample\": ContinuousParameter(0.5, 1, scaling_type=\"Auto\"),\n", " \"min_child_weight\": ContinuousParameter(0, 120, scaling_type=\"Auto\"),\n", " \"gamma\": ContinuousParameter(0, 5, scaling_type=\"Auto\"),\n", " \"eta\": ContinuousParameter(0.1, 0.5, scaling_type=\"Auto\"),\n", "}\n", "\n", "# Increase the total number of training jobs run by AMT, for increased accuracy (and training time).\n", "max_jobs = 6\n", "# Change parallel training jobs run by AMT to reduce total training time, constrained by your account limits.\n", "# if max_jobs=max_parallel_jobs then Bayesian search turns to Random.\n", "max_parallel_jobs = 2\n", "\n", "hp_tuner = HyperparameterTuner(\n", " xgb_script_mode_estimator,\n", " \"validation:rmse\",\n", " hyperparameter_ranges,\n", " max_jobs=max_jobs,\n", " max_parallel_jobs=max_parallel_jobs,\n", " objective_type=\"Minimize\",\n", " base_tuning_job_name=job_name,\n", ")\n", "\n", "# Launch a SageMaker Tuning job to search for the best hyperparameters\n", "hp_tuner.fit({\"train\": train_input, \"validation\": train_input})" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Deploying the XGBoost endpoint\n", "\n", "After training, we can host the newly created model in SageMaker, and create an Amazon SageMaker endpoint – a hosted and managed prediction service that we can use to perform inference. If you call `deploy` after you call `fit` on an XGBoost estimator, it will create a SageMaker endpoint using the training script (i.e., `entry_point`). 
You can also optionally specify other functions to customize the behavior of deserialization of the input request (`input_fn()`), serialization of the predictions (`output_fn()`), and how predictions are made (`predict_fn()`). If any of these functions are not specified, the endpoint will use the default functions in the SageMaker XGBoost container. See the [SageMaker Python SDK documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/using_xgboost.html#sagemaker-xgboost-model-server) for details.\n", "\n", "In this notebook, we will run a separate inference script and customize the endpoint to return [SHAP](https://github.com/slundberg/shap) values in addition to predictions. The inference script that we will run in this notebook is provided as the accompanying file (`inference.py`) and also shown below:\n", "\n", "```python\n", "import json\n", "import os\n", "import pickle as pkl\n", "\n", "import numpy as np\n", "\n", "import sagemaker_xgboost_container.encoder as xgb_encoders\n", "\n", "\n", "def model_fn(model_dir):\n", " \"\"\"\n", " Deserialize and return fitted model.\n", " \"\"\"\n", " model_file = \"xgboost-model\"\n", " booster = pkl.load(open(os.path.join(model_dir, model_file), \"rb\"))\n", " return booster\n", "\n", "\n", "def input_fn(request_body, request_content_type):\n", " \"\"\"\n", " The SageMaker XGBoost model server receives the request data body and the content type,\n", " and invokes the `input_fn`.\n", "\n", " Return a DMatrix (an object that can be passed to predict_fn).\n", " \"\"\"\n", " if request_content_type == \"text/libsvm\":\n", " return xgb_encoders.libsvm_to_dmatrix(request_body)\n", " else:\n", " raise ValueError(\n", " \"Content type {} is not supported.\".format(request_content_type)\n", " )\n", "\n", "\n", "def predict_fn(input_data, model):\n", " \"\"\"\n", " SageMaker XGBoost model server invokes `predict_fn` on the return value of `input_fn`.\n", "\n", " Return a two-dimensional NumPy array where the first columns are predictions\n", " and the remaining columns are the feature contributions (SHAP values) for that prediction.\n", " \"\"\"\n", " prediction = model.predict(input_data)\n", " feature_contribs = model.predict(input_data, pred_contribs=True, validate_features=False)\n", " output = np.hstack((prediction[:, np.newaxis], feature_contribs))\n", " return output\n", "\n", "\n", "def output_fn(predictions, content_type):\n", " \"\"\"\n", " After invoking predict_fn, the model server invokes `output_fn`.\n", " \"\"\"\n", " if content_type == \"text/csv\":\n", " return ','.join(str(x) for x in predictions[0])\n", " else:\n", " raise ValueError(\"Content type {} is not supported.\".format(content_type))\n", "```\n", "\n", "### transform_fn\n", "\n", "If you would rather not structure your code around the three methods described above, you can instead define your own `transform_fn` to handle inference requests. An error is thrown if a `transform_fn` is present in conjunction with any `input_fn`, `predict_fn`, and/or `output_fn`. 
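Conceptually, a `transform_fn` collapses the three-stage pipeline into a single function; a sketch of that equivalence, written in terms of the three functions from `inference.py` above:\n\n```python\ndef transform_fn(model, request_body, content_type, accept_type):\n    # Equivalent to running input_fn -> predict_fn -> output_fn in sequence.\n    data = input_fn(request_body, content_type)\n    predictions = predict_fn(data, model)\n    return output_fn(predictions, accept_type)\n```\n\n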
In our case, the `transform_fn` would look as follows:\n", "```python\n", "def transform_fn(model, request_body, content_type, accept_type):\n", "    dmatrix = xgb_encoders.libsvm_to_dmatrix(request_body)\n", "    prediction = model.predict(dmatrix)\n", "    feature_contribs = model.predict(dmatrix, pred_contribs=True, validate_features=False)\n", "    output = np.hstack((prediction[:, np.newaxis], feature_contribs))\n", "    return ','.join(str(x) for x in output[0])\n", "```\n", "where `model` is the model object loaded by `model_fn`, `request_body` is the data from the inference request, `content_type` is the content type of the request, and `accept_type` is the content type that the client accepts for the response.\n", "\n", "\n", "### Deploy to an endpoint\n", "\n", "Since the inference script is separate from the training script, here we use `XGBoostModel` to create a model from the S3 artifacts and specify `inference.py` as the `entry_point`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.xgboost.model import XGBoostModel\n", "\n", "model_data = xgb_script_mode_estimator.model_data\n", "print(model_data)\n", "\n", "xgb_inference_model = XGBoostModel(\n", "    model_data=model_data,\n", "    role=role,\n", "    entry_point=\"inference.py\",\n", "    framework_version=\"1.7-1\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor = xgb_inference_model.deploy(\n", "    initial_instance_count=1,\n", "    instance_type=\"ml.c5.xlarge\",\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Explain the model's predictions on each data point" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import pandas as pd\n", "import seaborn as sns\n", "\n", "\n", "def plot_feature_contributions(prediction):\n", "    attribute_names = [\n", "        \"Sex\",  # nominal / -- / M, F, and I (infant)\n", "        \"Length\",  # continuous / mm / Longest shell measurement\n", "        \"Diameter\",  # continuous / mm / perpendicular to length\n", "        \"Height\",  # continuous / mm / with meat in shell\n", "        \"Whole weight\",  # continuous / grams / whole abalone\n", "        \"Shucked weight\",  # continuous / grams / weight of meat\n", "        \"Viscera weight\",  # continuous / grams / gut weight (after bleeding)\n", "        \"Shell weight\",  # continuous / grams / after being dried\n", "    ]\n", "\n", "    # Unpack one output row: the prediction, then the contribution of the unused\n", "    # 0-index LibSVM column (skipped), then one SHAP value per feature, then the bias.\n", "    prediction, _, *shap_values, bias = prediction\n", "\n", "    if len(shap_values) != len(attribute_names):\n", "        raise ValueError(\"Length mismatch between shap values and attribute names.\")\n", "\n", "    df = pd.DataFrame(data=[shap_values], index=[\"SHAP\"], columns=attribute_names).T\n", "    df.sort_values(by=\"SHAP\", inplace=True)\n", "\n", "    df[\"bar_start\"] = bias + df.SHAP.cumsum().shift().fillna(0.0)\n", "    df[\"bar_end\"] = df.bar_start + df.SHAP\n", "    df[[\"bar_start\", \"bar_end\"]] = np.sort(df[[\"bar_start\", \"bar_end\"]].values)\n", "    df[\"hue\"] = df.SHAP.apply(lambda x: 0 if x > 0 else 1)\n", "\n", "    sns.set(style=\"white\")\n", "\n", "    ax1 = sns.barplot(x=df.bar_end, y=df.index, data=df, orient=\"h\", palette=\"vlag\")\n", "    for idx, patch in enumerate(ax1.patches):\n", "        x_val = patch.get_x() + patch.get_width() + 0.8\n", "        y_val = patch.get_y() + patch.get_height() / 2\n", "        shap_value = df.SHAP.values[idx]\n", "        # Prefix positive contributions with \"+\"; negative values already carry their sign.\n", "        value = \"{0}{1:.2f}\".format(\"+\" if shap_value > 0 else \"\", shap_value)\n", "        ax1.annotate(value, (x_val, y_val), ha=\"right\", va=\"center\")\n", "\n", "    ax2 = sns.barplot(x=df.bar_start, y=df.index, data=df, orient=\"h\", color=\"#FFFFFF\")\n", "    ax2.set_xlim(\n", "        df[[\"bar_start\", \"bar_end\"]].values.min() - 1, df[[\"bar_start\", \"bar_end\"]].values.max() + 1\n", "    )\n", "    ax2.axvline(x=bias, color=\"#000000\", alpha=0.2, linestyle=\"--\", linewidth=1)\n", "    ax2.set_title(\"base value: {0:.1f} → model output: {1:.1f}\".format(bias, prediction))\n", "    ax2.set_xlabel(\"Abalone age\")\n", "\n", "    sns.despine(left=True, bottom=True)\n", "\n", "    plt.tight_layout()\n", "    plt.show()\n", "\n", "\n", "def predict_and_plot(predictor, libsvm_str):\n", "    label, *features = libsvm_str.strip().split()\n", "    predictions = predictor.predict(\" \".join([\"-99\"] + features))  # use dummy label -99\n", "    np_array = np.array([float(x) for x in predictions[0]])\n", "    plot_feature_contributions(np_array)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The figure below shows how each feature contributes to pushing the model output from the base value (9.9 rings) to the final model output (6.9 rings). The primary indicator for a young abalone, according to the model, is low shell weight, which decreases the prediction by 3.0 rings from the base value of 9.9 rings. Whole weight and shucked weight are also powerful indicators. The whole weight pushes the prediction lower by 0.84 rings, while shucked weight pushes the prediction higher by 1.6 rings." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a_young_abalone = \"6 1:3 2:0.37 3:0.29 4:0.095 5:0.249 6:0.1045 7:0.058 8:0.067\"\n", "predict_and_plot(predictor, a_young_abalone)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The second example shows the feature contributions for another sample, an old abalone. We again see that the primary indicator of the age of an abalone, according to the model, is shell weight, which increases the model prediction by 2.36 rings. Whole weight and shucked weight also contribute significantly, and both push the model's prediction higher." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "an_old_abalone = \"15 1:1 2:0.655 3:0.53 4:0.175 5:1.2635 6:0.486 7:0.2635 8:0.415\"\n", "predict_and_plot(predictor, an_old_abalone)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### (Optional) Delete the Endpoint\n", "\n", "If you're done with this exercise, please run the `delete_endpoint` line in the cell below. This removes the hosted endpoint and avoids charges from a stray instance being left on." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2, which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This us-east-2 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|xgboost_abalone|xgboost_inferenece_script_mode.ipynb)\n" ] } ], "metadata": { "instance_type": "ml.m5.large", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 4 }