{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# An Introduction to Linear Learner with MNIST\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "---" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "_**Making a Binary Prediction of Whether a Handwritten Digit is a 0**_\n", "\n", "1. [Introduction](#Introduction)\n", "2. [Prerequisites and Preprocessing](#Prequisites-and-Preprocessing)\n", " 1. [Permissions and environment variables](#Permissions-and-environment-variables)\n", " 2. [Data ingestion](#Data-ingestion)\n", " 3. [Data inspection](#Data-inspection)\n", " 4. [Data conversion](#Data-conversion)\n", "3. [Training the linear model](#Training-the-linear-model)\n", " 1. [Training the Linear Learner model with SageMaker Training](#Training-with-sagemaker-training)\n", " 2. [Training with Automatic Model Tuning (HPO)](#Training-with-automatic-model-tuning-HPO)\n", "4. [Set up hosting for the model](#Set-up-hosting-for-the-model)\n", "5. [Validate the model for use](#Validate-the-model-for-use)\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "Welcome to our example introducing Amazon SageMaker's Linear Learner Algorithm! Today, we're analyzing the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) dataset which consists of images of handwritten digits, from zero to nine. We'll use the individual pixel values from each 28 x 28 grayscale image to predict a yes or no label of whether the digit is a 0 or some other digit (1, 2, 3, ... 9).\n", "\n", "The method that we'll use is a linear binary classifier. Linear models are supervised learning algorithms used for solving either classification or regression problems. As input, the model is given labeled examples ( **`x`**, `y`). **`x`** is a high dimensional vector and `y` is a numeric label. Since we are doing binary classification, the algorithm expects the label to be either 0 or 1 (but Amazon SageMaker Linear Learner also supports regression on continuous values of `y`). The algorithm learns a linear function, or linear threshold function for classification, mapping the vector **`x`** to an approximation of the label `y`.\n", "\n", "Amazon SageMaker's Linear Learner algorithm extends upon typical linear models by training many models in parallel, in a computationally efficient manner. Each model has a different set of hyperparameters, and then the algorithm finds the set that optimizes a specific criteria. This can provide substantially more accurate models than typical linear algorithms at the same, or lower, cost.\n", "\n", "To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on." 
] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Prequisites and Preprocessing\n", "\n", "### Permissions and environment variables\n", "\n", "_This notebook was created and tested on an ml.m4.xlarge notebook instance._\n", "\n", "Let's start by specifying:\n", "\n", "- The S3 buckets and prefixes that you want to use for training and model data and where original data is located. These should be within the same region as the Notebook Instance, training, and hosting.\n", "- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with a the appropriate full IAM role arn string(s)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! pip install --upgrade sagemaker" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "isConfigCell": true, "tags": [ "parameters" ] }, "outputs": [], "source": [ "import re\n", "import boto3\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "sess = sagemaker.Session()\n", "\n", "region = boto3.Session().region_name\n", "\n", "# S3 bucket where the original mnist data is downloaded and stored.\n", "downloaded_data_bucket = f\"sagemaker-example-files-prod-{region}\"\n", "downloaded_data_prefix = \"datasets/image/MNIST\"\n", "\n", "# S3 bucket for saving code and model artifacts.\n", "# Feel free to specify a different bucket and prefix\n", "bucket = sess.default_bucket()\n", "prefix = \"sagemaker/DEMO-linear-mnist\"\n", "\n", "# Define IAM role\n", "role = get_execution_role()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Data ingestion\n", "\n", "Next, we read the MNIST dataset [1] from an existing repository into memory, for preprocessing prior to training. It was downloaded from this [link](http://deeplearning.net/data/mnist/mnist.pkl.gz) and stored on the `downloaded_data_bucket`. Processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.\n", "> [1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "import pickle, gzip, numpy, json\n", "\n", "# Load the dataset\n", "s3 = boto3.client(\"s3\")\n", "s3.download_file(downloaded_data_bucket, f\"{downloaded_data_prefix}/mnist.pkl.gz\", \"mnist.pkl.gz\")\n", "with gzip.open(\"mnist.pkl.gz\", \"rb\") as f:\n", " train_set, valid_set, test_set = pickle.load(f, encoding=\"latin1\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Data inspection\n", "\n", "Once the dataset is imported, it's typical as part of the machine learning process to inspect the data, understand the distributions, and determine what type(s) of preprocessing might be needed. You can perform those tasks right here in the notebook. As an example, let's go ahead and look at one of the digits that is part of the dataset." 
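, "\n", "\n", "Before that, since the binary task below treats the digit 0 as the positive class, it's also worth a quick look at the class balance. An optional check (a sketch, assuming the `train_set` tuple loaded above):\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Fraction of training labels equal to 0, i.e. the positive class for our task.\n", "labels = np.array(train_set[1])\n", "print(f\"positive-class fraction: {(labels == 0).mean():.3f}\")\n", "```"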
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "\n", "plt.rcParams[\"figure.figsize\"] = (2, 10)\n", "\n", "\n", "def show_digit(img, caption=\"\", subplot=None):\n", " if subplot is None:\n", " _, (subplot) = plt.subplots(1, 1)\n", " imgr = img.reshape((28, 28))\n", " subplot.axis(\"off\")\n", " subplot.imshow(imgr, cmap=\"gray\")\n", " plt.title(caption)\n", "\n", "\n", "show_digit(train_set[0][30], f\"This is a {train_set[1][30]}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Data conversion\n", "\n", "Since algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. In this particular case, the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf, where the data we have today is a pickle-ized numpy array on disk.\n", "\n", "Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import io\n", "import numpy as np\n", "import sagemaker.amazon.common as smac\n", "\n", "train_set_vectors = np.array([t.tolist() for t in train_set[0]]).astype(\"float32\")\n", "train_set_labels = np.where(np.array([t.tolist() for t in train_set[1]]) == 0, 1, 0).astype(\n", " \"float32\"\n", ")\n", "\n", "validation_set_vectors = np.array([t.tolist() for t in valid_set[0]]).astype(\"float32\")\n", "validation_set_labels = np.where(np.array([t.tolist() for t in valid_set[1]]) == 0, 1, 0).astype(\n", " \"float32\"\n", ")\n", "\n", "train_set_buf = io.BytesIO()\n", "validation_set_buf = io.BytesIO()\n", "smac.write_numpy_to_dense_tensor(train_set_buf, train_set_vectors, train_set_labels)\n", "smac.write_numpy_to_dense_tensor(validation_set_buf, validation_set_vectors, validation_set_labels)\n", "train_set_buf.seek(0)\n", "validation_set_buf.seek(0)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Upload training data\n", "Now that we've created our recordIO-wrapped protobuf, we'll need to upload it to S3, so that Amazon SageMaker training can use it." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import boto3\n", "import os\n", "\n", "key = \"recordio-pb-data\"\n", "boto3.resource(\"s3\").Bucket(bucket).Object(os.path.join(prefix, \"train\", key)).upload_fileobj(\n", " train_set_buf\n", ")\n", "boto3.resource(\"s3\").Bucket(bucket).Object(os.path.join(prefix, \"validation\", key)).upload_fileobj(\n", " validation_set_buf\n", ")\n", "s3_train_data = f\"s3://{bucket}/{prefix}/train/{key}\"\n", "print(f\"uploaded training data location: {s3_train_data}\")\n", "s3_validation_data = f\"s3://{bucket}/{prefix}/validation/{key}\"\n", "print(f\"uploaded validation data location: {s3_validation_data}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Let's also setup an output S3 location for the model artifact that will be output as the result of training with the algorithm." 
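, "\n", "\n", "As an optional sanity check before training, you can also confirm that the protobuf objects landed under our prefix (a sketch using the `bucket` and `prefix` variables defined earlier):\n", "\n", "```python\n", "import boto3\n", "\n", "# List the objects under our prefix to confirm both uploads are in place.\n", "for obj in boto3.resource(\"s3\").Bucket(bucket).objects.filter(Prefix=prefix):\n", "    print(obj.key, obj.size)\n", "```"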
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "output_location = f\"s3://{bucket}/{prefix}/output\"\n", "print(f\"training artifacts will be uploaded to: {output_location}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Training the linear model\n", "\n", "Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.\n", "\n", "Training can be done by either calling SageMaker Training with a set of hyperparameters values to train with, or by leveraging SageMaker Automatic Model Tuning ([AMT](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html)). AMT, also known as hyperparameter tuning (HPO), finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose.\n", "\n", "In this notebook, both methods are used for demonstration purposes, but the model that the HPO job creates is the one that is eventually hosted. You can instead choose to deploy the model created by the standalone training job by changing the below variable `deploy_amt_model` to False.\n", "\n", "### Training with SageMaker Training\n", "\n", "We'll use the Amazon SageMaker Python SDK to kick off training, and monitor status until it is completed. In this example that takes between 7 and 11 minutes. Despite the dataset being small, provisioning hardware and loading the algorithm container take time upfront.\n", "\n", "First, let's specify our container. We retrieve the image for the Linear Learner Algorithm according to the region." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import image_uris\n", "\n", "container = image_uris.retrieve(region=boto3.Session().region_name, framework=\"linear-learner\")\n", "deploy_amt_model = True" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Then we create an [estimator from the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) using the Linear Learner container image and we setup the training parameters and hyperparameters configuration. Notice:\n", "- `feature_dim` is set to 784, which is the number of pixels in each 28 x 28 image.\n", "- `predictor_type` is set to 'binary_classifier' since we are trying to predict whether the image is or is not a 0.\n", "- `mini_batch_size` is set to 200. This value can be tuned for relatively minor improvements in fit and speed, but selecting a reasonable value relative to the dataset is appropriate in most cases." 
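, "\n", "\n", "Beyond these three, Linear Learner exposes many other hyperparameters. For instance, `binary_classifier_model_selection_criteria` controls how the algorithm chooses among the models it trains in parallel. The snippet below is illustrative only and is not run here; it assumes the estimator `linear` created in the next cell (see the [Linear Learner documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html) for the full list of options):\n", "\n", "```python\n", "# Illustrative alternative to the defaults used below: select the parallel-trained\n", "# model variant that maximizes precision at a target recall.\n", "linear.set_hyperparameters(\n", "    feature_dim=784,\n", "    predictor_type=\"binary_classifier\",\n", "    mini_batch_size=200,\n", "    binary_classifier_model_selection_criteria=\"precision_at_target_recall\",\n", "    target_recall=0.9,\n", ")\n", "```"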
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import sagemaker\n", "\n", "sess = sagemaker.Session()\n", "\n", "linear = sagemaker.estimator.Estimator(\n", " container,\n", " role,\n", " instance_count=1,\n", " instance_type=\"ml.c4.xlarge\",\n", " output_path=output_location,\n", " sagemaker_session=sess,\n", ")\n", "linear.set_hyperparameters(feature_dim=784, predictor_type=\"binary_classifier\", mini_batch_size=200)\n", "\n", "linear.fit({\"train\": s3_train_data})" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Training with Automatic Model Tuning ([HPO](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html)) \n", "***\n", "As mentioned above, instead of manually configuring our hyper parameter values and training with SageMaker Training, we'll use Amazon SageMaker Automatic Model Tuning.\n", " \n", "The code sample below shows you how to use the HyperParameterTuner. For recommended default hyparameter ranges, check the [Amazon SageMaker Linear Learner HPs documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html).\n", "\n", "The tuning job will take 8 to 10 minutes to complete.\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "from sagemaker.tuner import IntegerParameter, ContinuousParameter\n", "from sagemaker.tuner import HyperparameterTuner\n", "\n", "job_name = \"DEMO-ll-mni-\" + time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "print(\"Tuning job name:\", job_name)\n", "\n", "# Linear Learner tunable hyper parameters can be found here https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner-tuning.html\n", "hyperparameter_ranges = {\n", " \"wd\": ContinuousParameter(1e-7, 1, scaling_type=\"Auto\"),\n", " \"learning_rate\": ContinuousParameter(1e-5, 1, scaling_type=\"Auto\"),\n", " \"mini_batch_size\": IntegerParameter(100, 2000, scaling_type=\"Auto\"),\n", "}\n", "\n", "# Increase the total number of training jobs run by AMT, for increased accuracy (and training time).\n", "max_jobs = 6\n", "# Change parallel training jobs run by AMT to reduce total training time, constrained by your account limits.\n", "# if max_jobs=max_parallel_jobs then Bayesian search turns to Random.\n", "max_parallel_jobs = 2\n", "\n", "\n", "hp_tuner = HyperparameterTuner(\n", " linear,\n", " \"validation:binary_f_beta\",\n", " hyperparameter_ranges,\n", " max_jobs=max_jobs,\n", " max_parallel_jobs=max_parallel_jobs,\n", " objective_type=\"Maximize\",\n", ")\n", "\n", "\n", "# Launch a SageMaker Tuning job to search for the best hyperparameters\n", "hp_tuner.fit(inputs={\"train\": s3_train_data, \"validation\": s3_validation_data}, job_name=job_name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Set up hosting for the model\n", "Now that we've trained our model, we can deploy it behind an Amazon SageMaker real-time hosted endpoint. 
This will allow us to make predictions (or inferences) from the model dynamically.\n", "\n", "_Note: Amazon SageMaker allows you the flexibility of importing models trained elsewhere, as well as the choice of not importing models if the target of model creation is AWS Lambda, AWS Greengrass, Amazon Redshift, Amazon Athena, or another deployment target._" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if deploy_amt_model:\n", "    linear_predictor = hp_tuner.deploy(initial_instance_count=1, instance_type=\"ml.m4.xlarge\")\n", "else:\n", "    linear_predictor = linear.deploy(initial_instance_count=1, instance_type=\"ml.m4.xlarge\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Validate the model for use\n", "Finally, we can validate the model for use. We can pass HTTP POST requests to the endpoint to get back predictions. To make this easier, we'll again use the Amazon SageMaker Python SDK and specify how to serialize requests and deserialize responses in a way that is specific to the algorithm." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.serializers import CSVSerializer\n", "from sagemaker.deserializers import JSONDeserializer\n", "\n", "linear_predictor.serializer = CSVSerializer()\n", "linear_predictor.deserializer = JSONDeserializer()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try getting a prediction for a single record." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "result = linear_predictor.predict(train_set[0][30:31], initial_args={\"ContentType\": \"text/csv\"})\n", "print(result)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "OK, a single prediction works. We see that for one record our endpoint returned some JSON which contains `predictions`, including the `score` and `predicted_label`. In this case, `score` is a continuous value in the interval [0, 1] representing the model's estimated probability that the digit is a 0. `predicted_label` takes a value of either `0` or `1`, where (somewhat counterintuitively) `1` denotes that we predict the image is a 0, while `0` denotes that we predict the image is not a 0.\n", "\n", "Let's do a whole batch of images and evaluate our predictive accuracy." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "predictions = []\n", "# Split the test set into 100 mini-batches to keep each request payload small.\n", "for array in np.array_split(test_set[0], 100):\n", "    result = linear_predictor.predict(array)\n", "    predictions += [r[\"predicted_label\"] for r in result[\"predictions\"]]\n", "\n", "predictions = np.array(predictions)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "pd.crosstab(\n", "    np.where(test_set[1] == 0, 1, 0), predictions, rownames=[\"actuals\"], colnames=[\"predictions\"]\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "As we can see from the confusion matrix above, we predict 931 images of 0 correctly, while we predict 44 images as 0s that aren't, and miss 49 images that are 0s." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### (Optional) Delete the Endpoint\n", "\n", "If you're ready to be done with this notebook, please run the cell below.
This will remove the hosted endpoint you created and avoid charges from a stray instance being left running." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "linear_predictor.delete_model()\n", "linear_predictor.delete_endpoint()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|linear_learner_mnist|linear_learner_mnist.ipynb)\n" ] } ], "metadata": { "celltoolbar": "Tags", "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." }, "nbformat": 4, "nbformat_minor": 4 }