{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# SageMaker shadow testing overview" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "---" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Amazon SageMaker now enables you to evaluate any changes to your model serving infrastructure, consisting of the ML model, the serving container, or the ML instance, by shadow testing its performance against the currently deployed one. Shadow testing can help you catch potential configuration errors and performance issues before they impact end users. With SageMaker, you don’t need to invest in building your own shadow testing infrastructure, allowing you to focus on model development. \n", "\n", "You can use this to validate changes to any component of your production variant, namely the model, the container, or the instance, without any end user impact. 
It is useful in situations such as:\n", "\n", "* You are considering promoting a new model that has been validated offline to production, but want to evaluate operational performance metrics such as latency and error rate before making this decision.\n", "* You are considering changes to your serving infrastructure container, such as patching vulnerabilities or upgrading to newer versions, and want to assess the impact of these changes prior to promotion.\n", "* You are considering changing your ML instance and want to evaluate how the new instance would perform with live inference requests.\n", "\n", "Just select the production variant you want to test against, and SageMaker automatically deploys the new variant in shadow mode and routes a copy of the inference requests to it in real time within the same endpoint. Only the responses of the production variant are returned to the calling application. You can choose to discard or log the responses of the shadow variant for offline comparison. \n", "\n", "This notebook provides a walkthrough of the feature using the SageMaker Inference APIs. \n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## SageMaker Background" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "![title](images/Shadow.png)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "A *production variant* consists of the ML model, serving container, and ML instance. Since each variant is independent of the others, you can have different models, containers, or instance types across variants. SageMaker lets you specify autoscaling policies on a per-variant basis so variants can scale independently based on incoming load. SageMaker supports up to 10 production variants per endpoint. You can either configure a variant to receive a portion of the incoming traffic by setting variant weights, or specify the target variant in the incoming request. 
The response from the production variant is forwarded back to the invoker. \n", "\n", "A *shadow variant* *(new)* has the same components as a production variant. A user-specified portion of the requests, known as the traffic sampling percentage (the VariantWeight parameter in the ShadowProductionVariants object), is forwarded to the shadow variant. You can choose to log the response of the shadow variant in S3 or discard it. For an endpoint with a shadow variant, you can have a maximum of one production variant. \n", "\n", "You can monitor the [invocation metrics](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html) for both production and shadow variants in CloudWatch under the AWS/SageMaker namespace. \n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Setup \n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Ensure that you have updated versions of the SageMaker Python SDK and the AWS CLI, which include the latest SageMaker features:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install sagemaker --quiet\n", "!pip install -U awscli --quiet" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Next, specify the SageMaker role ARN used to give training and hosting access to your data, and the S3 bucket that you want to use for storing model artifacts."
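, "\n", "As an optional sanity check (a sketch; it assumes a reasonably recent boto3 release), you can confirm that the installed boto3 already knows about the `ShadowProductionVariants` parameter used later in this notebook:\n", "\n", "```python\n", "import boto3\n", "\n", "op = boto3.client(\"sagemaker\").meta.service_model.operation_model(\"CreateEndpointConfig\")\n", "# Raises AssertionError if the installed boto3 predates shadow testing support\n", "assert \"ShadowProductionVariants\" in op.input_shape.members\n", "```"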
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import boto3\n", "import sagemaker\n", "import time\n", "from time import gmtime, strftime\n", "from datetime import datetime, timedelta, timezone\n", "from sagemaker import get_execution_role, session\n", "from sagemaker.s3 import S3Downloader, S3Uploader\n", "\n", "boto_session = boto3.session.Session()\n", "role = sagemaker.get_execution_role()\n", "region = boto3.Session().region_name\n", "sm_session = session.Session(boto3.Session())\n", "sm = boto3.Session().client(\"sagemaker\")\n", "sm_runtime = boto3.Session().client(\"sagemaker-runtime\")\n", "\n", "\n", "# You can use a different bucket, but make sure the role you chose for this notebook\n", "# has the s3:PutObject permissions. This is the bucket into which the model artifacts will be uploaded\n", "bucket = sagemaker.Session().default_bucket()\n", "\n", "prefix = \"sagemaker/shadow-deployment\"\n", "resource_name = \"ShadowDemo-{}-{}\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! mkdir -p model\n", "! 
mkdir -p test_data\n", "\n", "s3 = boto3.client(\"s3\")\n", "s3.download_file(\n", " f\"sagemaker-example-files-prod-{region}\",\n", " \"models/xgb-churn/xgb-churn-prediction-model.tar.gz\",\n", " \"model/xgb-churn-prediction-model.tar.gz\",\n", ")\n", "s3.download_file(\n", " f\"sagemaker-example-files-prod-{region}\",\n", " \"models/xgb-churn/xgb-churn-prediction-model2.tar.gz\",\n", " \"model/xgb-churn-prediction-model2.tar.gz\",\n", ")\n", "\n", "s3.download_file(\n", " f\"sagemaker-example-files-prod-{region}\",\n", " \"datasets/tabular/xgb-churn/test-dataset.csv\",\n", " \"test_data/test-dataset.csv\",\n", ")\n", "s3.download_file(\n", " f\"sagemaker-example-files-prod-{region}\",\n", " \"datasets/tabular/xgb-churn/test-dataset-input-cols.csv\",\n", " \"test_data/test-dataset-input-cols.csv\",\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Create models " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### First, we upload our pre-trained models to Amazon S3\n", "This code uploads two pre-trained XGBoost models that are ready for you to deploy. These models were trained using the [XGB Churn Prediction Notebook](https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_applying_machine_learning/xgboost_customer_churn/xgboost_customer_churn.ipynb) in SageMaker. You can also use your own pre-trained models in this step.\n", "\n", "The models in this example are used to predict the probability of a mobile customer leaving their current mobile operator. The dataset we use is publicly available and was mentioned in the book [Discovering Knowledge in Data](https://www.amazon.com/dp/0470908742/) by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets.\n", "\n", "To begin, let us upload these trained models to S3. 
Keep in mind that to use your own pre-trained model, you just need to point `local_path` to your local pre-trained model file. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_url = S3Uploader.upload(\n", " local_path=\"model/xgb-churn-prediction-model.tar.gz\",\n", " desired_s3_uri=f\"s3://{bucket}/{prefix}\",\n", ")\n", "model_url2 = S3Uploader.upload(\n", " local_path=\"model/xgb-churn-prediction-model2.tar.gz\",\n", " desired_s3_uri=f\"s3://{bucket}/{prefix}\",\n", ")\n", "\n", "print(f\"Model URI 1: {model_url}\")\n", "print(f\"Model URI 2: {model_url2}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import image_uris\n", "\n", "image_uri = image_uris.retrieve(\"xgboost\", boto3.Session().region_name, \"0.90-1\")\n", "image_uri2 = image_uris.retrieve(\"xgboost\", boto3.Session().region_name, \"0.90-2\")\n", "\n", "print(f\"Model Image 1: {image_uri}\")\n", "print(f\"Model Image 2: {image_uri2}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Deploy the two models as production and shadow variants to a real-time Inference endpoint" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The first step in deploying a trained model to SageMaker Inference is to create a SageMaker Model using the create_model API. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_name = f\"DEMO-xgb-churn-pred-{datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "model_name2 = f\"DEMO-xgb-churn-pred2-{datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "\n", "print(f\"Model Name 1: {model_name}\")\n", "print(f\"Model Name 2: {model_name2}\")\n", "\n", "resp = sm.create_model(\n", " ModelName=model_name,\n", " ExecutionRoleArn=role,\n", " Containers=[{\"Image\": image_uri, \"ModelDataUrl\": model_url}],\n", ")\n", "print(f\"Created Model: {resp}\")\n", "\n", "resp = sm.create_model(\n", " ModelName=model_name2,\n", " ExecutionRoleArn=role,\n", " Containers=[{\"Image\": image_uri2, \"ModelDataUrl\": model_url2}],\n", ")\n", "print(f\"Created Model: {resp}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The next step is to create an endpoint config with the production and shadow variants. The ProductionVariants and the ShadowProductionVariants are of particular interest. We set the InitialVariantWeight in the ShadowProductionVariants to sample and send 50% of the production variant requests to the shadow variant. The production variant receives 100% of the traffic.\n", "\n", "Both variants use ml.m5.xlarge instances with 4 vCPUs and 16 GiB of memory. The production variant starts with two instances and the shadow variant with one. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ep_config_name = f\"Shadow-EpConfig-{datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "production_variant_name = \"production\"\n", "shadow_variant_name = \"shadow\"\n", "\n", "create_endpoint_config_response = sm.create_endpoint_config(\n", " EndpointConfigName=ep_config_name,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": production_variant_name,\n", " \"ModelName\": model_name,\n", " \"InstanceType\": \"ml.m5.xlarge\",\n", " \"InitialInstanceCount\": 2,\n", " \"InitialVariantWeight\": 1,\n", " }\n", " ],\n", " ShadowProductionVariants=[\n", " {\n", " \"VariantName\": shadow_variant_name,\n", " \"ModelName\": model_name2,\n", " \"InstanceType\": \"ml.m5.xlarge\",\n", " \"InitialInstanceCount\": 1,\n", " \"InitialVariantWeight\": 0.5,\n", " }\n", " ],\n", ")\n", "print(f\"Created EndpointConfig: {create_endpoint_config_response['EndpointConfigArn']}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "endpoint_name = f\"xgb-prod-shadow-{datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "create_endpoint_api_response = sm.create_endpoint(\n", " EndpointName=endpoint_name,\n", " EndpointConfigName=ep_config_name,\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now, wait for the endpoint creation to complete. This should take 2-5 minutes, depending on your model artifact and serving container size. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def wait_for_endpoint_in_service(endpoint_name):\n", " print(\"Waiting for endpoint in service\")\n", " while True:\n", " details = sm.describe_endpoint(EndpointName=endpoint_name)\n", " status = details[\"EndpointStatus\"]\n", " if status in [\"InService\", \"Failed\"]:\n", " print(f\"\\nDone! Status: {status}\")\n", " break\n", " print(\".\", end=\"\", flush=True)\n", " time.sleep(30)\n", "\n", "\n", "wait_for_endpoint_in_service(endpoint_name)\n", "\n", "sm.describe_endpoint(EndpointName=endpoint_name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Invoke Endpoint" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Once the endpoint has been successfully created, you can begin invoking it. To learn more about endpoints, please check out our [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def invoke_endpoint(endpoint_name, should_raise_exp=False):\n", " with open(\"test_data/test-dataset-input-cols.csv\", \"r\") as f:\n", " for row in f:\n", " payload = row.rstrip(\"\\n\")\n", " try:\n", " for i in range(10): # send the same payload 10 times for testing purposes\n", " response = sm_runtime.invoke_endpoint(\n", " EndpointName=endpoint_name, ContentType=\"text/csv\", Body=payload\n", " )\n", " except Exception as e:\n", " print(\"E\", end=\"\", flush=True)\n", " if should_raise_exp:\n", " raise e\n", "\n", "\n", "invoke_endpoint(endpoint_name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now that the endpoint is InService and has been invoked, the following cells collect CloudWatch metrics for the production and shadow variants so that we can compare them. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "cw = boto3.Session().client(\"cloudwatch\", region_name=region)\n", "\n", "\n", "def get_sagemaker_metrics(\n", " endpoint_name,\n", " variant_name,\n", " metric_name,\n", " statistic,\n", " start_time,\n", " end_time,\n", "):\n", " dimensions = [\n", " {\"Name\": \"EndpointName\", \"Value\": endpoint_name},\n", " {\"Name\": \"VariantName\", \"Value\": variant_name},\n", " ]\n", " namespace = \"AWS/SageMaker\"\n", " if metric_name in [\"CPUUtilization\", \"MemoryUtilization\", \"DiskUtilization\"]:\n", " namespace = \"/aws/sagemaker/Endpoints\"\n", "\n", " metrics = cw.get_metric_statistics(\n", " Namespace=namespace,\n", " MetricName=metric_name,\n", " StartTime=start_time,\n", " EndTime=end_time,\n", " Period=1,\n", " Statistics=[statistic],\n", " Dimensions=dimensions,\n", " )\n", "\n", " if len(metrics[\"Datapoints\"]) == 0:\n", " return\n", " return (\n", " pd.DataFrame(metrics[\"Datapoints\"])\n", " .sort_values(\"Timestamp\")\n", " .set_index(\"Timestamp\")\n", " .drop([\"Unit\"], axis=1)\n", " .rename(columns={statistic: variant_name})\n", " )\n", "\n", "\n", "def plot_endpoint_invocation_metrics(\n", " endpoint_name,\n", " metric_name,\n", " statistic,\n", " start_time=None,\n", "):\n", " start_time = start_time or datetime.now(timezone.utc) - timedelta(minutes=10)\n", " end_time = datetime.now(timezone.utc)\n", " metrics_production = get_sagemaker_metrics(\n", " endpoint_name,\n", " production_variant_name,\n", " metric_name,\n", " statistic,\n", " start_time,\n", " end_time,\n", " )\n", " metrics_shadow = get_sagemaker_metrics(\n", " endpoint_name,\n", " shadow_variant_name,\n", " metric_name,\n", " statistic,\n", " start_time,\n", " end_time,\n", " )\n", " try:\n", " metrics_variants = 
pd.merge(metrics_production, metrics_shadow, on=\"Timestamp\")\n", " return metrics_variants.plot(y=[\"production\", \"shadow\"])\n", " except Exception as e:\n", " print(e)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Metric Comparison" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Now that we have deployed both the production and shadow models, let us compare the invocation metrics. Here is a [list](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html) of invocation metrics available for comparison. Let us start by comparing invocations between the production and shadow variants." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "invocations = plot_endpoint_invocation_metrics(endpoint_name, \"Invocations\", \"Sum\")\n", "invocations_per_instance = plot_endpoint_invocation_metrics(\n", " endpoint_name, \"InvocationsPerInstance\", \"Sum\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "The Invocations metric counts the requests sent to each variant. A fraction of the production requests, determined by the traffic sampling percentage, is also sent to the shadow variant. InvocationsPerInstance is calculated by dividing the total number of invocations by the number of instances in a variant. From the chart above, we can confirm that both the production and shadow variants are receiving invocation requests according to the weights specified in the endpoint config. \n", "\n", "Next, let us compare the model latency between the production and shadow variants. Model latency is the time taken by a model to respond, as viewed from SageMaker, and is reported in microseconds."
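, "\n", "For a quick numeric summary to complement the chart, a small helper like the one below can report the average latency gap. This is a sketch: `latency_gap_pct` is a hypothetical name, and it assumes a dataframe shaped like the merge performed in `plot_endpoint_invocation_metrics`, with columns named `production` and `shadow`.\n", "\n", "```python\n", "def latency_gap_pct(merged_df):\n", "    # Positive values mean the shadow variant is slower on average\n", "    prod_avg = merged_df[\"production\"].mean()\n", "    shadow_avg = merged_df[\"shadow\"].mean()\n", "    return 100.0 * (shadow_avg - prod_avg) / prod_avg\n", "```"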
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_latency = plot_endpoint_invocation_metrics(endpoint_name, \"ModelLatency\", \"Average\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Using the chart above, we can observe how the model latency of the shadow variant compares with the production variant, without exposing end users to the shadow variant. \n", "\n", "We expect the overhead latency to be comparable across the production and shadow variants. Overhead latency is the interval measured from the time SageMaker receives the request until it returns a response to the client, minus the model latency. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "overhead_latency = plot_endpoint_invocation_metrics(endpoint_name, \"OverheadLatency\", \"Average\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "Finally, let us review the 4xx, 5xx, and model errors returned by the model serving container. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "invocation_4xx_errors = plot_endpoint_invocation_metrics(endpoint_name, \"Invocation4XXErrors\", \"Sum\")\n", "invocation_5xx_errors = plot_endpoint_invocation_metrics(endpoint_name, \"Invocation5XXErrors\", \"Sum\")\n", "invocation_model_errors = plot_endpoint_invocation_metrics(\n", " endpoint_name, \"InvocationModelErrors\", \"Sum\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "We can consider promoting the shadow model if we do not see any differences in 4xx and 5xx errors between the production and shadow variants. \n", "\n", "To promote the shadow model to production, create a new endpoint configuration with the current ShadowProductionVariant as the new ProductionVariant and remove the ShadowProductionVariant. 
This will remove the current ProductionVariant and promote the shadow variant to become the new production variant. As always, all SageMaker updates are orchestrated as blue/green deployments under the hood, and there is no loss of availability while performing the update. Optionally, you can leverage [Deployment Guardrails](https://docs.aws.amazon.com/sagemaker/latest/dg/deployment-guardrails.html) if you want to use all-at-once traffic shifting and auto rollbacks during your update." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "promote_ep_config_name = f\"PromoteShadow-EpConfig-{datetime.now():%Y-%m-%d-%H-%M-%S}\"\n", "\n", "create_endpoint_config_response = sm.create_endpoint_config(\n", " EndpointConfigName=promote_ep_config_name,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": shadow_variant_name,\n", " \"ModelName\": model_name2,\n", " \"InstanceType\": \"ml.m5.xlarge\",\n", " \"InitialInstanceCount\": 2,\n", " \"InitialVariantWeight\": 1.0,\n", " }\n", " ],\n", ")\n", "print(f\"Created EndpointConfig: {create_endpoint_config_response['EndpointConfigArn']}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "update_endpoint_api_response = sm.update_endpoint(\n", " EndpointName=endpoint_name,\n", " EndpointConfigName=promote_ep_config_name,\n", ")\n", "\n", "wait_for_endpoint_in_service(endpoint_name)\n", "\n", "sm.describe_endpoint(EndpointName=endpoint_name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "If you do not want to create multiple endpoint configurations and would rather have SageMaker manage the end-to-end workflow of creating, managing, and acting on the results of shadow tests, consider using the SageMaker Inference Experiment APIs/console experience. 
As stated earlier, these enable you to set up shadow tests for a predefined duration, monitor their progress through a live dashboard, choose cleanup options upon completion, and act on the results. To get started, please navigate to the 'Shadow Tests' section of the SageMaker Inference console. " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Cleanup" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "If you do not plan to use this endpoint further, you should delete the endpoint to avoid incurring additional charges and clean up other resources created in this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sm.delete_endpoint(EndpointName=endpoint_name)\n", "sm.delete_endpoint_config(EndpointConfigName=ep_config_name)\n", "sm.delete_endpoint_config(EndpointConfigName=promote_ep_config_name)\n", "sm.delete_model(ModelName=model_name)\n", "sm.delete_model(ModelName=model_name2)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This us-west-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/sagemaker-shadow-variant|shadow-console|Shadow_variant.ipynb)\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": "3.10.6" }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 4 }