{ "cells": [ { "cell_type": "markdown", "id": "dab554c6", "metadata": {}, "source": [ "# Deploy Stable Diffusion on SageMaker with Triton Business Logic Scripting (BLS)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "5640df9b", "metadata": {}, "source": [ "In this notebook we will take most of the [example](https://github.com/triton-inference-server/server/tree/main/docs/examples/stable_diffusion) to host Stable Diffusion on Triton Inference Server provided by NVIDIA and adapt it to SageMaker.\n", "\n", "[Business Logic Scripting (BLS)](https://github.com/triton-inference-server/python_backend#business-logic-scripting) is a Triton Inference Server feature that allows you to create complex inference logic, where loops, conditionals, data-dependent control flow and other custom logic needs to be intertwined with model execution. From within a Python script that runs on Triton's [Python backend](https://github.com/triton-inference-server/python_backend), you can run some of the required inference steps (light processing, even ML models that are not fit to be run on framework-specific backends), but also call other models hosted indepedently in the same server. This enables you to optimize some of the model component's execution performance (using TensorRT for example), while orchestrating the end-to-end inference flow with a comfortable Python interface.\n", "\n", "
\n", "Warning: You should run this notebook on a SageMaker Notebook Instance with access to the same GPU as the instance you will deploy your model to (g4dn is the one configured by default in this example). There are model optimization steps contained in this notebook that are GPU architecture-dependent.\n", " \u2b07\u2b07\u2b07\u2b07\u2b07 change in the next cell if required\n", "
\n", "\n", "------\n", "------" ] }, { "cell_type": "code", "execution_count": null, "id": "918ad235-0b86-40f1-b4bc-b390bfcb3dec", "metadata": {}, "outputs": [], "source": [ "# change this cell if you are running this notebook in a different instance type\n", "notebook_instance_type = 'ml.g4dn.xlarge'" ] }, { "cell_type": "markdown", "id": "fdbf35ff", "metadata": {}, "source": [ "### Part 1 - Installs and imports" ] }, { "cell_type": "code", "execution_count": null, "id": "69df6cd4", "metadata": {}, "outputs": [], "source": [ "!pip install nvidia-pyindex\n", "!pip install tritonclient[http]\n", "!pip install -U sagemaker pywidgets numpy PIL" ] }, { "cell_type": "code", "execution_count": null, "id": "44c48876", "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "import tritonclient.http as httpclient\n", "from tritonclient.utils import *\n", "import time\n", "from PIL import Image\n", "import numpy as np\n", "\n", "# variables\n", "s3_client = boto3.client(\"s3\")\n", "ts = time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "\n", "# sagemaker variables\n", "role = get_execution_role()\n", "sm_client = boto3.client(service_name=\"sagemaker\")\n", "runtime_sm_client = boto3.client(\"sagemaker-runtime\")\n", "sagemaker_session = sagemaker.Session(boto_session=boto3.Session())\n", "bucket = sagemaker_session.default_bucket()" ] }, { "cell_type": "markdown", "id": "95d58e0e", "metadata": {}, "source": [ "### Part 2 - Packaging a conda environment" ] }, { "cell_type": "markdown", "id": "8973a7d2", "metadata": {}, "source": [ "When using the Triton Python backend (which Business Logic Scripts run on), you can include your own environment and dependencies. The recommended way to do this is to use [conda pack](https://conda.github.io/conda-pack/) to generate a conda environment archive in `tar.gz` format, include it in your model repository, and point to it in the `config.pbtxt` file of python models that should use it, adding the snippet: \n", "\n", "```\n", "parameters: {\n", " key: \"EXECUTION_ENV_PATH\",\n", " value: {string_value: \"$$TRITON_MODEL_DIRECTORY/your_env.tar.gz\"}\n", "}\n", "\n", "```\n", "Let's create this file and save it to the pipeline model repo, which is our business logic \"model\"." ] }, { "cell_type": "code", "execution_count": null, "id": "5f523d03", "metadata": {}, "outputs": [], "source": [ "!bash conda_dependencies.sh" ] }, { "cell_type": "code", "execution_count": null, "id": "19ae03ba", "metadata": {}, "outputs": [], "source": [ "!cp sd_env.tar.gz model_repository/pipeline" ] }, { "cell_type": "markdown", "id": "a18183fc", "metadata": {}, "source": [ "---\n", "---" ] }, { "cell_type": "markdown", "id": "6d3af67c", "metadata": {}, "source": [ "### Part 3 - Model artifact creation" ] }, { "cell_type": "markdown", "id": "6345b907", "metadata": {}, "source": [ "One of the components of Stable Diffusion is a Variational Autoencoder (VAE); only the decoder block is used for inference. NVIDIA's example shows how to use [TensorRT](https://developer.nvidia.com/tensorrt) (an ML framework to accelerate models for inference) to compile and optimize this model, which helps decrease the end-to-end latency for each request. 
To make sure we use a TensorRT version and dependencies that are compatible with those in our Triton container, we compile the model using the corresponding version of NVIDIA's PyTorch container image.\n", "\n", "The `export.sh` file also saves the text encoder (another one of Stable Diffusion's components) in ONNX format. " ] }, { "cell_type": "code", "execution_count": null, "id": "3aaafa02", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "!docker run -it --gpus all -v ${PWD}:/mount nvcr.io/nvidia/pytorch:22.10-py3 /bin/bash /mount/export.sh --verbose | tee conversion.txt" ] }, { "cell_type": "markdown", "id": "25b96597", "metadata": {}, "source": [ "Note that the file names `model.plan` and `model.onnx` are required so that Triton's native backends recognize the models at startup." ] }, { "cell_type": "code", "execution_count": null, "id": "43341156", "metadata": {}, "outputs": [], "source": [ "# Place the models in the right model repositories\n", "! mv vae.plan model_repository/vae/1/model.plan\n", "! mv encoder.onnx model_repository/text_encoder/1/model.onnx\n", "! rm vae.onnx" ] }, { "cell_type": "markdown", "id": "d74d9f9a", "metadata": {}, "source": [ "Let's take a look at our logic script. If you're not familiar with the required script structure when using Triton's Python backend, check out the documentation [here](https://github.com/triton-inference-server/python_backend#usage).\n", "\n", "Notice that some of the required steps are run in the Python script itself, and some steps are offloaded to other models deployed to native backends (TensorRT and ONNX) using `triton_python_backend_utils.InferenceRequest()`. \n", "
\n", "\ud83d\udca1 You might notice that some model components are downloaded on initialization. If you prefer to include these in your model deployment artifact, you can save them beforehand under the model_repository/pipeline directory in this example and access their path using the args['model_repository'] that Triton passes to the initialize() method. An example of retrieving the path for a saved model artifact: f\"{args['model_repository']}/my_saved_model_dir/model.pt\"\n", "
" ] }, { "cell_type": "code", "execution_count": null, "id": "ddc666ea", "metadata": {}, "outputs": [], "source": [ "!pygmentize model_repository/pipeline/1/model.py" ] }, { "cell_type": "markdown", "id": "b0c5d92f", "metadata": {}, "source": [ "----\n", "----" ] }, { "cell_type": "markdown", "id": "7b63de28", "metadata": {}, "source": [ "### Part 4 - Local testing of Triton model repository" ] }, { "cell_type": "markdown", "id": "ab3e6815", "metadata": {}, "source": [ "Now you can test the model repository and validate it is working (all models load and BLS works). Let's run the Triton docker container locally and invoke the script to check this." ] }, { "cell_type": "code", "execution_count": null, "id": "9b971890", "metadata": {}, "outputs": [], "source": [ "repo_name = \"model_repository\"" ] }, { "cell_type": "markdown", "id": "7fce2bd8-376a-4b44-b2aa-8415b45205e7", "metadata": {}, "source": [ "We are running the Triton container in detached model with the `-d` flag so that it runs in the background. " ] }, { "cell_type": "code", "execution_count": null, "id": "4c92b181", "metadata": {}, "outputs": [], "source": [ "!docker run --gpus=all -d --shm-size=4G --rm -p8000:8000 -p8001:8001 -p8002:8002 -v$(pwd)/$repo_name:/model_repository nvcr.io/nvidia/tritonserver:22.10-py3 tritonserver --model-repository=/model_repository --exit-on-error=false\n", "time.sleep(90)" ] }, { "cell_type": "code", "execution_count": null, "id": "1b90d62b", "metadata": {}, "outputs": [], "source": [ "CONTAINER_ID=!docker container ls -q\n", "FIRST_CONTAINER_ID = CONTAINER_ID[0]" ] }, { "cell_type": "code", "execution_count": null, "id": "30bbf319", "metadata": {}, "outputs": [], "source": [ "!echo $FIRST_CONTAINER_ID" ] }, { "cell_type": "code", "execution_count": null, "id": "77fc4caf", "metadata": {}, "outputs": [], "source": [ "!docker logs $FIRST_CONTAINER_ID" ] }, { "cell_type": "markdown", "id": "c580605b", "metadata": {}, "source": [ "
\n", "Warning: Rerun the cell above to check the container logs until you verify that Triton has loaded all models successfully, otherwise inference request will fail.\n", "
" ] }, { "cell_type": "markdown", "id": "0dd938e2", "metadata": {}, "source": [ "#### Now we will invoke the script locally" ] }, { "cell_type": "markdown", "id": "147e1371", "metadata": {}, "source": [ "We will use Triton's HTTP client and its utility functions to send a request to `localhost:8000`, where the server is listening. We are sending text as binary data for input and receiving an array that we decode with numpy as output. Check out the code in `model_repository/pipeline/1/model.py` to understand how the input data is decoded and the output data returned, and check out more Triton Python backend [docs](https://github.com/triton-inference-server/python_backend) and [examples](https://github.com/triton-inference-server/python_backend/tree/main/examples) to understand how to handle other data types." ] }, { "cell_type": "code", "execution_count": null, "id": "a8655dda", "metadata": {}, "outputs": [], "source": [ "client = httpclient.InferenceServerClient(url=\"localhost:8000\")\n", "\n", "prompt = \"Pikachu in a detective trench coat, photorealistic, nikon\"\n", "text_obj = np.array([prompt], dtype=\"object\").reshape((-1, 1))\n", "\n", "input_text = httpclient.InferInput(\"prompt\", text_obj.shape, np_to_triton_dtype(text_obj.dtype))\n", "\n", "input_text.set_data_from_numpy(text_obj)\n", "\n", "output_img = httpclient.InferRequestedOutput(\"generated_image\")\n", "\n", "start = time.time()\n", "query_response = client.infer(model_name=\"pipeline\", inputs=[input_text], outputs=[output_img])\n", "print(f\"took {time.time()-start} seconds\")\n", "\n", "image = query_response.as_numpy(\"generated_image\")\n", "im = Image.fromarray(np.squeeze(image))\n", "im.save(\"generated_image.jpg\")" ] }, { "cell_type": "markdown", "id": "45b05c7a-d37c-441d-9a00-113df7ad8b5e", "metadata": {}, "source": [ "Let's stop the container that is running locally so we don't take up notebook resources." ] }, { "cell_type": "code", "execution_count": null, "id": "aa3ba306-35e8-43f6-aa18-324de79ed87a", "metadata": {}, "outputs": [], "source": [ "!docker kill $FIRST_CONTAINER_ID" ] }, { "cell_type": "markdown", "id": "f450772c-0561-4df1-857b-787c9dfdf889", "metadata": {}, "source": [ "----\n", "----\n", "### Part 5 - Deploy to SageMaker Real-Time Endpoint" ] }, { "cell_type": "markdown", "id": "393bf7e3-e648-4625-94d4-e0284a87e58e", "metadata": {}, "source": [ "We first package our Triton model repository in a `tar.gz` file." ] }, { "cell_type": "code", "execution_count": null, "id": "f11fe7c1-82fa-4d60-a86b-c037a6228900", "metadata": {}, "outputs": [], "source": [ "!rm -rf `find -type d -name .ipynb_checkpoints`" ] }, { "cell_type": "code", "execution_count": null, "id": "de3ee49f", "metadata": {}, "outputs": [], "source": [ "model_file_name = \"stable-diff-bls.tar.gz\"\n", "prefix = \"stable-diffusion-bls\"\n", "!tar -C model_repository/ -czf $model_file_name .\n", "model_data_url = sagemaker_session.upload_data(path=model_file_name, key_prefix=prefix)" ] }, { "cell_type": "markdown", "id": "53fec5eb-4a81-4b28-bb47-1228d65d2dde", "metadata": {}, "source": [ "Check out the content of tar.gz file, make sure all folders are on the root directory of file." ] }, { "cell_type": "code", "execution_count": null, "id": "ce3edb08-6eb5-4f62-8b94-05ce367da5f2", "metadata": {}, "outputs": [], "source": [ "!tar -tf $model_file_name" ] }, { "cell_type": "markdown", "id": "25c3bd56-2c6c-4496-8012-f287a257da5e", "metadata": {}, "source": [ "Get the correct URI for the Triton SageMaker container image. 
Check out all the available Deep Learning Container images that AWS maintains [here](https://github.com/aws/deep-learning-containers/blob/master/available_images.md). " ] }, { "cell_type": "code", "execution_count": null, "id": "b9b707c2-f78e-452a-9e99-4860232bd76b", "metadata": {}, "outputs": [], "source": [ "# account mapping for SageMaker Triton Image\n", "account_id_map = {\n", " \"us-east-1\": \"785573368785\",\n", " \"us-east-2\": \"007439368137\",\n", " \"us-west-1\": \"710691900526\",\n", " \"us-west-2\": \"301217895009\",\n", " \"eu-west-1\": \"802834080501\",\n", " \"eu-west-2\": \"205493899709\",\n", " \"eu-west-3\": \"254080097072\",\n", " \"eu-north-1\": \"601324751636\",\n", " \"eu-south-1\": \"966458181534\",\n", " \"eu-central-1\": \"746233611703\",\n", " \"ap-east-1\": \"110948597952\",\n", " \"ap-south-1\": \"763008648453\",\n", " \"ap-northeast-1\": \"941853720454\",\n", " \"ap-northeast-2\": \"151534178276\",\n", " \"ap-southeast-1\": \"324986816169\",\n", " \"ap-southeast-2\": \"355873309152\",\n", " \"cn-northwest-1\": \"474822919863\",\n", " \"cn-north-1\": \"472730292857\",\n", " \"sa-east-1\": \"756306329178\",\n", " \"ca-central-1\": \"464438896020\",\n", " \"me-south-1\": \"836785723513\",\n", " \"af-south-1\": \"774647643957\",\n", "}\n", "\n", "region = boto3.Session().region_name\n", "if region not in account_id_map.keys():\n", " raise ValueError(\"UNSUPPORTED REGION\")\n", "\n", "base = \"amazonaws.com.cn\" if region.startswith(\"cn-\") else \"amazonaws.com\"\n", "mme_triton_image_uri = (\n", " \"{account_id}.dkr.ecr.{region}.{base}/sagemaker-tritonserver:22.10-py3\".format(\n", " account_id=account_id_map[region], region=region, base=base\n", " )\n", ")" ] }, { "cell_type": "markdown", "id": "7d4902de-7501-4699-ae69-58197ef46f32", "metadata": {}, "source": [ "Create a SageMaker Model definition.\n", "
\n", "\ud83d\udca1 The next two cells are very important. To make sure that the 2 model components (text encoder and VAE) called by the BLS are loaded at endpoint startup before we ever call the pipeline, we use the \"SAGEMAKER_TRITON_LOG_INFO\" environment variable. This should be a boolean var meant to define if verbose logs are emitted by Triton or not, but since it is appended to the end of the Triton Server launch command that runs at startup, we can add --load-model=model-name calls in front of the boolean to preload both models. \n", "
" ] }, { "cell_type": "code", "execution_count": null, "id": "6687b3e9-c8a6-4ae5-b0ae-3e937b505a9a", "metadata": {}, "outputs": [], "source": [ "preload_model_argument = \"false --load-model=text_encoder --load-model=vae\"" ] }, { "cell_type": "code", "execution_count": null, "id": "5e328daa-3d5c-4c66-a6cc-7c80c4274dba", "metadata": {}, "outputs": [], "source": [ "ts = time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "\n", "container = {\n", " \"Image\": mme_triton_image_uri,\n", " \"ModelDataUrl\": model_data_url,\n", " \"Environment\": {\n", " \"SAGEMAKER_TRITON_DEFAULT_MODEL_NAME\": \"pipeline\",\n", " \"SAGEMAKER_TRITON_LOG_INFO\": preload_model_argument,\n", " },\n", "}" ] }, { "cell_type": "code", "execution_count": null, "id": "0cbcc545-a421-438c-b01f-e6420a504a0f", "metadata": {}, "outputs": [], "source": [ "sm_model_name = f\"{prefix}-mdl-{ts}\"\n", "\n", "create_model_response = sm_client.create_model(\n", " ModelName=sm_model_name, ExecutionRoleArn=role, PrimaryContainer=container\n", ")\n", "\n", "print(\"Model Arn: \" + create_model_response[\"ModelArn\"])" ] }, { "cell_type": "markdown", "id": "9cb4c11a-4f8a-409e-832e-f27b8be07bb7", "metadata": {}, "source": [ "Create a SageMaker endpoint configuration." ] }, { "cell_type": "code", "execution_count": null, "id": "75c9c1a6-92de-44d6-8b0b-7c6f388da957", "metadata": {}, "outputs": [], "source": [ "endpoint_config_name = f\"{prefix}-epc-{ts}\"\n", "\n", "create_endpoint_config_response = sm_client.create_endpoint_config(\n", " EndpointConfigName=endpoint_config_name,\n", " ProductionVariants=[\n", " {\n", " \"InstanceType\": notebook_instance_type,\n", " \"InitialVariantWeight\": 1,\n", " \"InitialInstanceCount\": 1,\n", " \"ModelName\": sm_model_name,\n", " \"VariantName\": \"AllTraffic\",\n", " }\n", " ],\n", ")\n", "\n", "print(\"Endpoint Config Arn: \" + create_endpoint_config_response[\"EndpointConfigArn\"])" ] }, { "cell_type": "markdown", "id": "2f09a802-706e-49f4-ae36-cb4e32960b6f", "metadata": {}, "source": [ "Create the endpoint, and wait for it to be up and running." 
] }, { "cell_type": "code", "execution_count": null, "id": "39066acd-e31e-433d-9200-65f7bb3b6975", "metadata": {}, "outputs": [], "source": [ "endpoint_name = f\"{prefix}-ep-{ts}\"\n", "\n", "create_endpoint_response = sm_client.create_endpoint(\n", " EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name\n", ")\n", "\n", "print(\"Endpoint Arn: \" + create_endpoint_response[\"EndpointArn\"])" ] }, { "cell_type": "code", "execution_count": null, "id": "abfee5c2-049d-4c96-89ea-725f99003bfe", "metadata": {}, "outputs": [], "source": [ "sm_clientsm_client.describe_endpoint(EndpointName=endpoint_name)\n", "status = resp[\"EndpointStatus\"]\n", "print(\"Status: \" + status)\n", "\n", "while status == \"Creating\":\n", " time.sleep(60)\n", " resp = sm_client.describe_endpoint(EndpointName=endpoint_name)\n", " status = resp[\"EndpointStatus\"]\n", " print(\"Status: \" + status)\n", "\n", "print(\"Arn: \" + resp[\"EndpointArn\"])\n", "print(\"Status: \" + status)" ] }, { "cell_type": "markdown", "id": "aff32048-bf34-4b27-abe7-580929bdbe39", "metadata": {}, "source": [ "#### Invoke model " ] }, { "cell_type": "code", "execution_count": null, "id": "359093f6", "metadata": {}, "outputs": [], "source": [ "prompt = \"Smiling person\"\n", "inputs = []\n", "outputs = []\n", "\n", "text_obj = np.array([prompt], dtype=\"object\").reshape((-1, 1))\n", "\n", "inputs.append(httpclient.InferInput(\"prompt\", text_obj.shape, np_to_triton_dtype(text_obj.dtype)))\n", "inputs[0].set_data_from_numpy(text_obj)\n", "\n", "\n", "outputs.append(httpclient.InferRequestedOutput(\"generated_image\"))" ] }, { "cell_type": "markdown", "id": "c5980bb7-5ad5-4eae-9dbd-229d0bd37806", "metadata": {}, "source": [ "Since we are using the SageMaker Runtime client to send an HTTP request to the endpoint now, we use Triton's `generate_request_body` method to create the right [request format](https://github.com/triton-inference-server/server/tree/main/docs/protocol) for us." ] }, { "cell_type": "code", "execution_count": null, "id": "c50b4011-5d1b-4256-a0aa-1d89082a3941", "metadata": {}, "outputs": [], "source": [ "request_body, header_length = httpclient.InferenceServerClient.generate_request_body(\n", " inputs, outputs=outputs\n", ")\n", "\n", "print(request_body)" ] }, { "cell_type": "markdown", "id": "17e3e283-9f6e-4054-8f9c-f1bf0313b510", "metadata": {}, "source": [ "We are sending our request in binary format for lower inference. \n", "\n", "With the binary+json format, we have to specify the length of the request metadata in the header to allow Triton to correctly parse the binary payload. This is done using a custom Content-Type header, which is different from using an `Inference-Header-Content-Length` header on a standalone Triton server because custom headers aren\u2019t allowed in SageMaker." 
] }, { "cell_type": "code", "execution_count": null, "id": "cc387ba1-10db-498d-b395-5d69f2b412a2", "metadata": {}, "outputs": [], "source": [ "response = runtime_sm_client.invoke_endpoint(\n", " EndpointName=endpoint_name,\n", " ContentType=\"application/vnd.sagemaker-triton.binary+json;json-header-size={}\".format(\n", " header_length\n", " ),\n", " Body=request_body,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "028a1931-fc9a-470f-b153-d92f86139d9d", "metadata": {}, "outputs": [], "source": [ "header_length_prefix = \"application/vnd.sagemaker-triton.binary+json;json-header-size=\"\n", "header_length_str = response[\"ContentType\"][len(header_length_prefix) :]\n", "\n", "# Read response body\n", "result = httpclient.InferenceServerClient.parse_response_body(\n", " response[\"Body\"].read(), header_length=int(header_length_str)\n", ")\n", "\n", "image_array = result.as_numpy(\"generated_image\")\n", "image = Image.fromarray(np.squeeze(image_array))" ] }, { "cell_type": "code", "execution_count": null, "id": "1afdda79-5943-41d0-bfbe-a43ba17bed4c", "metadata": {}, "outputs": [], "source": [ "image" ] }, { "cell_type": "markdown", "id": "839fef3c-3d7f-40f0-b58c-fb5ca344f995", "metadata": {}, "source": [ "----\n", "----\n", "### Part 6 - Clean up" ] }, { "cell_type": "code", "execution_count": null, "id": "2688904b-426a-4d3b-8e7a-30701f0f752a", "metadata": {}, "outputs": [], "source": [ "sm_client.delete_endpoint(EndpointName=endpoint_name)\n", "sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)\n", "sm_client.delete_model(ModelName=sm_model_name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/sagemaker-triton|business_logic_scripting|stable_diffusion|sm-triton-bls-stablediff.ipynb)\n" ] } ], "metadata": { "kernelspec": { "display_name": "conda_pytorch_p39", "language": "python", "name": "conda_pytorch_p39" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 5 }