{ "cells": [ { "cell_type": "markdown", "id": "4d71f039", "metadata": {}, "source": [ "# Serve large models on SageMaker with DeepSpeed Container. In this notebook we show Bloom-176B model hosting\n" ] }, { "cell_type": "markdown", "id": "e2aefa6c", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "3efc33da", "metadata": {}, "source": [ "\n", "In this notebook, we explore how to host a large language model on SageMaker using the latest container launched using DeepSpeed and DJL. DJL provides for the serving framework while DeepSpeed is the key sharding library we leverage to enable hosting of large models. We use DJLServing as the model serving solution in this example. DJLServing is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming language agnostic. To learn more about DJL and DJLServing, you can refer to our recent blog post (https://aws.amazon.com/blogs/machine-learning/deploy-large-models-on-amazon-sagemaker-using-djlserving-and-deepspeed-model-parallel-inference/).\n", "\n", "Language models have recently exploded in both size and popularity. In 2018, BERT-large entered the scene and, with its 340M parameters and novel transformer architecture, set the standard on NLP task accuracy. Within just a few years, state-of-the-art NLP model size has grown by more than 500x with models such as OpenAI’s 175 billion parameter GPT-3 and similarly sized open source Bloom 176B raising the bar on NLP accuracy. This increase in the number of parameters is driven by the simple and empirically-demonstrated positive relationship between model size and accuracy: more is better. With easy access from models zoos such as Hugging Face and improved accuracy in NLP tasks such as classification and text generation, practitioners are increasingly reaching for these large models. However, deploying them can be a challenge because of their size.\n", "\n", "Model parallelism can help deploy large models that would normally be too large for a single GPU. With model parallelism, we partition and distribute a model across multiple GPUs. Each GPU holds a different part of the model, resolving the memory capacity issue for the largest deep learning models with billions of parameters. This notebook uses tensor parallelism techniques which allow GPUs to work simultaneously on the same layer of a model and achieve low latency inference relative to a pipeline parallel solution.\n", "\n", "SageMaker has rolled out DeepSpeed container which now provides users with the ability to leverage the managed serving capabilities and help to provide the un-differentiated heavy lifting.\n", "\n", "In this notebook, we deploy the open source Bloom 176B quantized model across GPU's on a ml.p4d.24xlarge instance. DeepSpeed is used for tensor parallelism inference while DJLServing handles inference requests and the distributed workers. 
{ "cell_type": "markdown", "id": "38277875", "metadata": {}, "source": [ "## License agreement\n", "View the license information (https://huggingface.co/spaces/bigscience/license) for this model, including the use-based restrictions in Section 5, before using the model. \n" ] }, { "cell_type": "code", "execution_count": null, "id": "73e34247", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Install the boto3 library, which we use to create the model and run inference workloads\n", "%pip install -Uqq boto3 awscli sagemaker" ] }, { "cell_type": "markdown", "id": "673241dd", "metadata": {}, "source": [ "## Optional Section to Download Model from Hugging Face Hub\n", "\n", "Use this section if you are interested in downloading the model directly from the Hugging Face Hub and storing it in your own S3 bucket. In that case, change the variable `install_model_locally` to `True`.\n", "\n", "**However, this notebook currently leverages the model stored in a public AWS S3 location for ease of use. So you can skip this step.**\n", "\n", "The step below, which downloads the model and then uploads it to S3, can take several minutes since the model is extremely large." ] }, { "cell_type": "code", "execution_count": null, "id": "2cef7476", "metadata": { "tags": [] }, "outputs": [], "source": [ "install_model_locally = False" ] }, { "cell_type": "code", "execution_count": null, "id": "8c1b226b", "metadata": { "tags": [] }, "outputs": [], "source": [ "if install_model_locally:\n", "    %pip install huggingface-hub -Uqq" ] }, { "cell_type": "code", "execution_count": null, "id": "7c0123be", "metadata": { "tags": [] }, "outputs": [], "source": [ "if install_model_locally:\n", "\n", "    from huggingface_hub import snapshot_download\n", "    from pathlib import Path\n", "\n", "    # - This will download the model into the ./model directory wherever the jupyter file is running\n", "    local_model_path = Path(\"./model\")\n", "    local_model_path.mkdir(exist_ok=True)\n", "    model_name = \"microsoft/bloom-deepspeed-inference-int8\"\n", "    commit_hash = \"aa00a6626f6484a2eef68e06d1e089e4e32aa571\"\n", "\n", "    # - Leverage the snapshot library to download the model, since the model is stored in a repository using LFS\n", "    snapshot_download(repo_id=model_name, revision=commit_hash, cache_dir=local_model_path)\n", "\n", "    # - Upload to S3 using the AWS CLI. Note: `bucket` is defined in the setup\n", "    # - section further below; run that cell first if you take this optional path.\n", "    s3_model_prefix = \"hf-large-model-djl-ds/model\"  # folder where the model checkpoint will go\n", "    model_snapshot_path = list(local_model_path.glob(\"**/snapshots/*\"))[0]\n", "\n", "    !aws s3 cp --recursive {model_snapshot_path} s3://{bucket}/{s3_model_prefix}" ] }, { "cell_type": "markdown", "id": "e4d802f3", "metadata": {}, "source": [ "## Create SageMaker compatible Model artifact and Upload Model to S3\n", "\n", "SageMaker needs the model to be in tarball format. The tarball has the following structure (here the directory is named `code_bloom176`)\n", "\n", "```\n", "code_bloom176\n", "├── model.py\n", "└── serving.properties\n", "```\n", "\n", "- `model.py` is the key file which handles any requests for serving. It is also responsible for loading the model from S3.\n", "- `serving.properties` is the configuration file that can be used to configure the model server.\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "c0d6e489", "metadata": { "tags": [] }, "outputs": [], "source": [ "import sagemaker\n", "from sagemaker import image_uris\n", "import boto3\n", "import os\n", "import time\n", "import json\n", "from pathlib import Path" ] }, { "cell_type": "markdown", "id": "d23af4e3", "metadata": {}, "source": [ "#### Create the variables required to create the endpoint and initialize them. We leverage boto3 for this" ] }, { "cell_type": "code", "execution_count": null, "id": "b6270cbc", "metadata": { "tags": [] }, "outputs": [], "source": [ "role = sagemaker.get_execution_role()  # execution role for the endpoint\n", "sess = sagemaker.session.Session()  # sagemaker session for interacting with different AWS APIs\n", "bucket = sess.default_bucket()  # bucket to house artifacts\n", "model_bucket = f\"sagemaker-example-files-prod-{sess.boto_region_name}\"\n", "s3_code_prefix = \"hf-large-model-djl-ds/code\"  # folder within bucket where the code artifact will go\n", "s3_model_prefix = \"models/bloom-176B/raw_model_microsoft/\"  # folder where the model checkpoint is stored\n", "# S3 URI: s3://sagemaker-example-files-prod-{region}/models/bloom-176B/raw_model_microsoft/\n", "\n", "region = sess._region_name\n", "account_id = sess.account_id()\n", "\n", "s3_client = boto3.client(\"s3\")\n", "sm_client = boto3.client(\"sagemaker\")\n", "smr_client = boto3.client(\"sagemaker-runtime\")" ] }, { "cell_type": "markdown", "id": "1c57a0a2", "metadata": {}, "source": [ "**Image URI of the DJL Container to be used**" ] }, { "cell_type": "code", "execution_count": null, "id": "c005e92d-ecd7-4de2-a603-9b556dccb2eb", "metadata": { "tags": [] }, "outputs": [], "source": [ "inference_image_uri = image_uris.retrieve(\n", "    framework=\"djl-deepspeed\", region=sess.boto_session.region_name, version=\"0.21.0\"\n", ")\n", "print(f\"Image going to be used is ---- > {inference_image_uri}\")" ] }, { "cell_type": "markdown", "id": "713f17db", "metadata": {}, "source": [ "**Create the tarball and then upload it to the S3 location**" ] }, { "cell_type": "code", "execution_count": null, "id": "c7118720", "metadata": { "tags": [] }, "outputs": [], "source": [ "!mkdir -p code_bloom176" ] }, { "cell_type": "code", "execution_count": null, "id": "3a7c265d", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile code_bloom176/model.py\n", "from djl_python import Input, Output\n", "import deepspeed\n", "import torch\n", "import logging\n", "import math\n", "import os\n", "from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer\n", "\n", "model = None\n", "tokenizer = None\n", "generator = None\n", "\n", "\n", "def load_model(properties):\n", "    # number of partitions\n", "    tensor_parallel = properties[\"tensor_parallel_degree\"]\n", "\n", "    # location on the hosting instance where the model checkpoints are downloaded (from the s3url)\n", "    model_location = properties[\"model_id\"]\n", "\n", "    logging.info(f\"Loading model in {model_location}\")\n", "\n", "    tokenizer = AutoTokenizer.from_pretrained(model_location)\n", "\n", "    # Construct the model with fake meta tensors; weights are materialized later during the ds-inference checkpoint load\n", "    with deepspeed.OnDevice(dtype=torch.float16, device=\"meta\"):\n",
"        model = AutoModelForCausalLM.from_config(\n", "            AutoConfig.from_pretrained(model_location), torch_dtype=torch.bfloat16\n", "        )\n", "\n", "    ### DeepSpeed-Inference loading\n", "    logging.info(f\"Starting DeepSpeed init with TP={tensor_parallel}\")\n", "\n", "    # tensor parallel presharded repos come with their own checkpoint config file\n", "    model = deepspeed.init_inference(\n", "        model,\n", "        mp_size=tensor_parallel,\n", "        dtype=torch.int8,\n", "        replace_method=\"auto\",\n", "        replace_with_kernel_inject=True,\n", "        base_dir=model_location,\n", "        checkpoint=os.path.join(model_location, \"ds_inference_config.json\"),\n", "    )\n", "    model = model.module\n", "    return model, tokenizer\n", "\n", "\n", "def run_inference(model, tokenizer, data, params):\n", "    generate_kwargs = params\n", "    tokenizer.pad_token = tokenizer.eos_token\n", "    input_tokens = tokenizer.batch_encode_plus(data, return_tensors=\"pt\", padding=True)\n", "    # move all input tensors to this worker's local GPU\n", "    for t in input_tokens:\n", "        if torch.is_tensor(input_tokens[t]):\n", "            input_tokens[t] = input_tokens[t].to(torch.cuda.current_device())\n", "    outputs = model.generate(**input_tokens, **generate_kwargs)\n", "    return tokenizer.batch_decode(outputs, skip_special_tokens=True)\n", "\n", "\n", "def handle(inputs: Input):\n", "    \"\"\"\n", "    inputs: Contains the configurations from serving.properties\n", "    \"\"\"\n", "    global model, tokenizer\n", "\n", "    if not model:\n", "        model, tokenizer = load_model(inputs.get_properties())\n", "\n", "    if inputs.is_empty():\n", "        # The model server makes an empty call to warm up the model on startup\n", "        return None\n", "\n", "    data = inputs.get_as_json()\n", "\n", "    input_sentences = data[\"inputs\"]\n", "    params = data[\"parameters\"]\n", "\n", "    outputs = run_inference(model, tokenizer, input_sentences, params)\n", "    result = {\"outputs\": outputs}\n", "    return Output().add_as_json(result)" ] }, { "cell_type": "markdown", "id": "118fcf9b", "metadata": {}, "source": [ "#### serving.properties has an engine parameter which tells the DJL model server to use the DeepSpeed engine to load the model" ] }, { "cell_type": "markdown", "id": "299408cc-0767-4857-9470-f9d50e5fbb34", "metadata": {}, "source": [ "Here is a list of settings that we use in this configuration file:\n", "\n", "- `engine`: The engine for DJL to use. In this case, we set it to DeepSpeed.\n", "- `option.entryPoint`: The entrypoint Python file or module. This should align with the engine that is being used.\n", "- `option.tensor_parallel_degree`: The number of GPUs to shard the model across. Here we use all 8 GPUs of the instance.\n", "- `option.s3url`: Set this to the URI of the Amazon S3 bucket that contains the model. When this is set, the container leverages s5cmd (https://github.com/peak/s5cmd) to download the model from S3, which offers a very fast download speed and is hence extremely useful when downloading large models like this one.\n", "\n", "The container downloads the model into /tmp on the container because SageMaker maps /tmp to the Amazon Elastic Block Store (Amazon EBS) volume that is mounted when we specify the endpoint creation parameter VolumeSizeInGB. For instances like the ml.p4d.24xlarge, which come with a pre-configured local volume, we can continue to leverage /tmp on the container; the size of this mount is large enough to hold the model.\n", "\n", "For more details on the configuration options and an exhaustive list, you can refer to the documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-configuration.html" ] },
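{ "cell_type": "markdown", "id": "5e6f7a8b", "metadata": {}, "source": [ "To make the wiring concrete: DJLServing parses serving.properties and passes the options to `load_model` in model.py as a properties dictionary. The cell below is an illustrative sketch of that contract, not something the deployment needs. The literal dictionary is ours; at runtime DJLServing constructs the real one, with `model_id` pointing at the local path where the contents of `option.s3url` were downloaded." ] }, { "cell_type": "code", "execution_count": null, "id": "6f7a8b9c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Illustrative only: mimic the properties dict that our load_model() in model.py reads.\n", "# At runtime DJLServing builds this from serving.properties; the path below is hypothetical.\n", "example_properties = {\n", "    \"tensor_parallel_degree\": 8,\n", "    \"model_id\": \"/tmp/model/bloom-176B\",  # hypothetical local download location\n", "}\n", "\n", "tensor_parallel = example_properties[\"tensor_parallel_degree\"]\n", "model_location = example_properties[\"model_id\"]\n", "print(f\"load_model would start DeepSpeed with TP={tensor_parallel} from {model_location}\")" ] },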
{ "cell_type": "code", "execution_count": null, "id": "fd957e78", "metadata": { "tags": [] }, "outputs": [], "source": [ "props = f\"\"\"\n", "engine = DeepSpeed\n", "option.tensor_parallel_degree = 8\n", "option.s3url = s3://sagemaker-example-files-prod-{sess.boto_region_name}/models/bloom-176B/raw_model_microsoft/\n", "\"\"\"\n", "# open in write mode so that re-running this cell does not append duplicate entries\n", "print(props, file=open(\"code_bloom176/serving.properties\", \"w\"))" ] }, { "cell_type": "code", "execution_count": null, "id": "af89a84c", "metadata": { "tags": [] }, "outputs": [], "source": [ "!rm -f model.tar.gz\n", "!tar czvf model.tar.gz code_bloom176" ] }, { "cell_type": "code", "execution_count": null, "id": "ba85b80f", "metadata": { "tags": [] }, "outputs": [], "source": [ "s3_code_artifact = sess.upload_data(\"model.tar.gz\", bucket, s3_code_prefix)\n", "print(f\"S3 Code or Model tar ball uploaded to --- > {s3_code_artifact}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "a9811501", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(f\"S3 Model Prefix where the model files are -- > {s3_model_prefix}\")\n", "print(f\"S3 Model Bucket is -- > {model_bucket}\")" ] }, { "cell_type": "markdown", "id": "1edcfa93", "metadata": {}, "source": [ "### Optional: use VpcConfig when creating the endpoint\n", "\n", "For more details, you can refer to https://docs.aws.amazon.com/sagemaker/latest/dg/host-vpc.html\n", "\n", "The cell below is just an example of how to extract the security group and subnet information needed for the configuration." ] }, { "cell_type": "code", "execution_count": null, "id": "1cb187c1", "metadata": { "tags": [] }, "outputs": [], "source": [ "!aws ec2 describe-security-groups --filter Name=vpc-id,Values= | python3 -c \"import sys, json; print(json.load(sys.stdin)['SecurityGroups'])\"" ] }, { "cell_type": "code", "execution_count": null, "id": "c10ee85c", "metadata": { "tags": [] }, "outputs": [], "source": [ "# - provide networking configs if needed.\n", "security_group_ids = []  # add the security group ids\n", "subnets = []  # add the subnet ids for this vpc\n", "privateVpcConfig = {\"SecurityGroupIds\": security_group_ids, \"Subnets\": subnets}\n", "print(privateVpcConfig)" ] }, { "cell_type": "markdown", "id": "b361bfd3", "metadata": {}, "source": [ "### To create the endpoint, the steps are:\n", "\n", "1. Create the model using the image container and the model tarball uploaded earlier\n", "2. Create the endpoint config using the following key parameters\n", "\n", "    a) InstanceType is ml.p4d.24xlarge\n", "    \n", "    b) ModelDataDownloadTimeoutInSeconds is 2400, which is needed to ensure the model downloads from S3 successfully\n", "    \n", "    c) ContainerStartupHealthCheckTimeoutInSeconds is 2400 to ensure the health check starts after the model is ready\n", "    \n", "3. Create the endpoint using the endpoint config created above\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "b0ca7288", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.utils import name_from_base\n", "\n", "model_name = name_from_base(\"bloom-djl-ds\")\n", "print(model_name)\n", "\n", "create_model_response = sm_client.create_model(\n", "    ModelName=model_name,\n", "    ExecutionRoleArn=role,\n", "    PrimaryContainer={\n", "        \"Image\": inference_image_uri,\n", "        \"ModelDataUrl\": s3_code_artifact,\n", "    },\n", "    # Uncomment if providing networking configs\n", "    # VpcConfig=privateVpcConfig\n", ")\n", "model_arn = create_model_response[\"ModelArn\"]\n", "\n", "print(f\"Created Model: {model_arn}\")" ] }, { "cell_type": "markdown", "id": "56b1db8e", "metadata": {}, "source": [ "VolumeSizeInGB has been commented out. Use this parameter for instance types that support EBS volume mounts. The instance we are using comes with pre-configured local storage and does not support additional volume mounts." ] }, { "cell_type": "code", "execution_count": null, "id": "4ae0ab61", "metadata": { "tags": [] }, "outputs": [], "source": [ "endpoint_config_name = f\"{model_name}-config\"\n", "endpoint_name = f\"{model_name}-endpoint\"\n", "\n", "endpoint_config_response = sm_client.create_endpoint_config(\n", "    EndpointConfigName=endpoint_config_name,\n", "    ProductionVariants=[\n", "        {\n", "            \"VariantName\": \"variant1\",\n", "            \"ModelName\": model_name,\n", "            \"InstanceType\": \"ml.p4d.24xlarge\",\n", "            \"InitialInstanceCount\": 1,\n", "            # \"VolumeSizeInGB\": 400,\n", "            \"ModelDataDownloadTimeoutInSeconds\": 2400,\n", "            \"ContainerStartupHealthCheckTimeoutInSeconds\": 2400,\n", "        },\n", "    ],\n", ")\n", "endpoint_config_response" ] }, { "cell_type": "code", "execution_count": null, "id": "68ab0e08", "metadata": { "tags": [] }, "outputs": [], "source": [ "create_endpoint_response = sm_client.create_endpoint(\n", "    EndpointName=f\"{endpoint_name}\", EndpointConfigName=endpoint_config_name\n", ")\n", "print(f\"Created Endpoint: {create_endpoint_response['EndpointArn']}\")" ] }, { "cell_type": "markdown", "id": "f90fed10", "metadata": {}, "source": [ "#### Wait for the endpoint to be created. This can take a few minutes; please be patient.\n", "While that happens, let us look at the critical areas of the helper files we are using to load the model:\n", "1. model.py, to see the model downloading mechanism\n", "2. serving.properties, to see the environment-related properties" ] }, { "cell_type": "code", "execution_count": null, "id": "87e846f1", "metadata": { "tags": [] }, "outputs": [], "source": [ "# This is the code snippet which is responsible for loading the model from S3\n", "! sed -n '40,60p' code_bloom176/model.py" ] }, { "cell_type": "code", "execution_count": null, "id": "3206c328", "metadata": { "tags": [] }, "outputs": [], "source": [ "# This is the code snippet which shows the properties being used to customize the runtime\n", "! sed -n '1,3p' code_bloom176/serving.properties" ] },
{ "cell_type": "code", "execution_count": null, "id": "2dd69db1", "metadata": { "tags": [] }, "outputs": [], "source": [ "import time\n", "\n", "resp = sm_client.describe_endpoint(EndpointName=endpoint_name)\n", "status = resp[\"EndpointStatus\"]\n", "print(\"Status: \" + status)\n", "\n", "while status == \"Creating\":\n", "    time.sleep(60)\n", "    resp = sm_client.describe_endpoint(EndpointName=endpoint_name)\n", "    status = resp[\"EndpointStatus\"]\n", "    print(\"Status: \" + status)\n", "\n", "print(\"Arn: \" + resp[\"EndpointArn\"])\n", "print(\"Status: \" + status)" ] }, { "cell_type": "markdown", "id": "49699f5a", "metadata": {}, "source": [ "#### Leverage the Boto3 API to invoke the endpoint. \n", "\n", "This is a generative model, so we pass in text (specified in the 'inputs' field of the JSON payload) as a prompt, and the model completes the sentence and returns the results. More details on these parameters can be found at https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task. Some quick explanations are below:\n", "1. temperature -- > The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, and 100 means uniform probability\n", "2. max_new_tokens -- > The number of new tokens to be generated. More tokens will increase the prediction time\n", "3. num_beams -- > Beam search keeps track of the n most likely word sequences\n" ] }, { "cell_type": "code", "execution_count": null, "id": "cf9ce8fc", "metadata": {}, "outputs": [], "source": [ "%%time\n", "smr_client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    Body=json.dumps(\n", "        {\n", "            \"inputs\": [\"Amazon.com is the best \"],\n", "            \"parameters\": {\n", "                \"min_length\": 5,\n", "                \"max_new_tokens\": 100,\n", "                \"temperature\": 0.8,\n", "                \"num_beams\": 5,\n", "                \"no_repeat_ngram_size\": 2,\n", "            },\n", "        }\n", "    ),\n", "    ContentType=\"application/json\",\n", ")[\"Body\"].read().decode(\"utf8\")" ] }, { "cell_type": "markdown", "id": "e79c4f43", "metadata": {}, "source": [ "#### With do_sample set to false, token generation is greedy" ] }, { "cell_type": "code", "execution_count": null, "id": "509ca788", "metadata": {}, "outputs": [], "source": [ "%%time\n", "# -- Greedy generation\n", "smr_client.invoke_endpoint(\n", "    EndpointName=endpoint_name,\n", "    Body=json.dumps(\n", "        {\n", "            \"inputs\": [\"Amazon.com is the best \", \"Large Models are the way to go\"],\n", "            \"parameters\": {\n", "                \"min_length\": 5,\n", "                \"max_new_tokens\": 10,\n", "                \"do_sample\": False,\n", "                \"early_stopping\": True,\n", "            },\n", "            \"padding\": True,\n", "        }\n", "    ),\n", "    ContentType=\"application/json\",\n", ")[\"Body\"].read().decode(\"utf8\")" ] },
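{ "cell_type": "markdown", "id": "3c4d5e6f", "metadata": {}, "source": [ "For repeated experimentation, the invocation boilerplate above can be wrapped in a small helper. This is a minimal sketch (the function name and defaults are our own, not part of the container's API): it posts a list of prompts plus generation parameters and returns the decoded JSON response." ] }, { "cell_type": "code", "execution_count": null, "id": "4d5e6f7a", "metadata": { "tags": [] }, "outputs": [], "source": [ "def generate_text(prompts, parameters, endpoint=endpoint_name):\n", "    \"\"\"Invoke the SageMaker endpoint with a list of prompts and generation parameters.\"\"\"\n", "    response = smr_client.invoke_endpoint(\n", "        EndpointName=endpoint,\n", "        Body=json.dumps({\"inputs\": prompts, \"parameters\": parameters}),\n", "        ContentType=\"application/json\",\n", "    )\n", "    # The handler in model.py returns {\"outputs\": [...]} as JSON\n", "    return json.loads(response[\"Body\"].read().decode(\"utf8\"))\n", "\n", "\n", "generate_text([\"The best thing about SageMaker is \"], {\"max_new_tokens\": 32, \"do_sample\": False})" ] },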
{ "cell_type": "markdown", "id": "f35fcdda", "metadata": {}, "source": [ "## Conclusion\n", "In this post, we demonstrated how to use a SageMaker large model inference container to host BLOOM-176B. We used DeepSpeed’s model parallel techniques with multiple GPUs on a single SageMaker machine learning instance. For more details about Amazon SageMaker and its large model inference capabilities, refer to the following:\n", "\n", "* Amazon SageMaker now supports deploying large models through configurable volume size and timeout quotas (https://aws.amazon.com/about-aws/whats-new/2022/09/amazon-sagemaker-deploying-large-models-volume-size-timeout-quotas/)\n", "* Real-time inference – Amazon SageMaker (https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html)\n" ] }, { "cell_type": "markdown", "id": "0185a964", "metadata": {}, "source": [ "## Clean Up" ] }, { "cell_type": "code", "execution_count": null, "id": "74057ebe", "metadata": {}, "outputs": [], "source": [ "# - Delete the endpoint\n", "sm_client.delete_endpoint(EndpointName=endpoint_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "e2280fbe", "metadata": {}, "outputs": [], "source": [ "# - Even if the endpoint creation failed, we still want to delete the endpoint config and the model\n", "sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)\n", "sm_client.delete_model(ModelName=model_name)" ] }, { "cell_type": "markdown", "id": "7fb98323", "metadata": {}, "source": [ "#### Optionally delete the model checkpoint from S3. This applies only if you downloaded the model to your own bucket in the optional section above." ] }, { "cell_type": "code", "execution_count": null, "id": "62969f84", "metadata": {}, "outputs": [], "source": [ "!aws s3 rm --recursive s3://{bucket}/{s3_model_prefix}" ] }, { "cell_type": "code", "execution_count": null, "id": "dd618599", "metadata": {}, "outputs": [], "source": [ "s3_client = boto3.client(\"s3\")" ] }, { "cell_type": "code", "execution_count": null, "id": "df2f569b", "metadata": {}, "outputs": [], "source": [ "# Count the remaining objects under the prefix (0 after a successful cleanup)\n", "len(s3_client.list_objects(Bucket=bucket, Prefix=f\"{s3_model_prefix}/\").get(\"Contents\", []))" ] }, { "cell_type": "markdown", "id": "b4177f8b", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/inference|nlp|realtime|llm|bloom_176b|djl_deepspeed_deploy.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", 
"gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, 
"hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", 
"vcpuNum": 96 } ], "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "vscode": { "interpreter": { "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1" } } }, "nbformat": 4, "nbformat_minor": 5 }