{ "cells": [ { "cell_type": "markdown", "id": "c6c03e23", "metadata": {}, "source": [ "# Host StabilityAI's StableLM base alpha 7B on SageMaker with Hugging Face using Large Model Inference container.\n" ] }, { "cell_type": "markdown", "id": "38ba28cd", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "f0d4fc79", "metadata": {}, "source": [ "\n", "In this notebook, we deploy the open source StabilityAI's [stablelm-base-alpha-7b](https://huggingface.co/stabilityai/stablelm-base-alpha-7b) model on ml.g5.xlarge instance using [Large Model Inference DLC](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-dlc.html) on SageMaker. The model is loaded using Hugging Face [Hugging Face Accelerate](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map). \n" ] }, { "cell_type": "markdown", "id": "0bcfa65e", "metadata": {}, "source": [ "## Licence agreement\n", "Please refer to license information [here](https://huggingface.co/stabilityai/stablelm-base-alpha-7b#model-details). Base model checkpoints (StableLM-Base-Alpha) are licensed under the Creative Commons license [(CC BY-SA-4.0)](https://creativecommons.org/licenses/by-sa/4.0/). No changes were made in the base model. All credits to [Stability AI](https://stability.ai/) for the model weights." 
] }, { "cell_type": "markdown", "id": "eb63b6c1", "metadata": {}, "source": [ "#### Import the relevant libraries and configure several global variables using boto3" ] }, { "cell_type": "code", "execution_count": null, "id": "a78aa656-a254-4037-8736-2c3ab0a9ef7e", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install sagemaker boto3 huggingface_hub --upgrade --quiet" ] }, { "cell_type": "code", "execution_count": null, "id": "456e483a", "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "import jinja2\n", "from sagemaker import image_uris\n", "import boto3\n", "import os\n", "import time\n", "import json\n", "from pathlib import Path" ] }, { "cell_type": "code", "execution_count": null, "id": "1867d693", "metadata": {}, "outputs": [], "source": [ "role = sagemaker.get_execution_role() # execution role for the endpoint\n", "sess = sagemaker.session.Session() # sagemaker session for interacting with different AWS APIs\n", "bucket = sess.default_bucket() # bucket to house artifacts\n", "model_bucket = sess.default_bucket() # bucket to house artifacts\n", "s3_code_prefix = \"hf-large-model-djl/code-stablelm-base-alpha-7b\" # folder within bucket where code artifact will go\n", "\n", "s3_model_prefix = \"hf-large-model-djl/model-stablelm-base-alpha-7b\" # folder within bucket where code artifact will go\n", "region = sess._region_name\n", "account_id = sess.account_id()\n", "\n", "s3_client = boto3.client(\"s3\")\n", "sm_client = boto3.client(\"sagemaker\")\n", "smr_client = boto3.client(\"sagemaker-runtime\")\n", "\n", "jinja_env = jinja2.Environment()" ] }, { "cell_type": "markdown", "id": "3b6c6a87", "metadata": {}, "source": [ "## Create SageMaker compatible Model artifact, upload model to S3 and bring your own inference script.\n", "\n", "SageMaker Large Model Inference containers can be used to host models without providing your own inference code. This is extremely useful when there is no custom pre-processing of the input data or postprocessing of the model's predictions. We used that approach in Lab1 to host the models where we leveraged the In-Built containers.\n", "\n", "In this notebook, we demonstrate how to bring your own inference script which leverages Accelerate to shard the model.\n", "\n", "SageMaker needs the model artifacts to be in a Tarball format. 
In this example, we provide the following files - `serving.properties` and `model.py`.\n", "\n", "The tarball has the following structure\n", "\n", "```\n", "code\n", "├── serving.properties\n", "└── model.py\n", "\n", "```\n", "\n", "- `serving.properties` is the configuration file that can be used to configure the model server.\n", "- `model.py` is the file that handles any requests for serving.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "7d037781", "metadata": {}, "outputs": [], "source": [ "!mkdir -p code_stablelm-base-alpha-7b" ] }, { "cell_type": "code", "execution_count": null, "id": "a3a16395-71c4-422f-be91-d82df10a1a9a", "metadata": { "tags": [] }, "outputs": [], "source": [ "from huggingface_hub import snapshot_download\n", "from pathlib import Path\n", "import os\n", "\n", "# - This will download the model into the current directory, wherever the Jupyter notebook is running\n", "local_model_path = Path(\".\")\n", "local_model_path.mkdir(exist_ok=True)\n", "model_name = \"stabilityai/stablelm-base-alpha-7b\"\n", "# Only download PyTorch checkpoint files\n", "allow_patterns = [\"*.json\", \"*.pt\", \"*.bin\", \"*.txt\", \"*.model\"]\n", "\n", "# - Leverage the snapshot library to download the model since the model is stored in the repository using Git LFS\n", "model_download_path = snapshot_download(\n", "    repo_id=model_name,\n", "    cache_dir=local_model_path,\n", "    allow_patterns=allow_patterns,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "339da658-fae7-4495-a703-3fae16e240da", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_artifact = sess.upload_data(path=model_download_path, key_prefix=s3_model_prefix)\n", "print(f\"Model uploaded to --- > {model_artifact}\")\n", "print(f\"We will set option.s3url={model_artifact}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "b6666a8b-9ac3-45a6-a960-33cd2365b022", "metadata": {}, "outputs": [], "source": [ "!rm -rf {model_download_path}" ] }, { "cell_type": "markdown", "id": "5a507efc", "metadata": {}, "source": [ "#### Create serving.properties\n", "This is a configuration file to indicate to DJL Serving which model parallelization and inference optimization libraries you would like to use. Depending on your need, you can set the appropriate configuration.\n", "\n", "Here is a list of the settings that we use in this configuration file -\n", "- `engine`: The engine for DJL to use. In this case, we intend to use Accelerate and hence set it to **Python**.\n", "- `option.entryPoint`: The entrypoint Python file or module. This should align with the engine that is being used.\n", "- `option.s3url`: Set this to the URI of the Amazon S3 bucket that contains the model. When this is set, the container leverages [s5cmd](https://github.com/peak/s5cmd) to download the model from S3. This is extremely fast and useful when downloading large models like this one.\n", "\n", "If you want to download the model from huggingface.co instead, you can set `option.model_id` to the model id of a pretrained model hosted inside a model repository on [huggingface.co](https://huggingface.co/models). The container then uses this model id to download the corresponding model repository from huggingface.co.\n", "- `option.tensor_parallel_degree`: Set to the number of GPU devices over which Accelerate needs to partition the model. This parameter also controls the number of workers per model that will be started when DJL Serving runs.
For example, on an 8 GPU machine where the model is partitioned across all 8 GPUs, DJL Serving starts 1 worker per model to serve the requests.\n", "\n", "For more details on the configuration options and an exhaustive list, you can refer to the documentation - https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-configuration.html.\n", "\n", "\n", "The approach here is to utilize the built-in functionality within Hugging Face Transformers to enable Large Language Model hosting. \n" ] }, { "cell_type": "markdown", "id": "f3a1437b", "metadata": {}, "source": [ "In the below cell, we leverage [Jinja](https://pypi.org/project/Jinja2/) to create a template for serving.properties. Specifically, we parameterize `option.s3url` so that it can be changed based on the pretrained model location." ] }, { "cell_type": "code", "execution_count": null, "id": "786a02ed", "metadata": {}, "outputs": [], "source": [ "%%writefile ./code_stablelm-base-alpha-7b/serving.properties\n", "option.s3url = {{s3url}}\n", "engine = Python\n", "option.tensor_parallel_degree = 1" ] }, { "cell_type": "code", "execution_count": null, "id": "03d9203a", "metadata": {}, "outputs": [], "source": [ "# we plug the model's S3 location into our `serving.properties` file; the S3 URL depends on the region in which this notebook is running\n", "template = jinja_env.from_string(\n", "    Path(\"code_stablelm-base-alpha-7b/serving.properties\").open().read()\n", ")\n", "Path(\"code_stablelm-base-alpha-7b/serving.properties\").open(\"w\").write(\n", "    template.render(s3url=model_artifact)\n", ")\n", "!pygmentize code_stablelm-base-alpha-7b/serving.properties | cat -n" ] }, { "cell_type": "markdown", "id": "ed435a7c", "metadata": {}, "source": [ "#### Create a model.py with custom inference code\n", "\n", "In this script, we load the model and generate predictions using the `transformers` library. Note the use of the following parameters while loading the model -\n", "- `device_map`: Using one of the supported values lets Accelerate handle the `device_map` computation. With `balanced_low_0`, the model is split evenly across all GPUs except the first one. For other supported options, you can refer to [designing a device map](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map). You can also create one yourself.\n", "- `load_in_8bit`: Setting this to `True` quantizes the model weights to int8, thereby greatly reducing the memory footprint of the model from the initial FP32. See this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration) from Hugging Face for additional information.\n", "\n", "Note that the script below loads the full model in float16 onto a single GPU via `device_map={\"\": 0}`; the options above are alternatives you can experiment with.\n", "\n", "The container also makes a warmup call without a payload to the handler."
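, "\n", "\n", "As an illustrative sketch (not executed in this notebook), loading the model with these alternative options might look like this; `load_in_8bit` assumes the `bitsandbytes` package is installed:\n", "\n", "```python\n", "from transformers import AutoModelForCausalLM, AutoTokenizer\n", "\n", "model_location = \"stabilityai/stablelm-base-alpha-7b\"  # or a local model directory\n", "tokenizer = AutoTokenizer.from_pretrained(model_location)\n", "# Let Accelerate spread layers across all GPUs except GPU 0, quantizing weights to int8\n", "model = AutoModelForCausalLM.from_pretrained(\n", "    model_location,\n", "    device_map=\"balanced_low_0\",\n", "    load_in_8bit=True,  # omit to keep the float16/float32 weights\n", ")\n", "model.eval()\n", "```"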
] }, { "cell_type": "code", "execution_count": null, "id": "a25a6aa6", "metadata": {}, "outputs": [], "source": [ "%%writefile ./code_stablelm-base-alpha-7b/model.py\n", "from djl_python import Input, Output\n", "import torch\n", "import logging\n", "import math\n", "import os\n", "from transformers import (\n", " AutoModelForCausalLM,\n", " AutoTokenizer,\n", " pipeline,\n", " StoppingCriteria,\n", " StoppingCriteriaList,\n", ")\n", "\n", "\n", "class StopOnTokens(StoppingCriteria):\n", " def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n", " stop_ids = [50278, 50279, 50277, 1, 0]\n", " for stop_id in stop_ids:\n", " if input_ids[0][-1] == stop_id:\n", " return True\n", " return False\n", "\n", "\n", "def load_model(properties):\n", " tensor_parallel = properties[\"tensor_parallel_degree\"]\n", " model_location = properties[\"model_dir\"]\n", " if \"model_id\" in properties:\n", " model_location = properties[\"model_id\"]\n", " logging.info(f\"Loading model in {model_location}\")\n", "\n", " tokenizer = AutoTokenizer.from_pretrained(model_location)\n", "\n", " model = AutoModelForCausalLM.from_pretrained(\n", " model_location, torch_dtype=torch.float16, device_map={\"\": 0}\n", " ).cuda()\n", " model.requires_grad_(False)\n", " model.eval()\n", "\n", " return model, tokenizer\n", "\n", "\n", "model = None\n", "tokenizer = None\n", "generator = None\n", "\n", "\n", "def run_inference(model, tokenizer, data, params):\n", " generate_kwargs = params\n", " tokenizer.pad_token = tokenizer.eos_token\n", " input_tokens = tokenizer.batch_encode_plus(data, return_tensors=\"pt\", padding=True)\n", " for t in input_tokens:\n", " if torch.is_tensor(input_tokens[t]):\n", " input_tokens[t] = input_tokens[t].to(torch.cuda.current_device())\n", " stop = StopOnTokens()\n", " outputs = model.generate(\n", " **input_tokens, **generate_kwargs, stopping_criteria=StoppingCriteriaList([stop])\n", " )\n", " return tokenizer.batch_decode(outputs, skip_special_tokens=True)\n", "\n", "\n", "def handle(inputs: Input):\n", " global model, tokenizer\n", " if not model:\n", " model, tokenizer = load_model(inputs.get_properties())\n", "\n", " if inputs.is_empty():\n", " return None\n", " data = inputs.get_as_json()\n", "\n", " input_sentences = data[\"inputs\"]\n", " params = data[\"parameters\"]\n", "\n", " outputs = run_inference(model, tokenizer, input_sentences, params)\n", " result = {\"outputs\": outputs}\n", " return Output().add_as_json(result)" ] }, { "cell_type": "markdown", "id": "a5516aca", "metadata": {}, "source": [ "**Image URI for the DJL container is being used here**" ] }, { "cell_type": "code", "execution_count": null, "id": "36729749", "metadata": {}, "outputs": [], "source": [ "# inference_image_uri = f\"{account_id}.dkr.ecr.{region}.amazonaws.com/djl-ds:latest\"\n", "inference_image_uri = (\n", " f\"763104351884.dkr.ecr.{region}.amazonaws.com/djl-inference:0.21.0-fastertransformer5.3.0-cu117\"\n", ")\n", "print(f\"Image going to be used is ---- > {inference_image_uri}\")" ] }, { "cell_type": "markdown", "id": "c2ff7d02", "metadata": {}, "source": [ "**Create the Tarball and then upload to S3 location**" ] }, { "cell_type": "code", "execution_count": null, "id": "313ef1ca", "metadata": {}, "outputs": [], "source": [ "!rm -f model.tar.gz\n", "!tar czvf model.tar.gz -C code_stablelm-base-alpha-7b ." 
] }, { "cell_type": "code", "execution_count": null, "id": "c9a634a8", "metadata": {}, "outputs": [], "source": [ "s3_code_artifact = sess.upload_data(\"model.tar.gz\", bucket, s3_code_prefix)\n", "print(f\"S3 Code or Model tar ball uploaded to --- > {s3_code_artifact}\")" ] }, { "cell_type": "markdown", "id": "d86bc297", "metadata": {}, "source": [ "### To create the end point the steps are:\n", "\n", "1. Create the Model using the Image container and the Model Tarball uploaded earlier\n", "2. Create the endpoint config using the following key parameters\n", "\n", " a) Instance Type is ml.g5.12xlarge\n", " \n", " b) ContainerStartupHealthCheckTimeoutInSeconds is 2400 to ensure health check starts after the model is ready \n", "3. Create the end point using the endpoint config created " ] }, { "cell_type": "markdown", "id": "736e245f", "metadata": {}, "source": [ "#### Create the Model\n", "Use the image URI for the DJL container and the s3 location to which the tarball was uploaded.\n", "\n", "The container downloads the model into the `/tmp` space on the container because SageMaker maps the `/tmp` to the Amazon Elastic Block Store (Amazon EBS) volume that is mounted when we specify the endpoint creation parameter VolumeSizeInGB. It leverages `s5cmd`(https://github.com/peak/s5cmd) which offers a very fast download speed and hence extremely useful when downloading large models.\n", "\n", "For instances like p4dn, which come pre-built with the volume instance, we can continue to leverage the `/tmp` on the container. The size of this mount is large enough to hold the model.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "69fb1813", "metadata": {}, "outputs": [], "source": [ "from sagemaker.utils import name_from_base\n", "\n", "model_name = name_from_base(f\"stablelm-base-alpha-7b\")\n", "print(model_name)\n", "\n", "create_model_response = sm_client.create_model(\n", " ModelName=model_name,\n", " ExecutionRoleArn=role,\n", " PrimaryContainer={\"Image\": inference_image_uri, \"ModelDataUrl\": s3_code_artifact},\n", ")\n", "model_arn = create_model_response[\"ModelArn\"]\n", "\n", "print(f\"Created Model: {model_arn}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "fd78601e", "metadata": {}, "outputs": [], "source": [ "endpoint_config_name = f\"{model_name}-config\"\n", "endpoint_name = f\"{model_name}-endpoint\"\n", "\n", "endpoint_config_response = sm_client.create_endpoint_config(\n", " EndpointConfigName=endpoint_config_name,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": \"variant1\",\n", " \"ModelName\": model_name,\n", " \"InstanceType\": \"ml.g5.xlarge\",\n", " \"InitialInstanceCount\": 1,\n", " # \"ModelDataDownloadTimeoutInSeconds\": 2400,\n", " \"ContainerStartupHealthCheckTimeoutInSeconds\": 600,\n", " },\n", " ],\n", ")\n", "endpoint_config_response" ] }, { "cell_type": "code", "execution_count": null, "id": "4bf1662a", "metadata": {}, "outputs": [], "source": [ "create_endpoint_response = sm_client.create_endpoint(\n", " EndpointName=f\"{endpoint_name}\", EndpointConfigName=endpoint_config_name\n", ")\n", "print(f\"Created Endpoint: {create_endpoint_response['EndpointArn']}\")" ] }, { "cell_type": "markdown", "id": "e76ed7c9", "metadata": {}, "source": [ "### This step can take ~ 10 min or longer so please be patient" ] }, { "cell_type": "code", "execution_count": null, "id": "2a346828", "metadata": {}, "outputs": [], "source": [ "import time\n", "\n", "resp = sm_client.describe_endpoint(EndpointName=endpoint_name)\n", "status = 
resp[\"EndpointStatus\"]\n", "print(\"Status: \" + status)\n", "\n", "while status == \"Creating\":\n", " time.sleep(60)\n", " resp = sm_client.describe_endpoint(EndpointName=endpoint_name)\n", " status = resp[\"EndpointStatus\"]\n", " print(\"Status: \" + status)\n", "\n", "print(\"Arn: \" + resp[\"EndpointArn\"])\n", "print(\"Status: \" + status)" ] }, { "cell_type": "markdown", "id": "6c96dd6a", "metadata": {}, "source": [ "#### While you wait for the endpoint to be created, you can read more about:\n", "- [Deep Learning containers for large model inference](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-dlc.html)\n", "- [Quantization in HuggingFace Accelerate](https://huggingface.co/blog/hf-bitsandbytes-integration)\n", "- [Handling big models for inference using Accelerate](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map)" ] }, { "cell_type": "markdown", "id": "80638bee", "metadata": {}, "source": [ "#### Leverage Boto3 to invoke the endpoint. \n", "\n", "This is a generative model so we pass in a Text as a prompt and Model will complete the sentence and return the results.\n", "\n", "You can pass a batch of prompts as input to the model. This done by setting `inputs` to the list of prompts. The model then returns a result for each prompt. The text generation can be configured using appropriate parameters. These `parameters` need to be passed to the endpoint as a dictionary of `kwargs`. Refer this documentation - https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig for more details.\n", "\n", "The below code sample illustrates the invocation of the endpoint using a batch of prompts and also sets some parameters.\n" ] }, { "cell_type": "markdown", "id": "0d187673-baa2-4ac8-8be7-6c264230224f", "metadata": {}, "source": [ "## Generating text using different decoding approaches\n", "We will use 5 different decoding approaches as described [here](https://huggingface.co/blog/how-to-generate) and analyze the model output quality. 
" ] }, { "cell_type": "markdown", "id": "e105a76d-fd5c-4694-9f34-4abc5e7c0a19", "metadata": {}, "source": [ "### Top_p sampling" ] }, { "cell_type": "code", "execution_count": null, "id": "d799ff95", "metadata": {}, "outputs": [], "source": [ "%%time\n", "prompts = [\"Hi, How are you?\"]\n", "response_model = smr_client.invoke_endpoint(\n", " EndpointName=endpoint_name,\n", " Body=json.dumps(\n", " {\n", " \"inputs\": prompts,\n", " \"parameters\": {\n", " \"early_stopping\": True,\n", " \"no_repeat_ngram_size\": 4,\n", " \"max_new_tokens\": 200,\n", " \"do_sample\": True,\n", " \"temperature\": 0.1,\n", " \"top_p\": 0.95,\n", " },\n", " }\n", " ),\n", " ContentType=\"application/json\",\n", ")\n", "\n", "response_model[\"Body\"].read().decode(\"utf8\")" ] }, { "cell_type": "markdown", "id": "e8e0c0b9-cbd8-4207-beff-d26099b93977", "metadata": {}, "source": [ "### Beam search" ] }, { "cell_type": "code", "execution_count": null, "id": "a299b42f-98cb-4455-bb05-3ab52f56f6d1", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "prompts = [\"Hi, How are you?\"]\n", "response_model = smr_client.invoke_endpoint(\n", " EndpointName=endpoint_name,\n", " Body=json.dumps(\n", " {\n", " \"inputs\": prompts,\n", " \"parameters\": {\n", " \"early_stopping\": True,\n", " \"no_repeat_ngram_size\": 4,\n", " \"max_new_tokens\": 1024,\n", " \"num_beams\": 2,\n", " },\n", " }\n", " ),\n", " ContentType=\"application/json\",\n", ")\n", "\n", "response_model[\"Body\"].read().decode(\"utf8\")" ] }, { "cell_type": "markdown", "id": "fb8a343f-49f7-4db9-93c0-4893145c9fb0", "metadata": {}, "source": [ "### Soft-max sampling " ] }, { "cell_type": "code", "execution_count": null, "id": "be576702-a340-41d0-a101-c209c9c655bd", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "prompts = [\"Hi, How are you?\"]\n", "response_model = smr_client.invoke_endpoint(\n", " EndpointName=endpoint_name,\n", " Body=json.dumps(\n", " {\n", " \"inputs\": prompts,\n", " \"parameters\": {\n", " \"top_k\": 0,\n", " \"temperature\": 0.6,\n", " \"num_return_sequences\": 1,\n", " \"do_sample\": True,\n", " },\n", " }\n", " ),\n", " ContentType=\"application/json\",\n", ")\n", "\n", "response_model[\"Body\"].read().decode(\"utf8\")" ] }, { "cell_type": "markdown", "id": "7c52f0e4-bf34-40ff-a2f8-2e6eb3983050", "metadata": {}, "source": [ "### Top-k sampling" ] }, { "cell_type": "code", "execution_count": null, "id": "0d265dc1-6f11-40d9-a6bb-5a8c357afd4c", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "prompts = [\"Hi, How are you?\"]\n", "response_model = smr_client.invoke_endpoint(\n", " EndpointName=endpoint_name,\n", " Body=json.dumps(\n", " {\"inputs\": prompts, \"parameters\": {\"max_new_tokens\": 200, \"top_k\": 20, \"do_sample\": True}}\n", " ),\n", " ContentType=\"application/json\",\n", ")\n", "\n", "response_model[\"Body\"].read().decode(\"utf8\")" ] }, { "cell_type": "markdown", "id": "364e7f36-c405-4f2e-9988-4493c54db899", "metadata": {}, "source": [ "### Top_p sampling" ] }, { "cell_type": "code", "execution_count": null, "id": "b547302c-f9c6-4488-b189-c6721e7f1ce8", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "prompts = [\"Hi, How are you?\"]\n", "response_model = smr_client.invoke_endpoint(\n", " EndpointName=endpoint_name,\n", " Body=json.dumps(\n", " {\n", " \"inputs\": prompts,\n", " \"parameters\": {\"max_new_tokens\": 200, \"top_k\": 10, \"top_p\": 0.95, \"do_sample\": True},\n", " }\n", " ),\n", " ContentType=\"application/json\",\n", ")\n", 
"\n", "response_model[\"Body\"].read().decode(\"utf8\")" ] }, { "cell_type": "markdown", "id": "8c5a7651", "metadata": {}, "source": [ "## Conclusion\n", "In this notebook, we demonstrated how to use SageMaker large model inference containers to host StabilityAI's stablelm-base-alpha-7b. We used Hugging Face library to host model on GPU-based machine learning instance on SageMaker. We then analyzed different decoding approaches and engineered the inference parameters to get better model output quality. For more details about Amazon SageMaker and its large model inference capabilities, refer to the following:\n", "\n", "* Model parallelism and large model inference on Sagemaker (https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-inference.html)\n", "* Amazon SageMaker now supports deploying large models through configurable volume size and timeout quotas (https://aws.amazon.com/about-aws/whats-new/2022/09/amazon-sagemaker-deploying-large-models-volume-size-timeout-quotas/)\n", "* Real-time inference – Amazon SageMake (https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html)\n", "\n" ] }, { "cell_type": "markdown", "id": "03a86317", "metadata": {}, "source": [ "## Clean Up" ] }, { "cell_type": "code", "execution_count": null, "id": "2defcfef", "metadata": {}, "outputs": [], "source": [ "# - Delete the end point\n", "sm_client.delete_endpoint(EndpointName=endpoint_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "28278159", "metadata": {}, "outputs": [], "source": [ "# - In case the end point failed we still want to delete the model\n", "sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)\n", "sm_client.delete_model(ModelName=model_name)" ] }, { "cell_type": "markdown", "id": "66bd8cc6", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/inference|generativeai|llm-workshop|lab7-stablelm-base-alpha-7b|stablelm-base-alpha-7b-djl-sagemaker.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": 
false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": 
"Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 
1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "vscode": { "interpreter": { "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1" } } }, "nbformat": 4, "nbformat_minor": 5 }