{ "cells": [ { "cell_type": "markdown", "id": "bc5ab391", "metadata": {}, "source": [ "# Serve large models on SageMaker with model parallel inference and DJLServing" ] }, { "cell_type": "markdown", "id": "98b43ca5", "metadata": {}, "source": [ "In this notebook, we explore how to host a large language model on SageMaker using model parallelism from DeepSpeed and DJLServing. Please note this is a direct copy from the original content on the [SageMaker examples here.](https://github.com/aws/amazon-sagemaker-examples/blob/main/advanced_functionality/pytorch_deploy_large_GPT_model/GPT-J-6B-model-parallel-inference-DJL.ipynb)\n", "\n", "Language models have recently exploded in both size and popularity. In 2018, BERT-large entered the scene and, with its 340M parameters and novel transformer architecture, set the standard on NLP task accuracy. Within just a few years, state-of-the-art NLP model size has grown by more than 500x with models such as OpenAI’s 175 billion parameter GPT-3 and similarly sized open source Bloom 176B raising the bar on NLP accuracy. This increase in the number of parameters is driven by the simple and empirically-demonstrated positive relationship between model size and accuracy: more is better. With easy access from models zoos such as Hugging Face and improved accuracy in NLP tasks such as classification and text generation, practitioners are increasingly reaching for these large models. However, deploying them can be a challenge because of their size.\n", "\n", "Model parallelism can help deploy large models that would normally be too large for a single GPU. With model parallelism, we partition and distribute a model across multiple GPUs. Each GPU holds a different part of the model, resolving the memory capacity issue for the largest deep learning models with billions of parameters. This notebook uses tensor parallelism techniques which allow GPUs to work simultaneously on the same layer of a model and achieve low latency inference relative to a pipeline parallel solution.\n", "\n", "In this notebook, we deploy a PyTorch GPT-J model from Hugging Face with 6 billion parameters across two GPUs on an Amazon SageMaker ml.g5.48xlarge instance. DeepSpeed is used for tensor parallelism inference while DJLServing handles inference requests and the distributed workers." 
] }, { "cell_type": "code", "execution_count": null, "id": "d6ed354b", "metadata": {}, "outputs": [], "source": [ "!pip install boto3==1.24.68" ] }, { "cell_type": "markdown", "id": "1ac32e96", "metadata": {}, "source": [ "## Step 1: Create `model.py` and `serving.properties`" ] }, { "cell_type": "code", "execution_count": null, "id": "6f4864eb", "metadata": {}, "outputs": [], "source": [ "%%writefile model.py\n", "\n", "from djl_python import Input, Output\n", "import os\n", "import deepspeed\n", "import torch\n", "from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer\n", "\n", "predictor = None\n", "\n", "\n", "def get_model():\n", "    model_name = \"EleutherAI/gpt-j-6B\"\n", "    tensor_parallel = int(os.getenv(\"TENSOR_PARALLEL_DEGREE\", \"2\"))\n", "    local_rank = int(os.getenv(\"LOCAL_RANK\", \"0\"))\n", "    model = AutoModelForCausalLM.from_pretrained(\n", "        model_name, revision=\"float32\", torch_dtype=torch.float32\n", "    )\n", "    tokenizer = AutoTokenizer.from_pretrained(model_name)\n", "\n", "    model = deepspeed.init_inference(\n", "        model,\n", "        mp_size=tensor_parallel,\n", "        dtype=model.dtype,\n", "        replace_method=\"auto\",\n", "        replace_with_kernel_inject=True,\n", "    )\n", "    generator = pipeline(\n", "        task=\"text-generation\", model=model, tokenizer=tokenizer, device=local_rank\n", "    )\n", "    return generator\n", "\n", "\n", "def handle(inputs: Input) -> None:\n", "    global predictor\n", "    if not predictor:\n", "        predictor = get_model()\n", "\n", "    if inputs.is_empty():\n", "        # The model server makes an empty call to warm up the model on startup\n", "        return None\n", "\n", "    data = inputs.get_as_string()\n", "    result = predictor(data, do_sample=True, min_length=200, max_new_tokens=256)\n", "    return Output().add(result)" ] }, { "cell_type": "markdown", "id": "f02b6929", "metadata": {}, "source": [ "### Set up serving.properties\n", "\n", "You need to specify the engine, `Rubikon`, as shown below. If you would like to control how many worker groups are created, you can do so by adding these lines to the file:\n", "\n", "```\n", "gpu.minWorkers=1\n", "gpu.maxWorkers=1\n", "```\n", "By default, as many worker groups as possible are created based on `gpu_numbers/tensor_parallel_degree`." ] }, { "cell_type": "code", "execution_count": null, "id": "2c5ea96a", "metadata": {}, "outputs": [], "source": [ "%%writefile serving.properties\n", "engine = Rubikon" ] }, { "cell_type": "markdown", "id": "f44488e6", "metadata": {}, "source": [ "The code below packages the model artifacts into a SageMaker model file (`gpt-j.tar.gz`) and uploads it to S3.
" ] }, { "cell_type": "code", "execution_count": null, "id": "5a536439", "metadata": {}, "outputs": [], "source": [ "import sagemaker, boto3\n", "\n", "session = sagemaker.Session()\n", "account = session.account_id()\n", "region = session.boto_region_name\n", "img = \"djl_deepspeed\"\n", "fullname = account + \".dkr.ecr.\" + region + \".amazonaws.com/\" + img + \":latest\"\n", "bucket = session.default_bucket()\n", "path = \"s3://\" + bucket + \"/DEMO-djl-big-model\"" ] }, { "cell_type": "code", "execution_count": null, "id": "9965dd7c", "metadata": {}, "outputs": [], "source": [ "%%sh\n", "if [ -d gpt-j ]; then\n", " rm -d -r gpt-j\n", "fi #always start fresh\n", "\n", "mkdir -p gpt-j\n", "mv model.py gpt-j\n", "mv serving.properties gpt-j\n", "tar -czvf gpt-j.tar.gz gpt-j/\n", "#aws s3 cp gpt-j.tar.gz {path}" ] }, { "cell_type": "code", "execution_count": null, "id": "db47f969", "metadata": {}, "outputs": [], "source": [ "model_s3_url = sagemaker.s3.S3Uploader.upload(\n", " \"gpt-j.tar.gz\", path, kms_key=None, sagemaker_session=session\n", ")" ] }, { "cell_type": "markdown", "id": "c507e3ef", "metadata": {}, "source": [ "## Step 2: Create SageMaker endpoint" ] }, { "cell_type": "markdown", "id": "32589338", "metadata": {}, "source": [ "Now we create our [SageMaker model](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model). Make sure your execution role has access to your model artifacts and ECR image. Please check out our SageMaker Roles [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for more details. " ] }, { "cell_type": "code", "execution_count": null, "id": "446747bc-5533-45d0-8536-6b9fb95fe909", "metadata": {}, "outputs": [], "source": [ "# hard code point to a custom image I'm hosting for this workshop\n", "image = '911195073761.dkr.ecr.us-east-1.amazonaws.com/djl_deepspeed:0.19'" ] }, { "cell_type": "code", "execution_count": null, "id": "026d27d2", "metadata": {}, "outputs": [], "source": [ "from datetime import datetime\n", "\n", "sm_client = boto3.client(\"sagemaker\")\n", "\n", "time_stamp = datetime.now().strftime(\"%Y-%m-%d-%H-%M-%S\")\n", "model_name = \"gpt-j-\" + time_stamp\n", "\n", "create_model_response = sm_client.create_model(\n", " ModelName=model_name,\n", " ExecutionRoleArn=session.get_caller_identity_arn(),\n", " PrimaryContainer={\n", " \"Image\": image,\n", " \"ModelDataUrl\": model_s3_url,\n", " \"Environment\": {\"TENSOR_PARALLEL_DEGREE\": \"2\"},\n", " },\n", ")" ] }, { "cell_type": "markdown", "id": "22d2fc2b", "metadata": {}, "source": [ "Now we create an endpoint configuration that SageMaker hosting services uses to deploy models. Note that we configured `ModelDataDownloadTimeoutInSeconds` and `ContainerStartupHealthCheckTimeoutInSeconds` to accommodate the large size of our model. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "84e25dd4", "metadata": {}, "outputs": [], "source": [ "initial_instance_count = 1\n", "instance_type = \"ml.g5.48xlarge\"\n", "variant_name = \"AllTraffic\"\n", "endpoint_config_name = \"t-j-config-\" + time_stamp\n", "\n", "production_variants = [\n", " {\n", " \"VariantName\": variant_name,\n", " \"ModelName\": model_name,\n", " \"InitialInstanceCount\": initial_instance_count,\n", " \"InstanceType\": instance_type,\n", " \"ModelDataDownloadTimeoutInSeconds\": 1800,\n", " \"ContainerStartupHealthCheckTimeoutInSeconds\": 3600,\n", " }\n", "]\n", "\n", "endpoint_config = {\n", " \"EndpointConfigName\": endpoint_config_name,\n", " \"ProductionVariants\": production_variants,\n", "}\n", "\n", "ep_conf_res = sm_client.create_endpoint_config(**endpoint_config)" ] }, { "cell_type": "markdown", "id": "e4b3bc26", "metadata": {}, "source": [ "We are ready to create an endpoint using the model and the endpoint configuration created from above steps. " ] }, { "cell_type": "code", "execution_count": null, "id": "962a1aef", "metadata": {}, "outputs": [], "source": [ "endpoint_name = \"gpt-j\" + time_stamp\n", "ep_res = sm_client.create_endpoint(\n", " EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name\n", ")" ] }, { "cell_type": "markdown", "id": "2dc2a85a", "metadata": {}, "source": [ "The creation of the SageMaker endpoint might take a while. After the endpoint is created, you can test it out using the following code. " ] }, { "cell_type": "code", "execution_count": null, "id": "5ed7a325", "metadata": {}, "outputs": [], "source": [ "import json\n", "\n", "client = boto3.client(\"sagemaker-runtime\")\n", "\n", "content_type = \"text/plain\" # The MIME type of the input data in the request body.\n", "payload = \"Amazon.com is the best\" # Payload for inference.\n", "# response = client.invoke_endpoint(\n", " EndpointName=endpoint_name, ContentType=content_type, Body=payload\n", ")\n", "print(response[\"Body\"].read())" ] }, { "cell_type": "markdown", "id": "2eff050b", "metadata": {}, "source": [ "## Conclusion\n", "\n", "In this notebook, you use tensor parallelism to partition a large language model across multiple GPUs for low latency inference. With tensor parallelism, multiple GPUs work on the same model layer at once allowing for faster inference latency when a low batch size is used. Here, we use open source DeepSpeed as the model parallel library to partition the model and open source Deep Java Library Serving as the model serving solution.\n", "\n", "As a next step, you can experiment with larger models from Hugging Face such as GPT-NeoX. You can also adjust the tensor parallel degree to see the impact to latency with models of different sizes." ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 5 }