{ "cells": [ { "cell_type": "markdown", "id": "a81af958-7c78-444c-b772-b687f9ed7497", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.\n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "0e48316a", "metadata": {}, "source": [ "\n", "# Deploy OpenChatKit Model with high performance on SageMaker \n", "\n", "In this notebook, we explore how to host a large language model on SageMaker using the latest container that packages some of the most popular open source libraries for model parallel inference like DeepSpeed and Hugging Face Accelerate. We use DJLServing as the model serving solution in this example. DJLServing is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming language agnostic. To learn more about DJL and DJLServing, you can refer to our recent blog post (https://aws.amazon.com/blogs/machine-learning/deploy-bloom-176b-and-opt-30b-on-amazon-sagemaker-with-large-model-inference-deep-learning-containers-and-deepspeed/).\n", "\n", "Language models have recently exploded in both size and popularity. In 2018, BERT-large entered the scene and, with its 340M parameters and novel transformer architecture, set the standard on NLP task accuracy. Within just a few years, state-of-the-art NLP model size has grown by more than 500x with models such as OpenAI’s 175 billion parameter GPT-3 and similarly sized open source Bloom 176B raising the bar on NLP accuracy. This increase in the number of parameters is driven by the simple and empirically-demonstrated positive relationship between model size and accuracy: more is better. With easy access from models zoos such as Hugging Face and improved accuracy in NLP tasks such as classification and text generation, practitioners are increasingly reaching for these large models. However, deploying them can be a challenge because of their size.\n", "\n", "Model parallelism can help deploy large models that would normally be too large for a single GPU. With model parallelism, we partition and distribute a model across multiple GPUs. Each GPU holds a different part of the model, resolving the memory capacity issue for the largest deep learning models with billions of parameters. This notebook uses tensor parallelism techniques which allow GPUs to work simultaneously on the same layer of a model and achieve low latency inference relative to a pipeline parallel solution.\n", "\n", "SageMaker has rolled out DeepSpeed and Accelerate container which now provides users with the ability to leverage the managed serving capabilities and help to provide the un-differentiated heavy lifting.\n", "\n", "In this notebook, we deploy the open source [GPT-NeoXT-Chat-Base-20B](https://huggingface.co/togethercomputer/GPT-NeoXT-Chat-Base-20B?text=As+part+of+OpenChatKit+%28codebase+available+here%29%2C+GPT-NeoXT-Chat-Base-20B+is+a+20B+parameter+language+model%2C+fine-tuned+from+EleutherAI%E2%80%99s+GPT-NeoX+with+over+40+million+instructions+on+100%25+carbon+negative+compute) (OpenChatKit) model across GPUs on a ml.g5.12xlarge instance. 
The open source [GPT-JT-Moderation-6b](https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B) model is deployed across GPUs in the same instance\n", "\n", "OpenChatKit provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications. The kit includes an instruction-tuned 20 billion parameter language model, a 6 billion parameter moderation model, and an extensible retrieval system for including up-to-date responses from custom repositories. It was trained on the OIG-43M training dataset, which was a collaboration between Together, LAION, and Ontocord.ai. Much more than a model release, this is the beginning of an open source project. We are releasing a set of tools and processes for ongoing improvement with community contributions. You can read more information on OpenChatKit [here](https://github.com/togethercomputer/OpenChatKit)\n", "\n", "In this example, we demonstrate how to use SageMaker large model inference container to host OpenChatKit. We used HuggingFace Accelerate's model parallel techniques with multiple GPUs on a single SageMaker machine learning instance. OpenChatKit also includes an extensible retrieval system. With the retrieval system the chatbot is able to incorporate regularly updated or custom content, such as knowledge from Wikipedia, news feeds, or sports scores in response. The additional component of OpenChatKit is a 6 billion parameter moderation model fine-tuned from GPT-JT. In chat applications, the moderation model runs in tandem with the main chat model, checking the user utterance for any inappropriate content. Based on the moderation model’s assessment, the chatbot can limit the input to moderated subjects. For more narrow tasks the moderation model can be used to detect out-of-domain questions and override when the question is not on topic Please refer to [this](https://www.together.xyz/blog/openchatkit) blog post to extend this model with retrieval system.\n", "\n", "Invocations to SageMaker endpoints are stateless, so a model cannot automatically refer to past messages in computing new outputs. As a result, a DynamoDB table is created to store conversations based on a unique identifier generated by the endpoint. When this identifier is passed in with the invocation request, the model concatenates the new prompt with the previous conversation before performing inference.\n", "\n", "As a result, the IAM role used for the endpoint needs permissions for the following actions:\n", "- `dynamodb:CreateTable`\n", "- `dynamodb:DescribeTable`\n", "- `dynamodb:PutItem`\n", "- `dynamodb:GetItem`\n", "\n", "\n", "HuggingFace Accelerate is used for tensor parallelism inference while DJLServing handles inference requests and the distributed workers. For further reading on HuggingFace you can refer to https://huggingface.co/docs" ] }, { "cell_type": "markdown", "id": "ff136863", "metadata": {}, "source": [ "## Licence agreement\n", " - View license information https://github.com/togethercomputer/OpenChatKit/blob/main/LICENSE before using the model.\n", " - This notebook is a sample notebook and not intended for production use. Please refer to the licence at https://github.com/aws/mit-0. \n", " - Faiss is available from https://github.com/facebookresearch/faiss. 
View license information at https://github.com/facebookresearch/faiss/blob/main/LICENSE\n", " \n", " \n" ] }, { "cell_type": "code", "execution_count": null, "id": "76fd81e4-d17a-44c1-9659-1b86cd6165ac", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install boto3 huggingface_hub sagemaker-studio-image-build --upgrade --quiet" ] }, { "cell_type": "code", "execution_count": null, "id": "9ea19605", "metadata": { "tags": [] }, "outputs": [], "source": [ "import sagemaker\n", "import jinja2\n", "from sagemaker import image_uris\n", "import boto3\n", "import os\n", "import time\n", "import json\n", "from pathlib import Path" ] }, { "cell_type": "code", "execution_count": null, "id": "361b75a6", "metadata": { "tags": [] }, "outputs": [], "source": [ "role = sagemaker.get_execution_role() # execution role for the endpoint\n", "sess = sagemaker.session.Session() # sagemaker session for interacting with different AWS APIs\n", "bucket = sess.default_bucket() # bucket to house artifacts\n", "\n", "model_bucket = sess.default_bucket() # bucket to house artifacts\n", "s3_code_prefix = \"hf-large-model-djl-/code_gpt_neoxt-chatbase\" # folder within bucket where code artifact will go\n", "s3_model_prefix = \"hf-large-model-djl-/model_gpt_neoxt-chatbase\" # folder within bucket where code artifact will go\n", "region = sess._region_name\n", "account_id = sess.account_id()\n", "\n", "s3_client = boto3.client(\"s3\")\n", "sm_client = boto3.client(\"sagemaker\")\n", "smr_client = boto3.client(\"sagemaker-runtime\")\n", "\n", "jinja_env = jinja2.Environment()\n", "\n", "# define a variable to contain the s3url of the location that has the model\n", "pretrained_model_location = f\"s3://{model_bucket}/{s3_model_prefix}/\"\n", "print(f\"Pretrained model will be uploaded to ---- > {pretrained_model_location}\")" ] }, { "cell_type": "markdown", "id": "5b6318d1-2194-4b49-9ab4-d310aa7780c3", "metadata": {}, "source": [ "### Download the models from Hugging Face and upload the model artifacts on Amazon S3" ] }, { "cell_type": "code", "execution_count": null, "id": "148ac115-641c-4c50-b57d-b6daec5b15d9", "metadata": { "tags": [] }, "outputs": [], "source": [ "from huggingface_hub import snapshot_download\n", "from pathlib import Path\n", "import os\n", "\n", "# - This will download the model into the current directory where ever the jupyter notebook is running\n", "local_model_path = Path(\"./openchatkit\")\n", "local_model_path.mkdir(exist_ok=True)\n", "model_name = \"togethercomputer/GPT-NeoXT-Chat-Base-20B\"\n", "# Only download pytorch checkpoint files\n", "allow_patterns = [\"*.json\", \"*.pt\", \"*.bin\", \"*.txt\", \"*.model\"]\n", "\n", "# - Leverage the snapshot library to donload the model since the model is stored in repository using LFS\n", "chat_model_download_path = snapshot_download(\n", " repo_id=model_name,\n", " cache_dir=local_model_path,\n", " allow_patterns=allow_patterns,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "11c92d33-e16b-4432-b4e9-31b0082595ec", "metadata": { "tags": [] }, "outputs": [], "source": [ "model_artifact = sess.upload_data(path=chat_model_download_path, key_prefix=s3_model_prefix)\n", "print(f\"Model uploaded to --- > {model_artifact}\")\n", "print(f\"We will set option.s3url={model_artifact}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "e7fa3e96-118d-4bf8-9378-1d75564d4826", "metadata": { "tags": [] }, "outputs": [], "source": [ "!rm -rf openchatkit/" ] }, { "cell_type": "markdown", "id": "4c6d6f88", "metadata": {}, 
"source": [ "## Create SageMaker compatible Model artifact, upload Model to S3 and bring your own inference script.\n", "\n", "SageMaker Large Model Inference containers can be used to host models without providing your own inference code. This is extremely useful when there is no custom pre-processing of the input data or post-processing of the model's predictions.\n", "\n", "However, in this notebook, we demonstrate how to deploy a model with custom inference code.\n", "\n", "SageMaker needs the model artifacts to be in a Tarball format. In this example, we provide the following files - `serving.properties` and `model.py`.\n", "\n", "The tarball is in the following format\n", "\n", "```\n", "code\n", "├──── \n", "│ └── serving.properties\n", "│ └── model.py\n", " \n", "\n", "```\n", "\n", "- `serving.properties` is the configuration file that can be used to configure the model server.\n", "- `model.py` is the script handles any requests for serving.\n" ] }, { "cell_type": "markdown", "id": "b200bd54", "metadata": {}, "source": [ "#### Create serving.properties \n", "\n", "This is a configuration file to indicate to DJL Serving which model parallelization and inference optimization libraries you would like to use. Depending on your need, you can set the appropriate configuration.\n", "\n", "Here is a list of settings that we use in this configuration file -\n", "- `engine`: The engine for DJL to use. In this case, it is **Python**.\n", "- `option.entryPoint`: The entry point python file or module. This should align with the engine that is being used. \n", "- `option.s3url`: Set this to the URI of the Amazon S3 bucket that contains the model. \n", "\n", "If you want to download the model from huggingface.co, you can set `option.modelid`. The model ID of a pretrained model hosted inside a model repository on huggingface.co (https://huggingface.co/models). The container uses this model ID to download the corresponding model repository on huggingface.co. \n", "- `option.tensor_parallel_degree`: Set to the number of GPU devices over which HuggingFace Accelerate needs to partition the model. This parameter also controls the number of workers per model which will be started up when DJL serving runs. As an example if we have an 8 GPU machine, and we are creating 8 partitions then we will have 1 worker per model to serve the requests. \n", "\n", "For more details on the configuration options and an exhaustive list, you can refer the documentation - https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-configuration.html.\n", "\n", "HuggingFace Accelerate can automatically handle the device map computation by setting the `device_map` option to a supported option, or a device map can be provided. 
By using the `auto` device map, HuggingFace evenly splits the model across all available GPUs by maximising the available GPU RAM." ] },
 { "cell_type": "code", "execution_count": null, "id": "653da86a-6d53-44f0-afc6-7fe52a10d225", "metadata": { "tags": [] }, "outputs": [], "source": [ "!mkdir openchatkit" ] },
 { "cell_type": "code", "execution_count": null, "id": "c295397f", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile openchatkit/serving.properties\n", "engine = Python\n", "option.tensor_parallel_degree = 4\n", "option.s3url = {{s3url}}" ] },
 { "cell_type": "code", "execution_count": null, "id": "9019c06a", "metadata": { "tags": [] }, "outputs": [], "source": [ "# we plug the appropriate model location into our `serving.properties` file based on the region in which this notebook is running\n", "template = jinja_env.from_string(Path(\"openchatkit/serving.properties\").open().read())\n", "Path(\"openchatkit/serving.properties\").open(\"w\").write(\n", "    template.render(s3url=pretrained_model_location)\n", ")\n", "!pygmentize openchatkit/serving.properties | cat -n" ] },
 { "cell_type": "markdown", "id": "00b55b06-57e0-4acf-99e5-4aeff7afded8", "metadata": {}, "source": [ "The code below implements the handling logic for the main OpenChatKit GPT-NeoX model. The overall solution is implemented across 4 files that handle:\n", "1. Receiving an inference request and handling it (`model.py`)\n", "2. Downloading and preparing the Wikipedia index (`wikipedia_prepare.py`)\n", "3. Searching the Wikipedia index for relevant documents (`wikipedia.py`)\n", "4. Storing and retrieving the conversation thread in DynamoDB for passing to the model and user (`conversation.py`)\n", "\n", "\n",
 "`model.py` implements a class `OpenChatKitService` which handles passing the data between the GPT-JT moderation model, the GPT-NeoX chat model, the Faiss search, and the conversation object. It is called when inference is performed, and it also generates a unique ID for each invocation if one is not supplied, which is used to store the prompts in DynamoDB.\n", "\n",
 "The `ChatModel` class loads the model and generates the response. A stopping criterion is configured so that generation only produces the bot response at inference time. This class also handles partitioning the model across multiple GPUs.\n", "\n",
 "The `ModerationModel` class loads the moderation model and generates the classification for moderation. If it finds that the classification is `\"needs intervention\"`, the return value will be `True` to advise the service to censor the response to the user."
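, "\n",
 "As a reference for the handler below, a request to the endpoint is a JSON document with `inputs` and `parameters`; passing `no_retrieval` skips the Wikipedia lookup, and `session_id` continues an earlier conversation. A minimal sketch (the parameter values here are only illustrative) looks like:\n", "\n",
 "```json\n",
 "{\n",
 "  \"inputs\": \"What do data engineers do?\",\n",
 "  \"parameters\": {\"temperature\": 0.6, \"top_k\": 40, \"max_new_tokens\": 512, \"no_retrieval\": true, \"session_id\": \"<id returned by a previous call>\"}\n",
 "}\n",
 "```\n", "\n",
 "The handler replies with a JSON document containing `outputs` and the `session_id`, so the returned identifier can be passed back in to continue the same conversation."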
] }, { "cell_type": "code", "execution_count": null, "id": "581a558b-d73a-4a7b-8526-d9edaeccd299", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile openchatkit/model.py\n", "import torch\n", "import logging\n", "import uuid\n", "import wikipedia as wp\n", "import conversation as convo\n", "\n", "from djl_python import Input, Output\n", "from transformers import (\n", " pipeline,\n", " AutoConfig,\n", " AutoModelForCausalLM,\n", " AutoTokenizer,\n", " StoppingCriteria,\n", " StoppingCriteriaList,\n", ")\n", "from accelerate import infer_auto_device_map, init_empty_weights\n", "from typing import Optional\n", "\n", "\n", "class StopWordsCriteria(StoppingCriteria):\n", " def __init__(self, tokenizer, stop_words):\n", " self._tokenizer = tokenizer\n", " self._stop_words = stop_words\n", " self._partial_result = \"\"\n", " self._stream_buffer = \"\"\n", "\n", " def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n", " first = not self._partial_result\n", " text = self._tokenizer.decode(input_ids[0, -1])\n", " self._partial_result += text\n", " for stop_word in self._stop_words:\n", " if stop_word in self._partial_result:\n", " return True\n", " return False\n", "\n", "\n", "class ModerationModel:\n", " def __init__(self, properties):\n", " tensor_parallel = int(properties.get(\"tensor_parallel_degree\", -1))\n", " model_location = \"togethercomputer/GPT-JT-Moderation-6B\"\n", "\n", " kwargs = {}\n", "\n", " config = AutoConfig.from_pretrained(model_location)\n", "\n", " with init_empty_weights():\n", " model_from_conf = AutoModelForCausalLM.from_config(config)\n", "\n", " model_from_conf.tie_weights()\n", "\n", " if \"dtype\" in properties:\n", " if properties[\"dtype\"] == \"float16\":\n", " dtype_str = \"float16\"\n", " dtype = torch.float16\n", " if properties[\"dtype\"] == \"bfloat16\":\n", " dtype_str = \"bfloat16\"\n", " dtype = torch.bfloat16\n", " else:\n", " dtype_str = \"float16\"\n", " dtype = torch.float16\n", "\n", " if \"device_map\" in properties:\n", " kwargs[\"device_map\"] = properties[\"device_map\"]\n", " elif tensor_parallel > 0:\n", " kwargs[\"device_map\"] = \"auto\"\n", " world_size = torch.cuda.device_count()\n", " assert (\n", " world_size == tensor_parallel\n", " ), f\"TP degree ({tensor_parallel}) doesn't match available GPUs ({world_size})\"\n", " logging.info(f\"Using {world_size} gpus\")\n", " else:\n", " kwargs[\"device_map\"] = infer_auto_device_map(\n", " model_from_conf,\n", " no_split_module_classes=[\"GPTNeoXLayer\"],\n", " dtype=dtype_str,\n", " )\n", "\n", " if \"low_cpu_mem_usage\" in properties:\n", " kwargs[\"low_cpu_mem_usage\"] = properties.get(\"low_cpu_mem_usage\")\n", "\n", " logging.info(f\"Using device map: {kwargs['device_map']}\")\n", " logging.info(f\"Loading model in {model_location} with kwargs {kwargs}\")\n", "\n", " self.tokenizer = AutoTokenizer.from_pretrained(model_location)\n", "\n", " self.model = AutoModelForCausalLM.from_pretrained(\n", " model_location,\n", " torch_dtype=dtype,\n", " offload_folder=\"/tmp/offload\",\n", " offload_state_dict=True,\n", " **kwargs,\n", " )\n", "\n", " logging.info(f\"Moderation Model loaded Initialized\")\n", "\n", " def clean_output(self, output):\n", " # Clean the input prompt out\n", " print(output.split(\"Output:\"))\n", " _output = output.split(\"Output:\")[-1]\n", "\n", " return _output\n", "\n", " def inference(self, prompt):\n", " prompt = f\"\"\"Possible labels:\n", " 1. casual\n", " 2. needs caution\n", " 3. 
needs intervention\n", " 4. possibly needs caution\n", " 5. probably needs caution\n", " \n", " \n", " Input: I want to call in sick\n", " Output: casual\n", "\n", " Input: I hate my job and I don't want to go there.\n", " Output: possibly needs caution\n", "\n", " Input: It's so hard to deal with this. I hate this job and I want to quit.\n", " Output: possibly needs caution\n", "\n", " Input: Something tells me that you're sitting in the wrong section of the Stormfront bus.\n", " Output: needs caution\n", "\n", " Input: {prompt}\n", " Output:\"\"\"\n", "\n", " print(prompt)\n", "\n", " print(\"Inference moderation complete\")\n", "\n", " stop_criteria = StopWordsCriteria(self.tokenizer, [\"Input:\"])\n", "\n", " inputs = self.tokenizer(prompt, return_tensors=\"pt\").to(self.model.device)\n", "\n", " outputs = self.model.generate(\n", " **inputs,\n", " max_new_tokens=512,\n", " stopping_criteria=StoppingCriteriaList([stop_criteria]),\n", " )\n", "\n", " output = self.tokenizer.batch_decode(outputs)[0]\n", " output = self.clean_output(output)\n", " if \"needs intervention\" in output:\n", " return True\n", " else:\n", " return False\n", "\n", "\n", "class ChatModel:\n", " human_id = \"\"\n", " bot_id = \"\"\n", "\n", " def __init__(self, properties):\n", " tensor_parallel = int(properties.get(\"tensor_parallel_degree\", -1))\n", " if \"model_dir\" in properties:\n", " model_location = properties[\"model_dir\"]\n", " if \"model_id\" in properties:\n", " model_location = properties[\"model_id\"]\n", "\n", " kwargs = {}\n", "\n", " config = AutoConfig.from_pretrained(model_location)\n", "\n", " with init_empty_weights():\n", " model_from_conf = AutoModelForCausalLM.from_config(config)\n", "\n", " model_from_conf.tie_weights()\n", "\n", " if \"dtype\" in properties:\n", " if properties[\"dtype\"] == \"float16\":\n", " dtype_str = \"float16\"\n", " dtype = torch.float16\n", " if properties[\"dtype\"] == \"bfloat16\":\n", " dtype_str = \"bfloat16\"\n", " dtype = torch.bfloat16\n", " else:\n", " dtype_str = \"float16\"\n", " dtype = torch.float16\n", "\n", " if \"device_map\" in properties:\n", " kwargs[\"device_map\"] = properties[\"device_map\"]\n", " elif tensor_parallel > 0:\n", " kwargs[\"device_map\"] = \"auto\"\n", " world_size = torch.cuda.device_count()\n", " assert (\n", " world_size == tensor_parallel\n", " ), f\"TP degree ({tensor_parallel}) doesn't match available GPUs ({world_size})\"\n", " logging.info(f\"Using {world_size} gpus\")\n", " else:\n", " kwargs[\"device_map\"] = infer_auto_device_map(\n", " model_from_conf,\n", " no_split_module_classes=[\"GPTNeoXLayer\"],\n", " dtype=dtype_str,\n", " )\n", "\n", " if \"low_cpu_mem_usage\" in properties:\n", " kwargs[\"low_cpu_mem_usage\"] = properties.get(\"low_cpu_mem_usage\")\n", "\n", " logging.info(f\"Using device map: {kwargs['device_map']}\")\n", " logging.info(f\"Loading model in {model_location} with kwargs {kwargs}\")\n", "\n", " self.tokenizer = AutoTokenizer.from_pretrained(model_location)\n", "\n", " self.model = AutoModelForCausalLM.from_pretrained(\n", " model_location,\n", " torch_dtype=dtype,\n", " offload_folder=\"/tmp/offload\",\n", " offload_state_dict=True,\n", " **kwargs,\n", " )\n", "\n", " logging.info(f\"ChatModel loaded Initialized\")\n", "\n", " def do_inference(self, prompt, **generate_kwargs):\n", " stop_criteria = StopWordsCriteria(self.tokenizer, [self.human_id])\n", " inputs = self.tokenizer(prompt, return_tensors=\"pt\").to(self.model.device)\n", "\n", " outputs = self.model.generate(\n", " **inputs,\n", 
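"            # Pad with the EOS token and stop generating as soon as the stop words (the human turn marker) appear\n",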
" pad_token_id=self.tokenizer.eos_token_id,\n", " stopping_criteria=StoppingCriteriaList([stop_criteria]),\n", " **generate_kwargs,\n", " )\n", "\n", " output = self.tokenizer.batch_decode(outputs)[0]\n", " output = output.split(self.bot_id)[-1].strip()\n", "\n", " return output\n", "\n", "\n", "class OpenChatKitService:\n", " def __init__(self):\n", " self.input_model = None\n", " self.model = None\n", " self.output_model = None\n", " self.initialized = False\n", " self.index = None\n", " self.conversation = None\n", "\n", " def initialize(self, properties):\n", " print(\"Done\")\n", " logging.info(f\"Loading models...\")\n", " self.input_model = ModerationModel(properties)\n", " self.model = ChatModel(properties)\n", " self.output_model = ModerationModel(properties)\n", "\n", " logging.info(\"Loading Wikipedia Retrieval\")\n", " import wikipedia_prepare\n", "\n", " self.index = wp.WikipediaIndex()\n", " self.conversation = convo.Conversation(self.model.human_id, self.model.bot_id)\n", " self.initialized = True\n", "\n", " def inference(self, inputs: Input):\n", " data = inputs.get_as_json()\n", "\n", " input_sentences = data[\"inputs\"]\n", " params = data[\"parameters\"]\n", "\n", " print(params)\n", "\n", " if self.input_model.inference(input_sentences):\n", " return Output().add_as_json(\n", " {\"outputs\": \"Unfortunately I am unable to provide any information about this topic\"}\n", " )\n", "\n", " if \"session_id\" in params.keys():\n", " session_id = params.pop(\"session_id\")\n", " else:\n", " session_id = str(uuid.uuid4())\n", "\n", " if \"no_retrieval\" not in params.keys():\n", " results = self.index.search(input_sentences)\n", " if len(results) > 0:\n", " self.conversation.push_context_turn(results[0], session_id)\n", " else:\n", " params.pop(\"no_retrieval\")\n", "\n", " self.conversation.push_human_turn(input_sentences, session_id)\n", "\n", " output = self.model.do_inference(self.conversation.get_raw_prompt(session_id), **params)\n", "\n", " self.conversation.push_model_response(output, session_id)\n", "\n", " response = self.conversation.get_last_turn(session_id).strip()\n", "\n", " if self.output_model.inference(response):\n", " return Output().add_as_json(\n", " {\"outputs\": \"Unfortunately I am unable to provide any information about this topic\"}\n", " )\n", "\n", " return Output().add_as_json({\"outputs\": response, \"session_id\": session_id})\n", "\n", "\n", "_service = OpenChatKitService()\n", "\n", "\n", "def handle(inputs: Input) -> Optional[Output]:\n", " if not _service.initialized:\n", " _service.initialize(inputs.get_properties())\n", "\n", " if inputs.is_empty():\n", " return None\n", "\n", " return _service.inference(inputs)" ] }, { "cell_type": "markdown", "id": "e39a4630-9d8d-4e22-a052-db55e2625c57", "metadata": {}, "source": [ "`conversation.py` is adapted from the open source OpenChatKit repository. This file is responsible for defining the object that stores the conversation turns between the human and the model. With this, the model is able to retain a session for the conversation allowing a user to refer to previous messages. \n", "\n", "As SageMaker endpoint invocations are stateless, this conversation needs to be stored in a location external to the endpoint instances. On startup, the instance will create a DynamoDB table if it does not exist. All updates to the conversation are then stored in DynamoDB based on the `session_id` key which is generated by the endpoint. 
Any invocation with a session ID will retrieve the associated conversation string and update it as required." ] }, { "cell_type": "code", "execution_count": null, "id": "2f20beea-1b9d-441c-9f8c-d684de3892eb", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile openchatkit/conversation.py\n", "# This file was adapted from togethercomputer/openchatkit:\n", "# https://github.com/togethercomputer/OpenChatKit/blob/main/inference/conversation.py\n", "#\n", "# The original file was licensed under the Apache 2.0 License\n", "\n", "import re\n", "import time\n", "import boto3\n", "import logging\n", "\n", "MEANINGLESS_WORDS = [\"\", \"\", \"<|endoftext|>\"]\n", "PRE_PROMPT = \"\"\"\\\n", "Current Date: {}\n", "Current Time: {}\n", "\n", "\"\"\"\n", "\n", "\n", "def clean_response(response):\n", " for word in MEANINGLESS_WORDS:\n", " response = response.replace(word, \"\")\n", " response = response.strip(\"\\n\")\n", " return response\n", "\n", "\n", "class Conversation:\n", " DEFAULT_KEY_NAME = \"session_id\"\n", "\n", " def __init__(self, human_id, bot_id, db_name=\"openchatkit_chat_logs\"):\n", " cur_date = time.strftime(\"%Y-%m-%d\")\n", " cur_time = time.strftime(\"%H:%M:%S %p %Z\")\n", "\n", " self._human_id = human_id\n", " self._bot_id = bot_id\n", " prompt = PRE_PROMPT.format(cur_date, cur_time)\n", " self.db_name = db_name\n", " self.ddb_client = boto3.client(\"dynamodb\")\n", "\n", " try:\n", " self.ddb_client.describe_table(TableName=db_name)\n", " except self.ddb_client.exceptions.ResourceNotFoundException:\n", " logging.info(f\"Table {db_name} not found. Creating...\")\n", " self.ddb_client.create_table(\n", " TableName=db_name,\n", " AttributeDefinitions=[\n", " {\"AttributeName\": self.DEFAULT_KEY_NAME, \"AttributeType\": \"S\"},\n", " ],\n", " KeySchema=[{\"AttributeName\": self.DEFAULT_KEY_NAME, \"KeyType\": \"HASH\"}],\n", " BillingMode=\"PAY_PER_REQUEST\",\n", " )\n", " waiter = self.ddb_client.get_waiter(\"table_exists\")\n", " waiter.wait(TableName=db_name, WaiterConfig={\"Delay\": 1})\n", "\n", " def push_context_turn(self, context, session_id):\n", " # for now, context is represented as a human turn\n", " prompt = self.get_raw_prompt(session_id)\n", " prompt += f\"{self._human_id}: {context}\\n\"\n", " self.set_prompt(session_id, prompt)\n", "\n", " def push_human_turn(self, query, session_id):\n", " prompt = self.get_raw_prompt(session_id)\n", " prompt += f\"{self._human_id}: {query}\\n\"\n", " prompt += f\"{self._bot_id}:\"\n", " self.set_prompt(session_id, prompt)\n", "\n", " def push_model_response(self, response, session_id):\n", " has_finished = self._human_id in response\n", " bot_turn = response.split(f\"{self._human_id}:\")[0]\n", " bot_turn = clean_response(bot_turn)\n", " # if it is truncated, then append \"...\" to the end of the response\n", " if not has_finished:\n", " bot_turn += \"...\"\n", "\n", " prompt = self.get_raw_prompt(session_id)\n", " prompt += f\"{bot_turn}\\n\"\n", " self.set_prompt(session_id, prompt)\n", "\n", " def get_last_turn(self, session_id):\n", " human_tag = f\"{self._human_id}:\"\n", " bot_tag = f\"{self._bot_id}:\"\n", " prompt = self.get_raw_prompt(session_id)\n", " turns = re.split(f\"({human_tag}|{bot_tag})\\W?\", prompt)\n", " # print(turns)\n", " return turns[-1]\n", "\n", " def set_prompt(self, session_id, prompt):\n", " self.ddb_client.put_item(\n", " TableName=self.db_name,\n", " Item={self.DEFAULT_KEY_NAME: {\"S\": session_id}, \"content\": {\"S\": prompt}},\n", " )\n", "\n", " def get_raw_prompt(self, 
session_id):\n",
 "        data = self.ddb_client.get_item(\n",
 "            TableName=self.db_name, Key={self.DEFAULT_KEY_NAME: {\"S\": session_id}}\n",
 "        )\n",
 "\n",
 "        # If no data is associated with the session id (meaning session did not exist)\n",
 "        if \"Item\" not in data.keys():\n",
 "            cur_date = time.strftime(\"%Y-%m-%d\")\n",
 "            cur_time = time.strftime(\"%H:%M:%S %p %Z\")\n",
 "            prompt = PRE_PROMPT.format(cur_date, cur_time)\n",
 "            self.set_prompt(session_id, prompt)\n",
 "            return prompt\n",
 "\n",
 "        return data[\"Item\"][\"content\"][\"S\"]\n",
 "\n",
 "    def from_raw_prompt(self, value, session_id):\n",
 "        self.set_prompt(session_id, value)" ] },
 { "cell_type": "markdown", "id": "e8579d1c-bb15-47da-af61-d57ffc5a777d", "metadata": {}, "source": [ "In order to search the Wikipedia documents for relevant text, the index needs to be downloaded from HuggingFace as it is not packaged elsewhere.\n", "\n", "This file is responsible for handling the download when it is imported. Only one of the multiple worker processes running for inference clones the repository; the rest wait until the files are present in the local filesystem." ] },
 { "cell_type": "code", "execution_count": null, "id": "a39c479b-1eae-431d-abf6-18b922e14271", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile openchatkit/wikipedia_prepare.py\n", "# This file was adapted from togethercomputer/openchatkit:\n", "# https://github.com/togethercomputer/OpenChatKit/blob/main/data/wikipedia-3sentence-level-retrieval-index/prepare.py\n", "#\n", "# The original file was licensed under the Apache 2.0 license.\n", "\n", "import os\n", "import subprocess\n", "import time\n", "\n", "DIR = os.path.dirname(os.path.abspath(__file__))\n", "print(DIR)\n", "\n", "if not os.path.isdir(\"/tmp/files/index\"):\n", "    print(\"Cloning the Wikipedia index to local storage\")\n", "    try:\n", "        subprocess.run(\n", "            \"git clone https://huggingface.co/datasets/ChristophSchuhmann/wikipedia-3sentence-level-retrieval-index /tmp/files/index\",\n", "            shell=True,\n", "            check=True,\n", "        )\n", "    except subprocess.CalledProcessError:\n", "        # Another worker may already be cloning into the same directory; fall through and wait below\n", "        pass\n", "\n", "while not os.path.isfile(os.path.join(\"/tmp/files/index\", \"wikipedia-en-sentences.parquet\")):\n", "    time.sleep(5)\n", "    print(\"Waiting for clone to finish...\")\n", "print(\"Wikipedia index is ready\")" ] },
 { "cell_type": "markdown", "id": "85e2eafe-4f7b-43bc-be06-85ddb02e32e5", "metadata": {}, "source": [ "This code is responsible for loading and searching the Wikipedia document index. The retrieved snippets provide additional context to the chatbot, which can improve the quality of its responses."
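, "\n",
 "As a rough sketch of how the `WikipediaIndex` class defined in the next cell can be exercised on its own (assuming the index files have already been cloned to `/tmp/files/index` and that a GPU plus the `facebook/contriever-msmarco` encoder are available):\n", "\n",
 "```python\n",
 "from wikipedia import WikipediaIndex  # the local wikipedia.py module below, not the PyPI package\n",
 "\n",
 "index = WikipediaIndex()  # loads knn.index and the sentence parquet from /tmp/files/index\n",
 "passages = index.search(\"Who created the Python programming language?\", k=1)\n",
 "print(passages[0])  # best-matching snippet, extended with neighbouring sentences\n",
 "```\n", "\n",
 "Inside the endpoint this is what `OpenChatKitService` does before pushing the retrieved text onto the conversation as a context turn."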
] }, { "cell_type": "code", "execution_count": null, "id": "3d6b528d-df26-4849-bb86-e7e18a97ce1a", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%writefile openchatkit/wikipedia.py\n", "# This file was adapted from ChristophSchuhmann/wikipedia-3sentence-level-retrieval-index:\n", "# https://huggingface.co/datasets/ChristophSchuhmann/wikipedia-3sentence-level-retrieval-index/blob/main/wikiindexquery.py\n", "#\n", "# The original file was licensed under the Apache 2.0 license.\n", "\n", "import os\n", "\n", "from transformers import AutoTokenizer, AutoModel\n", "import faiss\n", "import numpy as np\n", "import pandas as pd\n", "\n", "DIR = os.path.dirname(os.path.abspath(__file__))\n", "\n", "\n", "def mean_pooling(token_embeddings, mask):\n", " token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)\n", " sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]\n", " return sentence_embeddings\n", "\n", "\n", "def cos_sim_2d(x, y):\n", " norm_x = x / np.linalg.norm(x, axis=1, keepdims=True)\n", " norm_y = y / np.linalg.norm(y, axis=1, keepdims=True)\n", " return np.matmul(norm_x, norm_y.T)\n", "\n", "\n", "class WikipediaIndex:\n", " def __init__(self):\n", " path = os.path.join(\"/tmp/files\", \"index\")\n", " indexpath = os.path.join(path, \"knn.index\")\n", " wiki_sentence_path = os.path.join(path, \"wikipedia-en-sentences.parquet\")\n", "\n", " self._device = \"cuda\"\n", " self._tokenizer = AutoTokenizer.from_pretrained(\"facebook/contriever-msmarco\")\n", " self._contriever = AutoModel.from_pretrained(\"facebook/contriever-msmarco\").to(self._device)\n", "\n", " self._df_sentences = pd.read_parquet(wiki_sentence_path, engine=\"fastparquet\")\n", "\n", " self._wiki_index = faiss.read_index(indexpath, faiss.IO_FLAG_MMAP | faiss.IO_FLAG_READ_ONLY)\n", "\n", " def search(self, query, k=1, w=5, w_th=0.5):\n", " inputs = self._tokenizer(query, padding=True, truncation=True, return_tensors=\"pt\").to(\n", " self._device\n", " )\n", " outputs = self._contriever(**inputs)\n", " embeddings = mean_pooling(outputs[0], inputs[\"attention_mask\"])\n", "\n", " query_vector = embeddings.cpu().detach().numpy().reshape(1, -1)\n", "\n", " distances, indices = self._wiki_index.search(query_vector, k)\n", "\n", " texts = []\n", " for i, (dist, indice) in enumerate(zip(distances[0], indices[0])):\n", " text = self._df_sentences.iloc[indice][\"text_snippet\"]\n", "\n", " try:\n", " input_texts = [self._df_sentences.iloc[indice][\"text_snippet\"]]\n", " for j in range(1, w + 1):\n", " input_texts = [\n", " self._df_sentences.iloc[indice - j][\"text_snippet\"]\n", " ] + input_texts\n", " for j in range(1, w + 1):\n", " input_texts = input_texts + [\n", " self._df_sentences.iloc[indice + j][\"text_snippet\"]\n", " ]\n", "\n", " inputs = self._tokenizer(\n", " input_texts, padding=True, truncation=True, return_tensors=\"pt\"\n", " ).to(self._device)\n", "\n", " outputs = self._contriever(**inputs)\n", " embeddings = (\n", " mean_pooling(outputs[0], inputs[\"attention_mask\"]).detach().cpu().numpy()\n", " )\n", "\n", " for j in range(1, w + 1):\n", " if (\n", " cos_sim_2d(\n", " embeddings[w - j].reshape(1, -1),\n", " embeddings[w].reshape(1, -1),\n", " )\n", " > w_th\n", " ):\n", " text = self._df_sentences.iloc[indice - j][\"text_snippet\"] + text\n", " else:\n", " break\n", "\n", " for j in range(1, w + 1):\n", " if (\n", " cos_sim_2d(\n", " embeddings[w + j].reshape(1, -1),\n", " embeddings[w].reshape(1, -1),\n", " )\n", " > w_th\n", " ):\n", " text += 
self._df_sentences.iloc[indice + j][\"text_snippet\"]\n",
 "                    else:\n",
 "                        break\n",
 "\n",
 "            except Exception as e:\n",
 "                print(e)\n",
 "\n",
 "            texts.append(text)\n",
 "\n",
 "        return texts" ] },
 { "cell_type": "markdown", "id": "90dd77c2-a18b-4c63-bc63-e5fe3b03c128", "metadata": {}, "source": [ "Another feature of OpenChatKit is its moderation capability. While the chat model itself has some moderation built in, TogetherComputer trained a [GPT-JT-Moderation-6B](https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B) model with Ontocord.ai's [OIG-moderation dataset](https://huggingface.co/datasets/ontocord/OIG-moderation). This model runs alongside the main chatbot to check that neither the user input nor the answer from the bot contains inappropriate content. If either does, the moderation model classifies it as needing intervention and the service overrides the inference result with a canned refusal.\n", "\n", "The input moderation model checks the user utterance before it reaches the chat model, and the output moderation model checks the generated response before it is returned." ] },
 { "cell_type": "markdown", "id": "5dd60a29", "metadata": {}, "source": [ "**Image URI for the DJL container is being used here**" ] },
 { "cell_type": "code", "execution_count": null, "id": "3884d357", "metadata": { "tags": [] }, "outputs": [], "source": [ "inference_image_uri = image_uris.retrieve(\n", "    framework=\"djl-deepspeed\", region=sess.boto_session.region_name, version=\"0.21.0\"\n", ")\n", "\n", "print(f\"Image going to be used is ---- > {inference_image_uri}\")" ] },
 { "cell_type": "markdown", "id": "5bb6b1c9-e7ed-4873-b2f1-7a4dad9a51ac", "metadata": {}, "source": [ "The index search uses Facebook's [Faiss](https://github.com/facebookresearch/faiss) library for performing the similarity search. As this is not included in the base LMI image, the container needs to be adapted to install this library. The code below defines a Dockerfile which installs Faiss from source alongside other libraries needed by the bot endpoint." ] },
 { "cell_type": "code", "execution_count": null, "id": "62f4fcf5-9a13-4cb8-8e9b-5218358048ed", "metadata": {}, "outputs": [], "source": [ "%%writefile Dockerfile.template\n", "FROM {{imagebase}}\n", "\n", "ARG FAISS_URL=https://github.com/facebookresearch/faiss.git\n", "RUN apt-get update && apt-get install -y git-lfs wget cmake pkg-config build-essential apt-utils\n", "RUN apt-get install -y libopenblas-dev swig\n", "\n", "RUN git clone $FAISS_URL && \\\n", "    cd faiss && \\\n", "    cmake -B build . 
-DFAISS_OPT_LEVEL=avx2 -DCMAKE_CUDA_ARCHITECTURES=\"86\" && \\\n",
 "    make -C build -j faiss && \\\n",
 "    make -C build -j swigfaiss && \\\n",
 "    make -C build -j swigfaiss_avx2 && \\\n",
 "    (cd build/faiss/python && python -m pip install .)\n",
 "\n",
 "RUN pip install pandas fastparquet boto3 && \\\n",
 "    git lfs install --skip-repo && \\\n",
 "    apt-get clean all" ] },
 { "cell_type": "code", "execution_count": null, "id": "1fce626f-5a03-4334-9138-81daeb65b1ef", "metadata": { "tags": [] }, "outputs": [], "source": [ "# we plug the DJL inference image URI into our Dockerfile template as the base image\n", "template = jinja_env.from_string(Path(\"Dockerfile.template\").open().read())\n", "Path(\"Dockerfile\").open(\"w\").write(template.render(imagebase=inference_image_uri))\n", "!pygmentize Dockerfile | cat -n" ] },
 { "cell_type": "markdown", "id": "d91cf177-69a0-414c-bd63-fc1c7294d093", "metadata": {}, "source": [ "This uses the [SageMaker Studio Image Build CLI](https://github.com/aws-samples/sagemaker-studio-image-build-cli) to build the Docker image defined above, because SageMaker Studio does not allow Docker to be installed for building images locally. The CLI leverages CodeBuild to build the image remotely and push it to a private ECR repository.\n", "\n", "The same Dockerfile can be built anywhere that allows running Docker commands and pushing to a relevant ECR repository." ] },
 { "cell_type": "code", "execution_count": null, "id": "ff14cb74-e86c-407a-a6ef-d7fa9455aed1", "metadata": { "tags": [] }, "outputs": [], "source": [ "!sm-docker build . --repository openchatkit:djl --compute-type BUILD_GENERAL1_2XLARGE" ] },
 { "cell_type": "code", "execution_count": null, "id": "e1ba8049-c4a2-420b-824a-98a712356435", "metadata": { "tags": [] }, "outputs": [], "source": [ "chat_inference_image_uri = (\n", "    f\"{sess.account_id()}.dkr.ecr.{sess.boto_session.region_name}.amazonaws.com/openchatkit:djl\"\n", ")" ] },
 { "cell_type": "markdown", "id": "4ddce346", "metadata": {}, "source": [ "**Create the Tarball and then upload to S3 location**" ] },
 { "cell_type": "code", "execution_count": null, "id": "c9c52338", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%sh\n", "tar czvf model.tar.gz openchatkit/\n", "rm -rf openchatkit" ] },
 { "cell_type": "code", "execution_count": null, "id": "f388dd32", "metadata": { "tags": [] }, "outputs": [], "source": [ "s3_code_artifact = sess.upload_data(\"model.tar.gz\", bucket, s3_code_prefix)\n", "print(f\"S3 Code or Model tar ball uploaded to --- > {s3_code_artifact}\")" ] },
 { "cell_type": "markdown", "id": "e60ecd16", "metadata": {}, "source": [ "### To create the endpoint, the steps are:\n", "\n", "1. Build an image adapted from the DJL container that installs Faiss for information retrieval\n", "2. Create the Model using the image and the model tarball uploaded earlier\n", "3. Create the endpoint config using the following key parameters\n", "\n", "    a) Instance Type is ml.g5.12xlarge \n", "    \n", "    b) ContainerStartupHealthCheckTimeoutInSeconds is 3600 so that the container has enough time to load the models before the startup health check must pass \n", "4. Create the endpoint using the endpoint config created above \n", "    " ] },
 { "cell_type": "markdown", "id": "649cdd53", "metadata": {}, "source": [ "#### Create the Model\n", "Use the image URI built from the DJL container and the s3 location to which the tarball was uploaded. 
The moderation models will use the DJL container.\n", "\n", "The container downloads the model into the `/tmp` space on the instance because SageMaker maps the `/tmp` to the Amazon Elastic Block Store (Amazon EBS) volume that is mounted when we specify the endpoint creation parameter VolumeSizeInGB. It leverages `s5cmd`(https://github.com/peak/s5cmd) which offers a very fast download speed and hence extremely useful when downloading large models.\n", "\n", "For instances like p4dn, which come pre-built with the volume instance, we can continue to leverage the `/tmp` on the container. The size of this mount is large enough to hold the model.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "2cf704c4-a1fc-4320-87fc-8583fb9c09cb", "metadata": { "tags": [] }, "outputs": [], "source": [ "chat_inference_image_uri" ] }, { "cell_type": "code", "execution_count": null, "id": "ccde7032-ba10-4174-b641-5570b11cd75e", "metadata": { "tags": [] }, "outputs": [], "source": [ "from sagemaker.utils import name_from_base\n", "\n", "chat_model_name = name_from_base(f\"gpt-neoxt-chatbase-ds\")\n", "print(chat_model_name)\n", "\n", "create_model_response = sm_client.create_model(\n", " ModelName=chat_model_name,\n", " ExecutionRoleArn=role,\n", " PrimaryContainer={\n", " \"Image\": chat_inference_image_uri,\n", " \"ModelDataUrl\": s3_code_artifact,\n", " },\n", ")\n", "chat_model_arn = create_model_response[\"ModelArn\"]\n", "\n", "print(f\"Created Model: {chat_model_arn}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "38025fec", "metadata": { "tags": [] }, "outputs": [], "source": [ "chat_endpoint_config_name = f\"{chat_model_name}-config\"\n", "chat_endpoint_name = f\"{chat_model_name}-endpoint\"\n", "\n", "chat_endpoint_config_response = sm_client.create_endpoint_config(\n", " EndpointConfigName=chat_endpoint_config_name,\n", " ProductionVariants=[\n", " {\n", " \"VariantName\": \"variant1\",\n", " \"ModelName\": chat_model_name,\n", " \"InstanceType\": \"ml.g5.12xlarge\",\n", " \"InitialInstanceCount\": 1,\n", " \"ContainerStartupHealthCheckTimeoutInSeconds\": 3600,\n", " },\n", " ],\n", ")\n", "\n", "print(chat_endpoint_config_response)" ] }, { "cell_type": "code", "execution_count": null, "id": "53d84ba6", "metadata": { "tags": [] }, "outputs": [], "source": [ "chat_create_endpoint_response = sm_client.create_endpoint(\n", " EndpointName=f\"{chat_endpoint_name}\", EndpointConfigName=chat_endpoint_config_name\n", ")\n", "\n", "print(f\"Created Endpoint: {chat_create_endpoint_response['EndpointArn']},\")" ] }, { "cell_type": "markdown", "id": "d7454013", "metadata": {}, "source": [ "### This step can take ~ 10 min or longer so please be patient" ] }, { "cell_type": "code", "execution_count": null, "id": "0de9b6eb", "metadata": { "tags": [] }, "outputs": [], "source": [ "import time\n", "\n", "resp = sm_client.describe_endpoint(EndpointName=chat_endpoint_name)\n", "status = resp[\"EndpointStatus\"]\n", "chat_resp = sm_client.describe_endpoint(EndpointName=chat_endpoint_name)\n", "chat_status = chat_resp[\"EndpointStatus\"]\n", "print(\"Status: \" + status)\n", "\n", "while chat_status == \"Creating\":\n", " time.sleep(60)\n", " chat_resp = sm_client.describe_endpoint(EndpointName=chat_endpoint_name)\n", " chat_status = chat_resp[\"EndpointStatus\"]\n", " print(f\"Status: {chat_status}...\")\n", "\n", "print(f\"Arns: {chat_resp['EndpointArn']}\")\n", "print(f\"Status: {chat_status}\")" ] }, { "cell_type": "markdown", "id": "932d8421", "metadata": {}, "source": [ "#### While you 
wait for the endpoint to be created, you can read more about:\n", "- [Deep Learning containers for large model inference](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-dlc.html)\n", "- [Accelerate](https://huggingface.co/docs/accelerate/index)" ] }, { "cell_type": "markdown", "id": "12f1fa1b", "metadata": {}, "source": [ "#### Leverage the Boto3 to invoke the endpoint. \n", "\n", "This is a generative model, so we pass in a Text as a prompt and Model will complete the sentence and return the results.\n", "\n", "You can pass a batch of prompts as input to the model. This done by setting `inputs` to the list of prompts. The model then returns a result for each prompt. The text generation can be configured using appropriate parameters. These `parameters` need to be passed to the endpoint. Refer to this documentation - https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig for more details.\n", "\n", "The below code sample illustrates the invocation of the endpoint using a prompt and also sets some parameters for inference. The function allows for a session ID to be provided for re-using previous inputs and outputs as additional context for a conversation.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "8cf6123b-1c2a-4ba8-9a56-707296b90701", "metadata": { "tags": [] }, "outputs": [], "source": [ "def chat(prompt, session_id=None, **kwargs):\n", " if session_id:\n", " chat_response_model = smr_client.invoke_endpoint(\n", " EndpointName=chat_endpoint_name,\n", " Body=json.dumps(\n", " {\n", " \"inputs\": prompt,\n", " \"parameters\": {\n", " \"temperature\": 0.6,\n", " \"top_k\": 40,\n", " \"max_new_tokens\": 512,\n", " \"session_id\": session_id,\n", " \"no_retrieval\": True,\n", " },\n", " }\n", " ),\n", " ContentType=\"application/json\",\n", " )\n", " else:\n", " chat_response_model = smr_client.invoke_endpoint(\n", " EndpointName=chat_endpoint_name,\n", " Body=json.dumps(\n", " {\n", " \"inputs\": prompt,\n", " \"parameters\": {\n", " \"temperature\": 0.6,\n", " \"top_k\": 40,\n", " \"max_new_tokens\": 512,\n", " },\n", " }\n", " ),\n", " ContentType=\"application/json\",\n", " )\n", "\n", " response = chat_response_model[\"Body\"].read().decode(\"utf8\")\n", " return json.loads(response)" ] }, { "cell_type": "code", "execution_count": null, "id": "445ad392-adab-4717-b286-22456f209085", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompts = \"What do data engineers do?\"" ] }, { "cell_type": "code", "execution_count": null, "id": "b838a408-d330-4788-be1f-ed8314d8e318", "metadata": { "tags": [] }, "outputs": [], "source": [ "response = chat(prompts)\n", "\n", "response" ] }, { "cell_type": "code", "execution_count": null, "id": "7ee01458-f608-4979-81a7-9da999cf98ff", "metadata": { "tags": [] }, "outputs": [], "source": [ "chat(\"What frameworks do they work with?\", session_id=response[\"session_id\"])" ] }, { "cell_type": "markdown", "id": "b11c344c", "metadata": {}, "source": [ "## Clean Up" ] }, { "cell_type": "code", "execution_count": null, "id": "9947d080", "metadata": {}, "outputs": [], "source": [ "# # - Delete the end point\n", "sm_client.delete_endpoint(EndpointName=chat_endpoint_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "f92e2391", "metadata": { "tags": [] }, "outputs": [], "source": [ "# # - In case the end point failed we still want to delete the model\n", "sm_client.delete_endpoint_config(EndpointConfigName=chat_endpoint_config_name)\n", 
"sm_client.delete_model(ModelName=chat_model_name)" ] }, { "cell_type": "code", "execution_count": null, "id": "8dbc46bc-d9e9-4f10-9064-37919dbdc85e", "metadata": { "tags": [] }, "outputs": [], "source": [ "dynamodb_client = boto3.client(\"dynamodb\")\n", "dynamodb_client.delete_table(TableName=\"openchatkit_chat_logs\")" ] }, { "cell_type": "markdown", "id": "5296eca7-728e-4267-90e8-e68df33fe550", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/inference|generativeai|llm-workshop|lab4-openchatkit|deploy_openchatkit_on_sagemaker.ipynb)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General 
purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": 
"ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": 
false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:470317259841:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 5 }