{ "cells": [ { "cell_type": "markdown", "id": "9fb44f6d", "metadata": {}, "source": [ "# Introduction to SageMaker JumpStart - Text Generation with Falcon models" ] }, { "cell_type": "markdown", "id": "7da2e71e", "metadata": { "collapsed": false }, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.\n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|jumpstart-foundation-models|text-generation-falcon.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "6101cb04", "metadata": { "collapsed": false }, "source": [ "---\n", "In this demo notebook, we demonstrate how to use the SageMaker Python SDK to deploy Falcon models for text generation. It is a permissively licensed ([Apache-2.0](https://jumpstart-cache-prod-us-east-2.s3.us-east-2.amazonaws.com/licenses/Apache-License/LICENSE-2.0.txt)) open source model trained on the [RefinedWeb dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb). We show several example use cases including code generation, question answering, translation etc.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "9b05b931-992e-4526-978d-f03196874a3b", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install sagemaker --quiet --upgrade --force-reinstall\n", "!pip install ipywidgets==7.0.0 --quiet" ] }, { "cell_type": "code", "execution_count": null, "id": "c8dd6de9-0bc2-4d2c-b428-7203bb31fa3c", "metadata": { "jumpStartAlterations": [ "modelIdVersion" ], "tags": [] }, "outputs": [], "source": [ "model_id, model_version, = (\n", " \"huggingface-llm-falcon-7b-instruct-bf16\",\n", " \"*\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "70215fdd", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "tags": [] }, "outputs": [], "source": [ "from ipywidgets import Dropdown\n", "\n", "model_ids = [\n", " \"huggingface-llm-falcon-40b-bf16\",\n", " \"huggingface-llm-falcon-40b-instruct-bf16\",\n", " \"huggingface-llm-falcon-7b-bf16\",\n", " \"huggingface-llm-falcon-7b-instruct-bf16\",\n", "]\n", "\n", "# display the model-ids in a dropdown to select a model for inference.\n", "model_dropdown = Dropdown(\n", " options=model_ids,\n", " value=model_id,\n", " description=\"Select a model\",\n", " style={\"description_width\": \"initial\"},\n", " layout={\"width\": \"max-content\"},\n", ")\n", "display(model_dropdown)" ] }, { "cell_type": "code", "execution_count": null, "id": "5970aa71", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false }, "tags": [] }, "outputs": [], "source": [ "model_id = model_dropdown.value" ] }, { "cell_type": "code", "execution_count": null, "id": "85a2a8e5-789f-4041-9927-221257126653", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "from sagemaker.jumpstart.model import JumpStartModel\n", "\n", "my_model = JumpStartModel(model_id=model_id)\n", "predictor = my_model.deploy()" ] }, { "cell_type": "markdown", "id": "67abf8ea-16c7-4d55-8500-bfd2a16d1294", "metadata": {}, "source": [ "### Changing instance type\n", "---\n", "\n", "\n", "Models have been tested on the following instance types:\n", "\n", " - Falcon 7B and 7B instruct: `ml.g5.2xlarge`, `ml.g5.2xlarge`, `ml.g5.4xlarge`, `ml.g5.8xlarge`, `ml.g5.16xlarge`, 
`ml.g5.12xlarge`, `ml.g5.24xlarge`, `ml.g5.48xlarge`, `ml.p4d.24xlarge`\n", " - Falcon 40B and 40B instruct: `ml.g5.12xlarge`, `ml.g5.48xlarge`, `ml.p4d.24xlarge`\n", "\n", "If an instance type is not available in your region, please try a different one by specifying the instance type in the JumpStartModel class:\n", "\n", "`my_model = JumpStartModel(model_id=\"huggingface-llm-falcon-40b-instruct-bf16\", instance_type=\"ml.g5.12xlarge\")`\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "23b42484-1770-4084-887c-c48ffccc02cc", "metadata": {}, "source": [ "### Changing number of GPUs\n", "---\n", "Falcon models are served with the Hugging Face (HF) LLM Deep Learning Container (DLC), which requires specifying the number of GPUs during model deployment. \n", "\n", "**Falcon 7B and 7B instruct:** The HF LLM DLC currently does not support sharding for the 7B model. Thus, even if more than one GPU is available on the instance, please do not increase the number of GPUs. \n", "\n", "**Falcon 40B and 40B instruct:** By default, the number of GPUs is set to 4. However, if you are using `ml.g5.48xlarge` or `ml.p4d.24xlarge`, you can increase the number of GPUs to 8 as follows: \n", "\n", "`my_model = JumpStartModel(model_id=\"huggingface-llm-falcon-40b-instruct-bf16\", instance_type=\"ml.g5.48xlarge\")`\n", "\n", "`my_model.env['SM_NUM_GPUS'] = '8'`\n", "\n", "`predictor = my_model.deploy()`\n", "\n", "\n", "---" ] },
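{ "cell_type": "markdown", "id": "2f7c1b9a", "metadata": {}, "source": [ "The optional cell below combines the two overrides above into a single sketch: redeploying the selected model with a custom instance type and GPU count. The specific values shown (`ml.g5.48xlarge` with 8 GPUs) only make sense for the 40B models, and the cell is commented out so it does not replace the endpoint deployed above." ] }, { "cell_type": "code", "execution_count": null, "id": "8b2e9a41", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Optional sketch: redeploy with a custom instance type and GPU count.\n", "# The values below are illustrative and suit the 40B models only; do not\n", "# raise SM_NUM_GPUS for the 7B models, which do not support sharding.\n", "\n", "# my_model = JumpStartModel(model_id=model_id, instance_type=\"ml.g5.48xlarge\")\n", "# my_model.env[\"SM_NUM_GPUS\"] = \"8\"\n", "# predictor = my_model.deploy()" ] },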
{ "cell_type": "code", "execution_count": null, "id": "c1a55aa5-f2ad-4db8-9718-b76f969cffbe", "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "\n", "prompt = \"Tell me about Amazon SageMaker.\"\n", "\n", "payload = {\n", " \"inputs\": prompt,\n", " \"parameters\": {\n", " \"do_sample\": True,\n", " \"top_p\": 0.9,\n", " \"temperature\": 0.8,\n", " \"max_new_tokens\": 1024,\n", " \"stop\": [\"<|endoftext|>\", \"</s>\"],\n", " },\n", "}\n", "\n", "response = predictor.predict(payload)\n", "print(response[0][\"generated_text\"])" ] }, { "cell_type": "markdown", "id": "15da465e-b855-4249-93b8-f80a3627a62b", "metadata": { "jumpStartAlterations": [], "tags": [] }, "source": [ "### About the model\n", "\n", "---\n", "Falcon is a causal decoder-only model built by [Technology Innovation Institute](https://www.tii.ae/) (TII) and trained on more than 1 trillion tokens of RefinedWeb enhanced with curated corpora. It was built using custom tooling for data pre-processing and model training on Amazon SageMaker. As of June 6, 2023, it is the best-performing open-source model available: Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, and others; for a comparison, see the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). It features an architecture optimized for inference, with FlashAttention and multi-query attention. \n", "\n", "\n", "[Refined Web Dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb): Falcon RefinedWeb is a massive English web dataset built by TII and released under an Apache 2.0 license. It is a highly filtered dataset with large-scale de-duplication of CommonCrawl. Models trained on RefinedWeb have been observed to achieve performance equal to or better than models trained on curated datasets, while relying only on web data.\n", "\n", "**Model Sizes:**\n", "- **Falcon-7B**: A 7-billion-parameter model trained on 1.5 trillion tokens. It outperforms comparable open-source models (e.g., MPT-7B, StableLM, and RedPajama); for a comparison, see the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). To use this model, select `model_id` in the cell above to be \"huggingface-llm-falcon-7b-bf16\".\n", "- **Falcon-40B**: A 40-billion-parameter model trained on 1 trillion tokens. It has surpassed renowned models like LLaMA-65B, StableLM, RedPajama, and MPT on the public [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) maintained by Hugging Face, demonstrating exceptional performance without specialized fine-tuning. \n", "\n", "**Instruct models (Falcon-7B-instruct/Falcon-40B-instruct):** Instruct models are base Falcon models fine-tuned on a mixture of chat and instruction datasets. They are ready-to-use chat/instruct models. To use these models, select `model_id` in the cell above to be \"huggingface-llm-falcon-7b-instruct-bf16\" or \"huggingface-llm-falcon-40b-instruct-bf16\".\n", "\n", "It is [recommended](https://huggingface.co/tiiuae/falcon-7b) that Instruct models be used as-is without fine-tuning, and that base models be fine-tuned further for the specific task.\n", "\n", "**Limitations:**\n", "\n", "- Falcon models are mostly trained on English data and may not generalize to other languages. \n", "- Falcon carries the stereotypes and biases commonly encountered online and in its training data. Hence, it is recommended to develop guardrails and to take appropriate precautions for any production use. The base models are raw, pretrained models that should be further fine-tuned for most use cases.\n", "\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "719560fd-c9b2-4d4c-a7de-914b8aa72557", "metadata": { "tags": [] }, "outputs": [], "source": [ "def query_endpoint(payload):\n", " \"\"\"Query endpoint and print the response\"\"\"\n", " response = predictor.predict(payload)\n", " print(f\"\\033[1m Input:\\033[0m {payload['inputs']}\")\n", " print(f\"\\033[1m Output:\\033[0m {response[0]['generated_text']}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "009490f0-fb8a-4d92-b2ad-85e53223922a", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Code generation\n", "payload = {\n", " \"inputs\": \"Write a program to compute factorial in python:\",\n", " \"parameters\": {\"max_new_tokens\": 200},\n", "}\n", "query_endpoint(payload)" ] }, { "cell_type": "code", "execution_count": null, "id": "7816dd22-fd8f-4374-ba9d-a62b941ebb16", "metadata": { "tags": [] }, "outputs": [], "source": [ "payload = {\n", " \"inputs\": \"Building a website can be done in 10 simple steps:\",\n", " \"parameters\": {\"max_new_tokens\": 110, \"no_repeat_ngram_size\": 3},\n", "}\n", "query_endpoint(payload)" ] }, { "cell_type": "code", "execution_count": null, "id": "c4aac6cc-d282-428a-b334-5dcdd505e480", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Translation\n", "payload = {\n", " \"inputs\": \"\"\"Translate English to French:\n", "\n", " sea otter => loutre de mer\n", "\n", " peppermint => menthe poivrée\n", "\n", " plush girafe => girafe peluche\n", "\n", " cheese =>\"\"\",\n", " \"parameters\": {\"max_new_tokens\": 3},\n", "}\n", "\n", "query_endpoint(payload)" ] }, { "cell_type": "code", "execution_count": null, "id": "117b6645-56c5-4f0d-bbab-75bae8955943", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Sentiment-analysis\n", "payload = {\n", " \"inputs\": \"\"\"Tweet: \"I hate it when my phone battery dies.\"\n",
" Sentiment: Negative\n", " ###\n", " Tweet: \"My day has been :+1:\"\n", " Sentiment: Positive\n", " ###\n", " Tweet: \"This is the link to the article\"\n", " Sentiment: Neutral\n", " ###\n", " Tweet: \"This new music video was incredible\"\n", " Sentiment:\"\"\",\n", " \"parameters\": {\"max_new_tokens\": 2},\n", "}\n", "query_endpoint(payload)" ] }, { "cell_type": "code", "execution_count": null, "id": "4594ede3-1272-4e56-8926-ccaf9e66f314", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Question answering\n", "payload = {\n", " \"inputs\": \"Could you remind me when the C programming language was invented?\",\n", " \"parameters\": {\"max_new_tokens\": 50},\n", "}\n", "query_endpoint(payload)" ] }, { "cell_type": "code", "execution_count": null, "id": "fa8d82be-3c33-47c8-b0d7-d8675484b1d7", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Recipe generation\n", "payload = {\n", " \"inputs\": \"What is the recipe for a delicious lemon cheesecake?\",\n", " \"parameters\": {\"max_new_tokens\": 400},\n", "}\n", "query_endpoint(payload)" ] }, { "cell_type": "code", "execution_count": null, "id": "d4f125c1-eae1-45ad-a065-651d9e13dfa8", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Summarization\n", "\n", "payload = {\n", " \"inputs\": \"\"\"Starting today, the state-of-the-art Falcon 40B foundation model from Technology\n", " Innovation Institute (TII) is available on Amazon SageMaker JumpStart, SageMaker's machine learning (ML) hub\n", " that offers pre-trained models, built-in algorithms, and pre-built solution templates to help you quickly get\n", " started with ML. You can deploy and use this Falcon LLM with a few clicks in SageMaker Studio or\n", " programmatically through the SageMaker Python SDK.\n", " Falcon 40B is a 40-billion-parameter large language model (LLM) available under the Apache 2.0 license that\n", " ranked #1 in Hugging Face Open LLM leaderboard, which tracks, ranks, and evaluates LLMs across multiple\n", " benchmarks to identify top performing models. Since its release in May 2023, Falcon 40B has demonstrated\n", " exceptional performance without specialized fine-tuning. To make it easier for customers to access this\n", " state-of-the-art model, AWS has made Falcon 40B available to customers via Amazon SageMaker JumpStart.\n", " Now customers can quickly and easily deploy their own Falcon 40B model and customize it to fit their specific\n", " needs for applications such as translation, question answering, and summarizing information.\n", " Falcon 40B is generally available today through Amazon SageMaker JumpStart in US East (Ohio),\n", " US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai),\n", " Europe (London), Europe (Frankfurt), Europe (Ireland), and Canada (Central),\n", " with availability in additional AWS Regions coming soon. To learn how to use this new feature,\n", " please see SageMaker JumpStart documentation, the Introduction to SageMaker JumpStart –\n", " Text Generation with Falcon LLMs example notebook, and the blog Technology Innovation Institute trains the\n", " state-of-the-art Falcon LLM 40B foundation model on Amazon SageMaker.
Summarize the article above:\"\"\",\n", " \"parameters\": {\"max_new_tokens\": 200},\n", "}\n", "query_endpoint(payload)" ] }, { "cell_type": "markdown", "id": "132d3fee-5ce2-4ff2-93fe-5779762ce2cf", "metadata": {}, "source": [ "### Supported parameters\n", "\n", "***\n", "Some of the parameters supported during inference are the following:\n", "\n", "* **max_length:** The model generates text until the output length (which includes the input context length) reaches `max_length`. If specified, it must be a positive integer.\n", "* **max_new_tokens:** The model generates text until the output length (excluding the input context length) reaches `max_new_tokens`. If specified, it must be a positive integer.\n", "* **num_beams:** Number of beams used in beam search. If specified, it must be an integer greater than or equal to `num_return_sequences`.\n", "* **no_repeat_ngram_size:** The model ensures that no sequence of `no_repeat_ngram_size` words is repeated in the output sequence. If specified, it must be a positive integer greater than 1.\n", "* **temperature:** Controls the randomness of the output. A higher temperature yields output sequences containing more low-probability words; a lower temperature yields output sequences dominated by high-probability words. As `temperature` approaches 0, generation approaches greedy decoding. If specified, it must be a positive float.\n", "* **early_stopping:** If True, text generation finishes when all beam hypotheses reach the end-of-sentence token. If specified, it must be a boolean.\n", "* **do_sample:** If True, the next word is sampled according to its likelihood. If specified, it must be a boolean.\n", "* **top_k:** In each step of text generation, sample only from the `top_k` most likely words. If specified, it must be a positive integer.\n", "* **top_p:** In each step of text generation, sample from the smallest possible set of words with cumulative probability `top_p`. If specified, it must be a float between 0 and 1.\n", "* **return_full_text:** If True, the input text is included in the generated output. If specified, it must be a boolean. Its default value is False.\n", "* **stop:** If specified, it must be a list of strings. Text generation stops if any one of the specified strings is generated.\n", "\n", "You may specify any subset of the parameters above while invoking the endpoint. \n", "\n", "For more parameters and information on the HF LLM DLC, please see [this article](https://huggingface.co/blog/sagemaker-huggingface-llm#4-run-inference-and-chat-with-our-model).\n", "***" ] },
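{ "cell_type": "markdown", "id": "4c6e8f20", "metadata": {}, "source": [ "To illustrate these parameters, the following cell reuses the `query_endpoint` helper defined above with a combination of sampling and repetition controls. The specific values are arbitrary choices for demonstration, not recommended defaults." ] }, { "cell_type": "code", "execution_count": null, "id": "d1a7b3c5", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Combine several of the supported parameters in one request.\n", "# The values below are illustrative; adjust them for your use case.\n", "payload = {\n", " \"inputs\": \"Amazon SageMaker is\",\n", " \"parameters\": {\n", " \"do_sample\": True,\n", " \"top_k\": 50,\n", " \"top_p\": 0.9,\n", " \"temperature\": 0.7,\n", " \"no_repeat_ngram_size\": 3,\n", " \"max_new_tokens\": 100,\n", " \"stop\": [\"<|endoftext|>\"],\n", " },\n", "}\n", "query_endpoint(payload)" ] },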
{ "cell_type": "markdown", "id": "6f3dfec1", "metadata": { "collapsed": false }, "source": [ "### Limits on the number of input and output tokens\n", "\n", "---\n", "\n", "Large models such as Falcon have a very high accelerator memory footprint, so a very large input payload or a very large generated output can cause out-of-memory errors. Furthermore, generating long outputs can take seconds or even minutes, while SageMaker has a response time limit of 60 seconds; a large input or output payload can therefore cause timeout issues. Based on these two constraints, we recommend the following limits on input and new tokens:\n", "\n", "\n", "| Model | Small Input | Medium Input | Large Input |\n", "|----------------------------------------| --- | --- | --- |\n", "| | (#input_tokens, #max_new_tokens) | (#input_tokens, #max_new_tokens) | (#input_tokens, #max_new_tokens) |\n", "| Falcon 7B/Instruct | (100, 1900) | (1500, 1500) | (20000, 1000) |\n", "| Falcon 40B/Instruct on ml.g5.12xlarge | (100, 1150) | (950, 900) | (4000, 100) |\n", "| Falcon 40B/Instruct on ml.g5.48xlarge | (100, 1850) | (950, 1800) | (20000, 600) |\n", "\n", "Note that these limits don't apply equally to input tokens and new tokens: the endpoint typically supports many more input tokens if you decrease `max_new_tokens` slightly. Also note that non-default values of inference parameters affect the payload sizes supported; for instance, a higher value of `num_beams` reduces the number of `max_new_tokens` you can generate.\n", "\n", "\n", "**Token-word ratio:** The ratio of tokens to words is roughly 1.5, so 900 input tokens correspond to roughly 600 input words. Note that this is only a rule of thumb; for some pieces of text the ratio can differ significantly.\n", "\n", "**Setting non-default environment variables:** If you set the number of input tokens above 1024 or the number of total tokens at or above 2048, you need to change the corresponding environment variables before deploying the model:\n", "\n", "`my_model.env['MAX_INPUT_LENGTH'] = '2048'` (default '1024')\n", "\n", "`my_model.env['MAX_TOTAL_TOKENS'] = '4096'` (default '2048')\n", "\n", "Note that the endpoint enforces a maximum on the number of total tokens (input tokens + new tokens), whereas the limits above are reported separately for input tokens and new tokens.\n", "\n", "If using ml.g5.48xlarge for Falcon 40B, you need to use 8 GPUs (`my_model.env['SM_NUM_GPUS'] = '8'`).\n", "\n", "**Concurrency/invoking at short intervals:** When sending multiple requests at once or at very short intervals, the endpoint cannot handle the thresholds above and may return a CUDA OOM error. This is because the TGI container batches requests: while batching yields significant throughput improvements, it can cause CUDA OOM or timeout errors. We therefore recommend using fewer input and output tokens than the limits above in this setting. To reduce the number of batched requests, you can set the `max_concurrent_requests` parameter (`my_model.env['MAX_CONCURRENT_REQUESTS'] = '1'`).\n", "\n", "**Corrupted endpoint state:** It has been observed that once an endpoint suffers a CUDA OOM error, it may enter a corrupted state in which it fails even for very small inputs and small `max_new_tokens` values. Letting the endpoint sit idle can help reset this state. If that does not work, please restart the endpoint (delete it and launch a new one).\n", "\n", "**Model quality:** The above limits reflect only the memory limitations of the available instance types and the SageMaker endpoint response timeout. The model itself could theoretically handle arbitrarily large input and output payloads given unlimited CUDA memory and no limit on endpoint response time. However, it has been observed that the quality of the model's output decreases substantially when it is given a very large input payload (e.g., summarizing a document with 20,000 tokens) or asked to generate a very large output (e.g., writing a story with 2,000 tokens). Thus, you may want to stay below the limits recommended above to generate high-quality outputs.\n", "\n", "---" ] },
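{ "cell_type": "markdown", "id": "7e94d0b2", "metadata": {}, "source": [ "The optional cell below collects the environment-variable overrides described above into one sketch. It is commented out because these variables must be set before `deploy()` is called, and the values shown are the examples quoted in the text rather than tuned recommendations." ] }, { "cell_type": "code", "execution_count": null, "id": "a5c3f816", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Optional sketch: environment variables for longer inputs and outputs,\n", "# based on the guidance above. Set these on the model *before* deployment.\n", "\n", "# my_model.env[\"MAX_INPUT_LENGTH\"] = \"2048\"  # default: \"1024\"\n", "# my_model.env[\"MAX_TOTAL_TOKENS\"] = \"4096\"  # default: \"2048\"\n", "# my_model.env[\"MAX_CONCURRENT_REQUESTS\"] = \"1\"  # reduce request batching\n", "# my_model.env[\"SM_NUM_GPUS\"] = \"8\"  # Falcon 40B on ml.g5.48xlarge only\n", "# predictor = my_model.deploy()" ] },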
{ "cell_type": "markdown", "id": "c4574038", "metadata": { "collapsed": false }, "source": [ "### Generating a few tokens at a time - Supporting large outputs on smaller instances\n", "\n", "---\n", "As observed above, the model can support many more input tokens than new output tokens. For instance, the endpoint has higher latency and a larger memory requirement when given an input sequence of 100 tokens and asked to generate 100 new tokens than when given an input sequence of 190 tokens and asked to generate 10 new tokens. Based on this observation, we can avoid CUDA OOM and endpoint response timeout issues by invoking the endpoint repeatedly, generating a very large output sequence that would otherwise be infeasible in a single call. For the Falcon 40B instruct model, we observed that you can generate more than 5,000 new tokens on `ml.g5.12xlarge` if you generate 100 tokens at a time.\n", "\n", "\n", "\n", "We also measured the computational overhead of repeatedly querying the endpoint and thus computing activations/states for the input text multiple times. We observed that even when generating 10 tokens at a time, this contributed less than 5% of the overall time needed to generate the desired number of output tokens. Thus, even when generating the entire sequence in a single query is feasible, you can simply generate it in chunks.\n", "\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "55e5448a", "metadata": { "collapsed": false }, "outputs": [], "source": [ "max_new_tokens = 1000\n", "max_new_tokens_single_iteration = 100\n", "\n", "payload = {\n", " \"inputs\": \"List down all the services by Amazon and a detailed description of each of the service. Tell me how to use Kendra. Tell me how to use AWS. Recite the guide to get started with SageMaker?\",\n", " \"parameters\": {\"max_new_tokens\": max_new_tokens_single_iteration},\n", "}\n", "\n", "print(f\"Input Text: {payload['inputs']}\")\n", "\n", "# Generate max_new_tokens in total, max_new_tokens_single_iteration per request.\n", "for i, _ in enumerate(range(0, max_new_tokens, max_new_tokens_single_iteration)):\n", " response = predictor.predict(payload)\n", " generated_text = response[0][\"generated_text\"]\n", " # Append the new text to the input so the next request continues from it.\n", " full_text = payload[\"inputs\"] + generated_text\n", " print(f\"\\033[1mIteration {i+1}:\\033[0m\\n {generated_text}\\n\")\n", " payload[\"inputs\"] = full_text" ] }, { "cell_type": "markdown", "id": "b04d99e2", "metadata": {}, "source": [ "### Clean up the endpoint" ] }, { "cell_type": "code", "execution_count": null, "id": "6c3b60c2", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Delete the SageMaker endpoint\n", "predictor.delete_model()\n", "predictor.delete_endpoint()" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 2.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/sagemaker-data-science-38" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 5 }