{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "fded102b", "metadata": {}, "source": [ "# Text summarization with small files with Anthropic Claude" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fab8b2cf", "metadata": {}, "source": [ "## Overview\n", "\n", "In this example, you are going to ingest a small amount of data (String data) directly into Amazon Bedrock API (using Anthropic Claude model) and give it an instruction to summarize the respective text.\n", "\n", "### Architecture\n", "\n", "![](./images/41-text-simple-1.png)\n", "\n", "In this architecture:\n", "\n", "1. A small piece of text (or small file) is loaded\n", "1. A foundational model processes the input data\n", "1. Model returns a response with the summary of the ingested text\n", "\n", "### Use case\n", "\n", "This approach can be used to summarize call transcripts, meetings transcripts, books, articles, blog posts, and other relevant content.\n", "\n", "### Challenges\n", "\n", "This approach can be used when the input text or file fits within the model context length. In notebook `02.long-text-summarization-titan.ipynb`, we will explore an approach to address the challenge when users have large document(s) that exceed the token limit.\n", "\n", "## Setup" ] }, { "attachments": {}, "cell_type": "markdown", "id": "7eaf6ce4", "metadata": {}, "source": [ "#### ⚠️⚠️⚠️ Execute the following cells before running this notebook ⚠️⚠️⚠️\n", "\n", "For a detailed description on what the following cells do refer to [Bedrock boto3 setup](../00_Intro/bedrock_boto3_setup.ipynb) notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "a77295e8-364e-4a29-b320-670d697a0b3e", "metadata": { "tags": [] }, "outputs": [], "source": [ "# Make sure you run `download-dependencies.sh` from the root of the repository to download the dependencies before running this cell\n", "%pip install ../dependencies/botocore-1.29.162-py3-none-any.whl ../dependencies/boto3-1.26.162-py3-none-any.whl ../dependencies/awscli-1.27.162-py3-none-any.whl --force-reinstall\n", "%pip install langchain==0.0.190 --quiet" ] }, { "cell_type": "code", "execution_count": null, "id": "66edf151", "metadata": { "tags": [] }, "outputs": [], "source": [ "#### Un comment the following lines to run from your local environment outside of the AWS account with Bedrock access\n", "\n", "#import os\n", "#os.environ['BEDROCK_ASSUME_ROLE'] = ''\n", "#os.environ['AWS_PROFILE'] = ''" ] }, { "cell_type": "code", "execution_count": null, "id": "871b730e", "metadata": { "tags": [] }, "outputs": [], "source": [ "import boto3\n", "import json\n", "import os\n", "import sys\n", "\n", "module_path = \"..\"\n", "sys.path.append(os.path.abspath(module_path))\n", "from utils import bedrock, print_ww\n", "\n", "os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'\n", "boto3_bedrock = bedrock.get_bedrock_client(os.environ.get('BEDROCK_ASSUME_ROLE', None))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "342796d0", "metadata": {}, "source": [ "## Summarizing a short text with boto3\n", " \n", "To learn detail of API request to Amazon Bedrock, this notebook introduces how to create API request and send the request via Boto3 rather than relying on langchain, which gives simpler API by wrapping Boto3 operation. " ] }, { "attachments": {}, "cell_type": "markdown", "id": "9da4d9ee", "metadata": {}, "source": [ "### Request Syntax of InvokeModel in Boto3\n", "\n", "\n", "We use `InvokeModel` API for sending request to a foundation model. 
"Here is an example of an API request for sending text to Anthropic Claude. Inference parameters are passed in the request body and depend on the model that you are about to use (Amazon Titan models, for example, take theirs in a `textGenerationConfig` object). The inference parameters of Anthropic Claude are:\n", "\n", "- **temperature** tunes the degree of randomness in generation. Lower temperatures mean less random generations.\n", "- **top_p** less than one keeps only the smallest set of the most probable tokens whose probabilities add up to top_p or higher for generation.\n", "- **top_k** restricts sampling to the k most likely candidate tokens at each step. Lower values narrow the choice and make unlikely tokens less likely to appear.\n", "- **max_tokens_to_sample** is the maximum number of tokens to generate. Responses are not guaranteed to fill up to the maximum desired length.\n", "- **stop_sequences** are sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.\n", "\n", "```python\n", "body = json.dumps({\"prompt\": \"this is where you place your input text\",\n", "                   \"max_tokens_to_sample\": 4096,\n", "                   \"temperature\": 0.5,\n", "                   \"top_k\": 250,\n", "                   \"top_p\": 0.5,\n", "                   \"stop_sequences\": []\n", "                  })\n", "\n", "response = bedrock.invoke_model(body=body,\n", "                                modelId=\"anthropic.claude-v1\",\n", "                                accept=accept,\n", "                                contentType=contentType)\n", "```\n", "\n", "### Writing prompt with text to be summarized\n", "\n", "In this notebook, you can use any short text whose token count is less than the maximum token limit of the foundation model. As an example of a short text, let's take one paragraph of an [AWS blog post](https://aws.amazon.com/jp/blogs/machine-learning/announcing-new-tools-for-building-with-generative-ai-on-aws/) announcing Amazon Bedrock.\n", "\n", "The prompt starts with the instruction `Please provide a summary of the following text.`." ] },
{ "cell_type": "code", "execution_count": null, "id": "ece0c069", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompt = \"\"\"\n", "Please provide a summary of the following text.\n", "\n", "AWS took all of that feedback from customers, and today we are excited to announce Amazon Bedrock, \\\n", "a new service that makes FMs from AI21 Labs, Anthropic, Stability AI, and Amazon accessible via an API. \\\n", "Bedrock is the easiest way for customers to build and scale generative AI-based applications using FMs, \\\n", "democratizing access for all builders. Bedrock will offer the ability to access a range of powerful FMs \\\n", "for text and images—including Amazon’s Titan FMs, which consist of two new LLMs we’re also announcing \\\n", "today—through a scalable, reliable, and secure AWS managed service. With Bedrock’s serverless experience, \\\n", "customers can easily find the right model for what they’re trying to get done, get started quickly, privately \\\n", "customize FMs with their own data, and easily integrate and deploy them into their applications using the AWS \\\n", "tools and capabilities they are familiar with, without having to manage any infrastructure (including integrations \\\n", "with Amazon SageMaker ML features like Experiments to test different models and Pipelines to manage their FMs at scale).\n", "\n", "\"\"\"" ] },
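{ "attachments": {}, "cell_type": "markdown", "id": "5e6f7a8b", "metadata": {}, "source": [ "Note: depending on the Claude version and API validation, Anthropic models may expect prompts in the `Human:`/`Assistant:` turn format. If a raw prompt is rejected, wrapping it is a minimal fix; a sketch (the variable name `claude_prompt` is ours, not part of this notebook):\n", "\n", "```python\n", "# Anthropic Claude models generally expect the Human/Assistant turn format.\n", "claude_prompt = f\"\\n\\nHuman: {prompt}\\n\\nAssistant:\"\n", "```" ] },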
{ "attachments": {}, "cell_type": "markdown", "id": "3efddbb0", "metadata": {}, "source": [ "## Creating request body with prompt and inference parameters \n", "\n", "Following the request syntax of `invoke_model`, you create the request body with the above prompt and the inference parameters." ] },
{ "cell_type": "code", "execution_count": null, "id": "60d191eb", "metadata": { "tags": [] }, "outputs": [], "source": [ "body = json.dumps({\"prompt\": prompt,\n", "                   \"max_tokens_to_sample\": 4096,\n", "                   \"temperature\": 0.5,\n", "                   \"top_k\": 250,\n", "                   \"top_p\": 0.5,\n", "                   \"stop_sequences\": []\n", "                  })" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "cc9f3326", "metadata": {}, "source": [ "## Invoke foundation model via Boto3\n", "\n", "Here you send the API request to Amazon Bedrock, specifying the request parameters `modelId`, `accept`, and `contentType`. Following the prompt, the foundation model in Amazon Bedrock summarizes the text." ] },
{ "cell_type": "code", "execution_count": null, "id": "9f400d76", "metadata": { "tags": [] }, "outputs": [], "source": [ "modelId = 'anthropic.claude-v1' # change this to use a different version from the model provider\n", "accept = 'application/json'\n", "contentType = 'application/json'\n", "\n", "response = boto3_bedrock.invoke_model(body=body, modelId=modelId, accept=accept, contentType=contentType)\n", "response_body = json.loads(response.get('body').read())\n", "\n", "print_ww(response_body.get('completion'))" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "180c84a0", "metadata": {}, "source": [ "Above, the Bedrock service generates the entire summary for the given prompt as a single output; this can be slow if the output contains a large number of tokens.\n", "\n", "Below we explore how to use Bedrock to stream the output, so that the user can start consuming it while it is being generated by the model. For this, Bedrock supports the `invoke_model_with_response_stream` API, which provides a `ResponseStream` that delivers the output in the form of chunks." ] },
{ "cell_type": "code", "execution_count": null, "id": "94e5ca2f", "metadata": {}, "outputs": [], "source": [ "response = boto3_bedrock.invoke_model_with_response_stream(body=body, modelId=modelId, accept=accept, contentType=contentType)\n", "stream = response.get('body')\n", "output = list(stream)\n", "output" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "fc9c1b3b", "metadata": {}, "source": [ "Instead of generating the entire output at once, Bedrock sends smaller chunks from the model. These can be displayed in a consumable manner as well." ] },
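{ "attachments": {}, "cell_type": "markdown", "id": "1a2b3c4d", "metadata": {}, "source": [ "As the `list(stream)` output above shows, each streamed event carries a `chunk` whose `bytes` decode to a JSON document with a `completion` field. A minimal plain-text way to consume the stream is sketched below (an illustration only; since the stream above was already drained by `list(stream)`, a fresh `invoke_model_with_response_stream` call would be needed first):\n", "\n", "```python\n", "# Print each chunk's text as it arrives, without any markdown rendering.\n", "for event in stream:\n", "    chunk = event.get('chunk')\n", "    if chunk:\n", "        print(json.loads(chunk['bytes'].decode())['completion'], end='')\n", "```\n", "\n", "The cells below render the same stream as markdown that updates in place." ] },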
] }, { "cell_type": "code", "execution_count": null, "id": "01ab3461", "metadata": {}, "outputs": [], "source": [ "from IPython.display import display_markdown,Markdown,clear_output" ] }, { "cell_type": "code", "execution_count": null, "id": "f0148858", "metadata": {}, "outputs": [], "source": [ "response = boto3_bedrock.invoke_model_with_response_stream(body=body, modelId=modelId, accept=accept, contentType=contentType)\n", "stream = response.get('body')\n", "output = []\n", "i = 1\n", "if stream:\n", " for event in stream:\n", " chunk = event.get('chunk')\n", " if chunk:\n", " chunk_obj = json.loads(chunk.get('bytes').decode())\n", " text = chunk_obj['completion']\n", " clear_output(wait=True)\n", " output.append(text)\n", " display_markdown(Markdown(''.join(output)))\n", " i+=1" ] }, { "attachments": {}, "cell_type": "markdown", "id": "93e8ee83", "metadata": {}, "source": [ "## Conclusion\n", "You have now experimented with using `boto3` SDK which provides a vanilla exposure to Amazon Bedrock API. Using this API you have seen the use case of generating a summary of AWS news about Amazon Bedrock in 2 different ways: entire output and streaming output generation.\n", "\n", "### Take aways\n", "- Adapt this notebook to experiment with different models available through Amazon Bedrock such as Amazon Titan and AI21 Labs Jurassic models.\n", "- Change the prompts to your specific usecase and evaluate the output of different models.\n", "- Play with the token length to understand the latency and responsiveness of the service.\n", "- Apply different prompt engineering principles to get better outputs.\n", "\n", "## Thank You" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, 
"hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", 
"vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": 
"Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "tmp-bedrock", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 5 }