{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "16e07853", "metadata": { "tags": [] }, "source": [ "## LangChain 経由で SageMaker でホストした大規模言語モデル (LLM) を使う\n", "\n", "### このノートブックについて \n", "\n", "こちらは大規模言語モデル(LLM)を使ったアプリケーションを構築するためのライブラリーである [LangChain](https://langchain.com/) を用いて SageMaker 上でホストした LLM から推論結果を得るサンプルコードを示したものです。 \n", "\n", "LLM の例として今回は HuggingFace 上で rinna 社が公開している [rinna/japanese-gpt-neox-3.6b-instruction-ppo](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) を使用します。 \n", " \n", "\n", "こちらの Notebook は 以下の環境で動作確認を行っています。\n", "\n", "- SageMaker Studio Notebooks \n", " - `ml.t3.medium`: `Data Science 3.0`\n", "- SageMaker Notebooks\n", " - `ml.t3.medium`: `conda_pytorch_p310`\n", "\n", "[各インスタンスの料金についてはこちら](https://aws.amazon.com/jp/sagemaker/pricing/)をご確認ください。 \n", "\n", "\n", "また、ノートブックを動かすにあたって、各セルを上から順番に実行すれば動きますが、SageMaker 上での推論の仕組みについては、[AI/ML DarkPark](https://www.youtube.com/playlist?list=PLAOq15s3RbuL32mYUphPDoeWKUiEUhcug) の特に [Amazon SageMaker 推論 Part2すぐにプロダクション利用できる!モデルをデプロイして推論する方法 【ML-Dark-04】【AWS Black Belt】](https://youtu.be/sngNd79GpmE) をご参照ください。" ] }, { "attachments": {}, "cell_type": "markdown", "id": "3c3cc4dc-bf6b-4dda-9270-8a0126609ccf", "metadata": {}, "source": [ "### 前準備 \n", "\n", "まずは事前準備として LLM を SageMaker Realtime Endpoint でホストします。 \n", "下記リンクの Notebook を実行することでエンドポイントを立てます。 \n", "https://github.com/aws-samples/aws-ml-jp/blob/main/tasks/generative-ai/text-to-text/fine-tuning/instruction-tuning/Transformers/Rinna_Neox_Inference_ja.ipynb\n" ] }, { "cell_type": "code", "execution_count": null, "id": "dc57b769-d58a-48cb-b7f4-77a4d80c6b85", "metadata": { "tags": [] }, "outputs": [], "source": [ "# !wget https://raw.githubusercontent.com/aws-samples/aws-ml-jp/main/tasks/generative-ai/text-to-text/fine-tuning/instruction-tuning/Transformers/Rinna_Neox_Inference_ja.ipynb\n", "# Notebook をコピーする必要がある場合はこちらのコメントアウトを外して実行しダウンロードした Notebook を実行します。" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0fd595db", "metadata": {}, "source": [ "立ち上げたエンドポイント名を下の値に置き換えます。" ] }, { "cell_type": "code", "execution_count": null, "id": "d0485332", "metadata": { "tags": [] }, "outputs": [], "source": [ "# endpoint_name = <エンドポイント名>\n", "endpoint_name = 'Rinna-Inference'" ] }, { "attachments": {}, "cell_type": "markdown", "id": "d0f5e10d", "metadata": {}, "source": [ "### 必要モジュールのインストール" ] }, { "cell_type": "code", "execution_count": null, "id": "69112440-68de-4510-8ec2-76500dda1dce", "metadata": { "scrolled": true, "tags": [] }, "outputs": [], "source": [ "!pip install 'langchain>=0.0.186'" ] }, { "attachments": {}, "cell_type": "markdown", "id": "cdf6c634", "metadata": {}, "source": [ "## LangChain を使ってみる \n", "\n", "必要モジュールのインストールが完了したので、ここから実際に LangChain を使ってみましょう。\n", "\n", "### 必要モジュールの import " ] }, { "cell_type": "code", "execution_count": null, "id": "17e22b40-64a7-4d9c-a034-ee064278d56a", "metadata": { "tags": [] }, "outputs": [], "source": [ "import codecs\n", "import json\n", "from typing import Dict\n", "\n", "from langchain.docstore.document import Document\n", "from langchain import PromptTemplate, SagemakerEndpoint\n", "from langchain.llms.sagemaker_endpoint import LLMContentHandler\n", "from langchain.chains.question_answering import load_qa_chain" ] }, { "cell_type": "code", "execution_count": null, "id": "9e7b74cf-f9c4-4b2e-a290-ddd5a88686c5", "metadata": { "tags": [] }, "outputs": [], "source": [ "region_name = \"us-east-1\" # 適宜使っているリージョン名に書き換えてください" ] }, { "attachments": {}, "cell_type": "markdown", "id": 
"18408c94", "metadata": {}, "source": [ "ここで LLM からのレスポンスから適切に文字列を抜き出すための操作を ContentHandler という名前のクラスで定義します。 \n", "\n", "ここで受け付ける入力と出力の形式ははホストする LLM ごとによって変化しうるので注意してください。 \n", "\n", "rinna では改行コードとして``を使っているので置き換えも行なっています。\n" ] }, { "cell_type": "code", "execution_count": null, "id": "587ea765", "metadata": { "tags": [] }, "outputs": [], "source": [ "class ContentHandler(LLMContentHandler):\n", " content_type = \"application/json\"\n", " accepts = \"application/json\"\n", "\n", " def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n", " input_str = json.dumps(\n", " {\n", " \"input\": prompt.replace(\"\\n\", \"\"), \n", " \"instruction\": \"\", \n", " **model_kwargs\n", " })\n", " return input_str.encode('utf-8')\n", " \n", " def transform_output(self, output: bytes) -> str:\n", " response_json = json.loads(output.read().decode(\"utf-8\"))\n", " return response_json.replace(\"\", \"\\n\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0337d5ef", "metadata": {}, "source": [ "LangChain では、LLM が応答の根拠として使用する文書を `Document` オブジェクトとして管理します。今回は簡易的な SageMaker に関する説明文を使って、質問に回答させてみます。" ] }, { "cell_type": "code", "execution_count": null, "id": "549772ee-9268-4c0d-ade3-55e8c576f142", "metadata": { "tags": [] }, "outputs": [], "source": [ "example_doc_1 = \"\"\"\n", "Amazon SageMakerは、フルマネージド型の機械学習サービスです。SageMakerを利用することで、データサイエンティストや開発者は、機械学習モデルを迅速かつ容易に構築・訓練し、本番環境に直接デプロイすることができます。Jupyterオーサリングノートブックのインスタンスを統合して提供し、データソースに簡単にアクセスして探索や分析を行うことができるため、サーバーを管理する必要がありません。また、分散環境で非常に大きなデータに対して効率的に実行できるように最適化された、一般的な機械学習アルゴリズムも提供します。SageMakerは、Bring-your-own-algorithmsとフレームワークのネイティブサポートにより、特定のワークフローに適応する柔軟な分散トレーニングオプションを提供します。SageMaker StudioまたはSageMakerコンソールから数回クリックするだけでモデルを起動し、安全でスケーラブルな環境にデプロイすることができます。\n", "\"\"\"\n", "\n", "docs = [\n", " Document(\n", " page_content=example_doc_1,\n", " )\n", "]" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4905331e", "metadata": {}, "source": [ "#### prompt の定義\n", "\n", "ここでは LLM に入力するプロンプトを定義していきます。プロンプトには後述する `Chain` の中で得られた情報などが変数として代入されうります。 \n", "また、カスタマイズした変数(今回のケースだと `instruction`) として `Chain` の呼び出しごとにコントロールすることも可能です。 " ] }, { "cell_type": "code", "execution_count": null, "id": "96351ce7", "metadata": { "tags": [] }, "outputs": [], "source": [ "instruction = '以下の情報を使って質問に答えてください。'\n", "\n", "prompt_template = \"\"\"システム: 以下は、ユーザーとシステムとの会話です。システムは資料から抜粋して質問に答えます。資料にない内容は答えず「わかりません」と答えます。\n", "\n", "{context}\n", "\n", "{instruction}\n", "ユーザー: \"\"\"\n", "PROMPT = PromptTemplate(\n", " template=prompt_template, input_variables=[\"context\", \"instruction\"]\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c0755c12", "metadata": {}, "source": [ "#### `LLM` オブジェクトの定義 \n", "\n", "次は LLM の呼び出しに使う `LLM` オブジェクトを定義していきます。 \n", "SageMaker Endpoint の `LLM` ラッパーとして `SagemakerEndpoint` が LangChain では用意されているためこちらを使用します。 \n", "\n", "モデルを制御するためのパラメータ(例えばどれぐらいの長さの文章を出力するかを決める `max_new_token` など)もここで設定することになります。 \n", "インプットするパラメータもホストするモデルによって異なるので適宜変更してください。 " ] }, { "cell_type": "code", "execution_count": null, "id": "d0627174", "metadata": { "tags": [] }, "outputs": [], "source": [ "content_handler = ContentHandler()\n", "llm = SagemakerEndpoint(\n", " endpoint_name=endpoint_name, \n", " region_name=region_name, \n", " model_kwargs={\n", " \"max_new_tokens\": 128,\n", " \"temperature\": 0.7,\n", " \"do_sample\": True,\n", " \"pad_token_id\": 0,\n", " \"bos_token_id\": 2,\n", " \"eos_token_id\": 265, # 「。」の ID に相当。\n", " # \"stop_ids\": [50278, 50279, 50277, 1, 0],\n", " },\n", " content_handler=content_handler\n", " 
)" ] }, { "cell_type": "code", "execution_count": null, "id": "e9a3d61f-7f8e-46fc-a8c2-d3e28ead4093", "metadata": { "tags": [] }, "outputs": [], "source": [ "chain = load_qa_chain(\n", " llm=llm,\n", " prompt=PROMPT\n", ")\n", "\n", "chain({\"input_documents\": docs, \"instruction\": instruction}, return_only_outputs=True)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "422dd18f", "metadata": {}, "source": [ "## 後片付け\n", "\n", "立ち上げた SageMaker Endpoint の削除を忘れないようにしましょう。 \n", "SageMaker SDK 経由でモデルをデプロイしている場合は例えば以下のコードで実施可能です。 \n", "\n", "```python\n", "predictor.delete_model()\n", "predictor.delete_endpoint()\n", "```\n" ] }, { "cell_type": "code", "execution_count": null, "id": "b6b5ed2a-3958-4e0f-958d-20ed3ebb4d34", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", 
"gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": 
false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { 
"_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 }, { "_defaultOrder": 55, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 56, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 1152, "name": "ml.p4de.24xlarge", "vcpuNum": 96 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 5 }