{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# SageMaker Hosting を利用して TensorFlow のモデルを SageMaker hosting のいろいろな機能を使ってホスティングする\n", "* ホスティングするモデルは pretrained モデルである MobileNet (とv2)を題材にする\n", "* SageMaker Studio の以下環境の使用を前提とする\n", " * `TensorFlow 2.3 Python 3.7 CPU Optimized`\n", " * `t3.medium` または `m5.large`\n", "* 以下コンテンツの 1 と 2 は実行必須だが、あとは実行したいところから実行可能\n", "\n", "## contents\n", "* [1. (必須)使用するモジュールのインストールと読み込み、定数の設定](#1.-(必須)使用するモジュールのインストールと読み込み、定数の設定)\n", "* [2. (必須)使用するモデルの動作確認と-S3-への転送](#2.-(必須)使用するモデルの動作確認と-S3-への転送)\n", "* [3. TensorFlow の SageMaker マネージドコンテナ(TensorFlow Serving)を利用した hosting](#3.-TensorFlow-の-SageMaker-マネージドコンテナ(TensorFlow-Serving)を利用した-hosting)\n", " * [#3-1. SageMaker SDK の場合の手順概要](#3-1.-SageMaker-SDK-の場合の手順概要)\n", " * [#3-2. boto3 の場合の手順概要](#3-2.-boto3-の場合の手順概要)\n", "* [4. 前処理/後処理追加](#4.-前処理/後処理追加)\n", " * [4-1. SageMaker Python SDK で前処理/後処理を追加してホスティングと推論](#4-1.-SageMaker-Python-SDK-で前処理/後処理を追加してホスティングと推論)\n", " * [4-2. Boto3 で前処理/後処理を追加してホスティングと推論](#4-2.-Boto3-で前処理/後処理を追加してホスティングと推論)\n", "* [5. マルチモデルエンドポイント](#5.-マルチモデルエンドポイント)\n", "* [6. 非同期推論](#6.-非同期推論)\n", "* [7. オートスケール](#7.-オートスケール)\n", "* [8. サーバーレス推論](#8.-サーバーレス推論)\n", "* [9. 独自コンテナイメージの持ち込みを利用した推論](#9.-独自コンテナイメージの持ち込みを利用した推論)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. (必須)使用するモジュールのインストールと読み込み、定数の設定" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "pip install -U matplotlib sagemaker" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf, os, tarfile, json, numpy as np, base64, sagemaker, boto3\n", "from sagemaker.tensorflow import TensorFlowModel\n", "from io import BytesIO\n", "from matplotlib import pyplot as plt\n", "from PIL import Image\n", "from time import sleep\n", "from glob import glob\n", "sm_client = boto3.client('sagemaker')\n", "smr_client = boto3.client('sagemaker-runtime')\n", "s3_client = boto3.client('s3')\n", "endpoint_inservice_waiter = sm_client.get_waiter('endpoint_in_service')\n", "sm_role = sagemaker.get_execution_role()\n", "sess = sagemaker.session.Session()\n", "bucket = sess.default_bucket()\n", "print(f'使用するロール : {sm_role}')\n", "print(f'使用するバケット : {bucket}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. (必須)使用するモデルの動作確認と S3 への転送\n", "1. tensorflow の pre-trained model である mobilenet を読み込む\n", "2. 推論用の画像とラベルをダウンロード\n", "3. 推論用の画像を前処理\n", "4. モデルを tar.gz に固めて S3 にアップロード(SageMaker Hosting するため)\n", "5. 使用する推論用コンテナの URI を取得" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "model = tf.keras.applications.mobilenet.MobileNet()\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "WORK_DIR = f'{os.getcwd()}/work/'\n", "\n", "os.makedirs(WORK_DIR, exist_ok=True)\n", "# サンプル画像をダウンロード\n", "file = tf.keras.utils.get_file(\n", " f'{WORK_DIR}cat.jpg',\n", " 'https://gahag.net/img/201608/11s/gahag-0115329292-1.jpg')\n", "\n", "# 分類クラスをダウンロード\n", "labels_path = tf.keras.utils.get_file(\n", " f'{WORK_DIR}/ImageNetLabels.txt',\n", " 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')\n", "labels = list(np.array(open(labels_path).read().splitlines())[1:])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with open('./code/labels.txt','wt') as f:\n", " for txt in labels:\n", " f.write(txt+'\\n')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "imsize = Image.open(file).size" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 画像のresizeと前処理結果の確認\n", "x,y = imsize[0]-imsize[1],0\n", "img = Image.open(file).crop((x,y,900,537)).resize((model.input_shape[1],model.input_shape[2]))\n", "img_arr = ((np.array(img)-127.5)/127.5).astype(np.float32).reshape(-1,model.input_shape[1],model.input_shape[2],3)\n", "img" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# モデルの動作確認\n", "print(labels[np.argmax(model.predict(img_arr))]) # tabby" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# 保存ディレクトリを指定\n", "MODEL_DIR = './mobilenet/0001'\n", "\n", "# tar.gz の出力先を指定\n", "TAR_DIR = 'MyModel'\n", "os.makedirs(TAR_DIR, exist_ok=True)\n", "TAR_NAME = os.path.join(TAR_DIR, 'model.tar.gz')\n", "\n", "# モデルを SavedModel 形式で保存\n", "model.save(MODEL_DIR)\n", "\n", "# tar.gz ファイルを出力\n", "with tarfile.open(TAR_NAME, mode='w:gz') as tar:\n", " tar.add(MODEL_DIR)\n", "\n", "# S3 にアップロードして、返り値としてS3のURIを受け取る\n", "model_s3_path = f's3://{bucket}/{TAR_DIR}'\n", "\n", "model_s3_uri = sagemaker.s3.S3Uploader.upload(\n", " local_path = TAR_NAME,\n", " desired_s3_uri = model_s3_path\n", ")\n", "\n", "print(model_s3_uri)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sagemaker SDK でマネージドコンテナの URI を取得\n", "container_image_tf24_uri = sagemaker.image_uris.retrieve(\n", " \"tensorflow\", # TensorFlow のマネージドコンテナを利用\n", " sagemaker.session.Session().boto_region_name, # ECR のリージョンを指定\n", " version='2.4', # TensorFlow のバージョンを指定\n", " instance_type = 'ml.m5.large', # インスタンスタイプを指定\n", " image_scope = 'inference' # 推論コンテナを指定\n", ")\n", "\n", "print(container_image_tf24_uri)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. TensorFlow の SageMaker マネージドコンテナ(TensorFlow Serving)を利用した hosting\n", "* コンテナの詳細は[こちら](https://github.com/aws/sagemaker-tensorflow-serving-container)\n", "* SageMaker SDK と boto3 を利用した場合それぞれ行う\n", "* 前提として事前に saved model 形式で保存したモデルを tar.gz に固めて S3 に配置しておく(2 で実施済)\n", "\n", "### 3-1. SageMaker SDK の場合の手順概要\n", "1. SavedModel 形式でモデルを保存(済)\n", "2. モデルを tar.gz で固める(済)\n", "3. S3 にモデルをアップロード(済)\n", "4. SageMaker SDK の [TensorFlowModel](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html?highlight=TensorFlowModel#sagemaker.tensorflow.model.TensorFlowModel) API で S3 に配置したモデルを読み込む\n", "5. [deploy](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html?highlight=TensorFlowModel#sagemaker.tensorflow.model.TensorFlowModel.deploy) メソッドで推論エンドポイントを作成\n", "6. 推論実行\n", "7. 推論エンドポイントを削除(付随するものも合わせて削除)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelFromSMSDK'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'Endpoint'\n", "ENDPOINT_NAME = ENDPOINT_CONFIG_NAME" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# モデルとコンテナの指定\n", "tf_model = TensorFlowModel(\n", " name = MODEL_NAME,\n", " model_data=model_s3_uri, # モデルの S3 URI\n", " role= sm_role, # 割り当てるロール\n", " image_uri = container_image_tf24_uri, # コンテナイメージの S3 URI\n", ")\n", "# デプロイ(endpoint 生成)\n", "predictor = tf_model.deploy(\n", " endpoint_name=ENDPOINT_NAME, # エンドポイントの名前\n", " initial_instance_count=1, # インスタンス数\n", " instance_type='ml.m5.large', # インスタンスタイプ\n", ")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "img = Image.open(file).resize((model.input_shape[1],model.input_shape[2]))\n", "img_arr = ((np.array(img)-127.5)/127.5).astype(np.float32).reshape(-1,model.input_shape[1],model.input_shape[2],3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "result = np.argmax(predictor.predict(img_arr)['predictions'][0])\n", "print(labels[result])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3-2. boto3 の場合の手順概要\n", "1. SavedModel 形式でモデルを保存(済)\n", "2. モデルを tar.gz で固める(済)\n", "3. S3 にモデルをアップロード(済)\n", "4. boto3 の [create_model](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model) メソッドで SageMaker のサービスに S3 にアップロードしたモデルを登録する\n", "5. boto3 の [create_endpoint_config](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_endpoint_config)で推論エンドポイントの設定を作成する\n", "6. boto3 の [create_endpoint](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_endpoint) で推論エンドポイントを作成する\n", "7. boto3 の [invoke_endpoint](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime.html#SageMakerRuntime.Client.invoke_endpoint) で推論する\n", "8. 推論エンドポイントを削除" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelAddProcessFromBoto3'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'EndpointConfig'\n", "ENDPOINT_NAME = MODEL_NAME + 'Endpoint'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_model(\n", " ModelName=MODEL_NAME,\n", " PrimaryContainer={\n", " # SageMaker SDK の時と同じ URI を指定\n", " 'Image': container_image_tf24_uri,\n", " # SageMaker SDK の時と同じ URI を指定\n", " 'ModelDataUrl': model_s3_uri,\n", " },\n", " # SageMaker SDK の時と同じ role を指定\n", " ExecutionRoleArn=sm_role,\n", ")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_endpoint_config(\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", " ProductionVariants=[\n", " {\n", " 'VariantName': 'AllTrafic',\n", " 'ModelName': MODEL_NAME,\n", " 'InitialInstanceCount': 1,\n", " 'InstanceType': 'ml.m5.xlarge',\n", " },\n", " ],\n", ")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "response = sm_client.create_endpoint(\n", " EndpointName=ENDPOINT_NAME,\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", ")\n", "endpoint_inservice_waiter.wait(\n", " EndpointName=ENDPOINT_NAME,\n", " WaiterConfig={'Delay': 5,}\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# リストを文字列にして渡すパターン\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'Body' : str(img_arr.tolist())\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "predictions = json.loads(response['Body'].read().decode('utf-8'))['predictions'][0]\n", "print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# jsonにして渡すパターン\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'Body' : json.dumps({\"instances\": img_arr.tolist()})\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "predictions = json.loads(response['Body'].read().decode('utf-8'))['predictions'][0]\n", "print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# リソース削除\n", "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. 前処理/後処理追加\n", "* リスト形式でデータを作成し(た後で json形式に変換し)て predict を行っていたが、 `inference.py` を使うことで前処理/後処理を endpoint 側で行うことも可能。\n", " * 重い画像の前処理を潤沢なエンドポイントのコンピューティングリソースで実行することで、呼び出し側 (Lambda など)の頻繁に処理するコンピューティングリソースの負荷を低減できる\n", " * 呼び出し側が前処理を意識せずに実装できるようになる(呼び出し側はデータサイエンティストの領域に入らずに済み、エンドポイントで実行する前処理までをDSの領域にできる)\n", "* 以下を例に実装する。 \n", " * 前処理)画像のバイナリデータを base64 エンコーディングしたものを直接送りつけて、 endpoint 側でリストに変換\n", " * 後処理)softmax の結果から一番可能性の高い値を取得し、そのインデックスからラベルに変換\n", "\n", "### 4-1. SageMaker Python SDK で前処理/後処理を追加してホスティングと推論\n", "手順は前後の処理無しの場合と同じで、`TensorFlowModel` APIでモデルを読み込む際、前処理/後処理を記載した `inference.py` とそのディレクトリを指定する" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize ./code/inference.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelAddProcessFromSMSDK'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'Endpoint'\n", "ENDPOINT_NAME = ENDPOINT_CONFIG_NAME" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "TAR_DIR = 'MyModelAddProcess'\n", "code_dir = './code'\n", "os.makedirs(TAR_DIR, exist_ok=True)\n", "TAR_NAME = os.path.join(TAR_DIR, 'model.tar.gz')\n", "with tarfile.open(TAR_NAME, mode='w:gz') as tar:\n", " tar.add(MODEL_DIR)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "model_add_process_s3_path = f's3://{bucket}/{TAR_DIR}'\n", "\n", "model_add_process_s3_uri = sagemaker.s3.S3Uploader.upload(\n", " local_path = TAR_NAME,\n", " desired_s3_uri = model_add_process_s3_path\n", ")\n", "print(model_add_process_s3_uri)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### inference.py と必要なファイルの設定\n", "* entry_point 引数で `inference.py` (名前固定)を指定すると `input_handler` と `output_handler` を推論前後に実行してくれる\n", "* 必要なモジュール等がある場合は `source_dir` 引数に格納してあるディレクトリを指定すると一緒に読み込むが、 inference.py が `source_dir` のルートに存在する必要がある\n", "* ホスティング先の展開ディレクトリは `/opt/ml/model/code` になるので、他のファイルを読み込む時は絶対パスで指定するとよい(カレントディレクトリは `/sagemaker` で実行される)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# モデルとコンテナの指定\n", "tf_model = TensorFlowModel(\n", " name = MODEL_NAME,\n", " model_data=model_add_process_s3_uri, # モデルの S3 URI\n", " role= sm_role, # 割り当てるロール\n", " image_uri = container_image_tf24_uri, # コンテナイメージの S3 URI\n", " entry_point = './code/inference.py',\n", " source_dir = './code/'\n", ")\n", "# デプロイ(endpoint 生成)\n", "predictor = tf_model.deploy(\n", " endpoint_name=ENDPOINT_NAME,\n", " initial_instance_count=1, # インスタンス数\n", " instance_type='ml.m5.xlarge', # インスタンスタイプ\n", ")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# 推論\n", "with open('./work/cat.jpg', 'rb') as img:\n", " data = img.read()\n", "bio = BytesIO()\n", "bio.write(data)\n", "b64_data = base64.b64encode(bio.getvalue()).decode('utf-8')\n", "json_b64 = json.dumps({'b64_image':b64_data})\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'Body' : json_b64\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "print(response['Body'].read().decode('utf-8'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# おかたづけ\n", "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4-2. Boto3 で前処理/後処理を追加してホスティングと推論" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelAddProcessFromBoto3'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'EndpointConfig'\n", "ENDPOINT_NAME = MODEL_NAME + 'Endpoint'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### inference.py 他を model.tar.gz に同包 \n", "boto3 から endpoint を作成する場合は、SageMaker SDK のように `entry_point` や `source_dir` の設定ができないため、 必要なファイルは予め `model.tar.gz` に一緒に入れる必要がある \n", "(SageMaker SDK の場合は裏側で自動で `inference.py` などを model.tar.gz に再度固めて s3 にアップロードしてくれている)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# model.tar.gz にモデルなどを固める\n", "TAR_DIR = 'MyModelAddProcess'\n", "code_dir = './code'\n", "os.makedirs(TAR_DIR, exist_ok=True)\n", "TAR_NAME = os.path.join(TAR_DIR, 'model.tar.gz')\n", "with tarfile.open(TAR_NAME, mode='w:gz') as tar:\n", " tar.add(MODEL_DIR)\n", " tar.add(code_dir) # inference.py などを同包" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_add_process_s3_path = f's3://{bucket}/{TAR_DIR}'\n", "\n", "model_add_process_s3_uri = sagemaker.s3.S3Uploader.upload(\n", " local_path = TAR_NAME,\n", " desired_s3_uri = model_add_process_s3_path\n", ")\n", "print(model_add_process_s3_uri)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_model(\n", " ModelName=MODEL_NAME,\n", " PrimaryContainer={\n", " # SageMaker SDK の時と同じ URI を指定\n", " 'Image': container_image_tf24_uri,\n", " # SageMaker SDK の時と同じ URI を指定\n", " 'ModelDataUrl': model_add_process_s3_uri,\n", " },\n", " # SageMaker SDK の時と同じ role を指定\n", " ExecutionRoleArn=sm_role,\n", ")\n", "response = sm_client.create_endpoint_config(\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", " ProductionVariants=[\n", " {\n", " 'VariantName': 'AllTrafic',\n", " 'ModelName': MODEL_NAME,\n", " 'InitialInstanceCount': 1,\n", " 'InstanceType': 'ml.m5.xlarge',\n", " },\n", " ],\n", ")\n", "response = sm_client.create_endpoint(\n", " EndpointName=ENDPOINT_NAME,\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", ")\n", "\n", "endpoint_inservice_waiter.wait(\n", " EndpointName=ENDPOINT_NAME,\n", " WaiterConfig={'Delay': 5,}\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 推論\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'Body' : json_b64\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "print(response['Body'].read().decode('utf-8'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 削除\n", "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. マルチモデルエンドポイント\n", "* 1つの推論インスタンスに複数のモデルをデプロイすることが可能\n", "* モデルごとにtar.gzにかためて、S3 の指定プレフィックス直下に配置する\n", "* 以下は boto3 の例。SageMaker SDK でもマルチモデルエンドポイントは可能で詳細は[こちら](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/deploying_tensorflow_serving.html?highlight=multi%20model#deploying-more-than-one-model-to-your-endpoint)\n", "* エンドポイント作成手順はシングルモデルと変わらないが、それぞれのモデルを {モデル名}.tar.gz に固めた上で同じキープレフィックスに配置し、create_model する際の引数に、tar.gzを配置しているプレフィックス(tar.gzのオブジェクトのURIではない)を指定する\n", "* 呼び出す(invoke_endpoint)する際にモデルのファイル名を指定する\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### モデル準備と動作確認( mobilenetv2 )\n", "新しくmobilenetv2を追加し、mobilenetとmobilenetv2の2モデルを1つのエンドポイントでホスティングする準備" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "model2 = tf.keras.applications.mobilenet_v2.MobileNetV2()\n", "model2.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# モデルの動作確認(v1との比較)\n", "# mobilenet\n", "prediction = model.predict(img_arr)[0]\n", "print(prediction[np.argmax(prediction)],labels[np.argmax(prediction)])\n", "# mobilenetV2\n", "prediction = model2.predict(img_arr)[0]\n", "print(prediction[np.argmax(prediction)],labels[np.argmax(prediction)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# MobileNetV2 の準備\n", "# 保存ディレクトリを指定\n", "MODEL2_DIR = './mobilenetv2/0001'\n", "\n", "# モデルを SavedModel 形式で保存\n", "model2.save(MODEL2_DIR)\n", "\n", "# mobilenetv2.tar.gz の出力先を指定\n", "TAR_DIR = 'MyMultiModel'\n", "os.makedirs(TAR_DIR, exist_ok=True)\n", "TAR_NAME = os.path.join(TAR_DIR, 'mobilenetv2.tar.gz')\n", "\n", "# tar.gz ファイルを出力\n", "with tarfile.open(TAR_NAME, mode='w:gz') as tar:\n", " tar.add(MODEL2_DIR, arcname=\"0001\")\n", "\n", "# MobileNet の準備\n", "# mobilenet.tar.gz の出力先を指定\n", "TAR_NAME = os.path.join(TAR_DIR, 'mobilenet.tar.gz')\n", "# tar.gz ファイルを出力\n", "with tarfile.open(TAR_NAME, mode='w:gz') as tar:\n", " tar.add(MODEL_DIR, arcname=\"0001\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mobilenet と mobilenet v2 をそれぞれ S3 にアップロードする\n", "multi_model_s3_path = f's3://{bucket}/{TAR_DIR}/'\n", "\n", "!aws s3 cp ./{TAR_DIR}/ {multi_model_s3_path} --recursive" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyMultiModel'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'EndpointConfig'\n", "ENDPOINT_NAME = MODEL_NAME + 'Endpoint'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### モデルの作成~エンドポイント作成\n", "* シングルモデルのときはtar.gzのパスを指定していたが、マルチモデルのときはモデルを保存しているプレフィックスを指定する\n", "* 他はシングルモデルと同じ" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_model(\n", " ModelName=MODEL_NAME,\n", " PrimaryContainer={\n", " 'Image': container_image_tf24_uri,\n", " 'Mode':'MultiModel',\n", " 'ModelDataUrl': multi_model_s3_path, # tar.gz を複数配置している s3 のプレフィックスを指定\n", " },\n", " ExecutionRoleArn=sm_role,\n", ")\n", "response = sm_client.create_endpoint_config(\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", " ProductionVariants=[\n", " {\n", " 'VariantName': 'AllTrafic',\n", " 'ModelName': MODEL_NAME,\n", " 'InitialInstanceCount': 1,\n", " 'InstanceType': 'ml.m5.xlarge',\n", " },\n", " ],\n", ")\n", "response = sm_client.create_endpoint(\n", " EndpointName=ENDPOINT_NAME,\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", ")\n", "\n", "endpoint_inservice_waiter.wait(\n", " EndpointName=ENDPOINT_NAME,\n", " WaiterConfig={'Delay': 5,}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### マルチモデルエンドポイントでの推論\n", "* `TargetModel` 引数にtar.gzに固めたモデルのファイル名を入れればそのモデルが使用される" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mobilenet推論\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'TargetModel' : 'mobilenet.tar.gz',\n", " 'Body' : json.dumps({\"instances\": img_arr.tolist()})\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "predictions = json.loads(response['Body'].read().decode('utf-8'))['predictions'][0]\n", "print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mobilenetv2推論\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'TargetModel' : 'mobilenetv2.tar.gz',\n", " 'Body' : json.dumps({\"instances\": img_arr.tolist()})\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "predictions = json.loads(response['Body'].read().decode('utf-8'))['predictions'][0]\n", "print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### モデルの追加\n", "* 同じプレフィックス下に新しくモデルを追加すれば追加したモデルで推論可能\n", "* ここでは mobilenetv2 を別名に差し替えて(mobilenetv2**_2**)、追加でアップロードしてそちらも機能することを確認する\n", "\n", "注1)モデルの削除は S3 から削除すればできるが、タイムラグがかなりあるので注意。モデルをホスティングをしているインスタンスからモデルが削除されない限り(コントロールできない領域で、ホスティングしているインスタンスのメモリ/ストレージが不足したときのみ自動で読み込んでいるモデルが削除される)S3から削除したモデルで推論できる。 \n", "注2)同様にモデルの更新についても、S3に配置したモデルを上書き保存しても古いモデルがうごき続けてしまう可能性がある。[公式のメッセージ](https://docs.aws.amazon.com/sagemaker/latest/dg/add-models-to-endpoint.html)としては「上書き保存はするな」" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws s3 cp ./{TAR_DIR}/mobilenetv2.tar.gz {multi_model_s3_path}mobilenetv2_2.tar.gz" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mobilenet推論\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'TargetModel' : 'mobilenetv2_2.tar.gz', # 後から追加したモデル\n", " 'Body' : json.dumps({\"instances\": img_arr.tolist()})\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "predictions = json.loads(response['Body'].read().decode('utf-8'))['predictions'][0]\n", "print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. 非同期推論\n", "* キュー形式でリクエストを出して、非同期に推論結果を得る\n", "* 大きいデータに対して数分オーダで推論結果を求められる場合に適する(非同期推論は推論対象データが数 GB でも返せる)\n", "* 詳細は[こちら](https://aws.amazon.com/jp/about-aws/whats-new/2021/08/amazon-sagemaker-asynchronous-new-inference-option/)\n", "* 使い方はリアルタイム推論に近いが、`endpoint_config`で非同期独自の設定をする" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelFromBoto3Async'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'EndpointConfig'\n", "ENDPOINT_NAME = MODEL_NAME + 'Endpoint'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### モデルの作成\n", "リアルタイム推論と同じ" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_model(\n", " ModelName=MODEL_NAME,\n", " PrimaryContainer={\n", " # SageMaker SDK の時と同じ URI を指定\n", " 'Image': container_image_tf24_uri,\n", " # SageMaker SDK の時と同じ URI を指定\n", " 'ModelDataUrl': model_s3_uri,\n", " },\n", " # SageMaker SDK の時と同じ role を指定\n", " ExecutionRoleArn=sm_role,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 推論エンドポイントの設定\n", "`AsyncInferenceConfig` という引数で、推論結果を配置するS3の出力先を指定する" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_endpoint_config(\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", " ProductionVariants=[\n", " {\n", " 'VariantName': 'AllTrafic',\n", " 'ModelName': MODEL_NAME,\n", " 'InitialInstanceCount': 1,\n", " 'InstanceType': 'ml.m5.xlarge',\n", " },\n", " ],\n", " AsyncInferenceConfig={\n", " \"OutputConfig\": {\n", " \"S3OutputPath\": f\"s3://{bucket}/async_inference/output\"\n", " },\n", " }\n", ")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 推論エンドポイントの作成\n", "リアルタイム推論と同じ" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_endpoint(\n", " EndpointName=ENDPOINT_NAME,\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", ")\n", "\n", "endpoint_inservice_waiter.wait(\n", " EndpointName=ENDPOINT_NAME,\n", " WaiterConfig={'Delay': 5,}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 非同期推論実行\n", "事前にS3に推論データを配置して、`invoke_endpoint_async`で非同期推論を実行する" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "json_name = './tabby.json' \n", "with open(json_name,'wt') as f:\n", " f.write(json.dumps({\"instances\": img_arr.tolist()}))\n", "tabby_s3_uri = sagemaker.s3.S3Uploader.upload(\n", " local_path = json_name,\n", " desired_s3_uri = f\"s3://{bucket}/async_inference/input\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "response = smr_client.invoke_endpoint_async(\n", " EndpointName=ENDPOINT_NAME, \n", " InputLocation=tabby_s3_uri,\n", " ContentType='application/json'\n", ")\n", "output_s3_uri = response['OutputLocation']\n", "output_key = output_s3_uri.replace(f's3://{bucket}/','')\n", "while True:\n", " result = s3_client.list_objects(Bucket=bucket, Prefix=output_key)\n", " exists = True if \"Contents\" in result else False\n", " if exists:\n", " print('!')\n", " obj = s3_client.get_object(Bucket=bucket, Key=output_key)\n", " predictions = json.loads(obj['Body'].read().decode())['predictions'][0]\n", " print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])\n", " break\n", " else:\n", " print('.',end='')\n", " sleep(0.1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. オートスケール\n", "* Endpoint はオートスケールさせることができる\n", " * 推論が増えたら自動で増強、減ったら削減、など\n", " * スケーリング対象のメトリクスは[こちら](https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html#cloudwatch-metrics-endpoint-invocation)\n", "* 以下は非同期推論を用いてオートスケールをした場合だが、同期推論もやり方は同じ\n", "* オートスケールはエンドポイントを立てた後、 AWS の アプリケーションオートスケーリングサービスを利用して実現する" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelFromBoto3AsyncWithAutoScaling'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'EndpointConfig'\n", "ENDPOINT_NAME = MODEL_NAME + 'Endpoint'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### モデルの作成" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_model(\n", " ModelName=MODEL_NAME,\n", " PrimaryContainer={\n", " 'Image': container_image_tf24_uri,\n", " 'ModelDataUrl': model_s3_uri,\n", " },\n", " ExecutionRoleArn=sm_role,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 推論エンドポイントの設定" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "VARIANT_NAME = 'MyVariant'\n", "response = sm_client.create_endpoint_config(\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", " ProductionVariants=[\n", " {\n", " 'VariantName': VARIANT_NAME,\n", " 'ModelName': MODEL_NAME,\n", " 'InitialInstanceCount': 1,\n", " 'InstanceType': 'ml.m5.xlarge',\n", " },\n", " ],\n", " AsyncInferenceConfig={\n", " \"OutputConfig\": {\n", " \"S3OutputPath\": f\"s3://{bucket}/async_inference_with_autoscaling/output\"\n", " },\n", " }\n", ")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = sm_client.create_endpoint(\n", " EndpointName=ENDPOINT_NAME,\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", ")\n", "\n", "endpoint_inservice_waiter.wait(\n", " EndpointName=ENDPOINT_NAME,\n", " WaiterConfig={'Delay': 5,}\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# オートスケーリングの設定\n", "aa_client = boto3.client('application-autoscaling')\n", "SCALABLE_DIMENSION = 'sagemaker:variant:DesiredInstanceCount'\n", "\n", "resource_id = (f'endpoint/{ENDPOINT_NAME}/variant/{VARIANT_NAME}')\n", "\n", "response = aa_client.register_scalable_target(\n", " ServiceNamespace=\"sagemaker\",\n", " ResourceId=resource_id,\n", " ScalableDimension=SCALABLE_DIMENSION,\n", " MinCapacity=1,\n", " MaxCapacity=2,\n", ")\n", "\n", "response = aa_client.put_scaling_policy(\n", " PolicyName=\"Invocations-ScalingPolicy\",\n", " ServiceNamespace=\"sagemaker\",\n", " ResourceId=resource_id,\n", " ScalableDimension=SCALABLE_DIMENSION,\n", " PolicyType=\"TargetTrackingScaling\",\n", " TargetTrackingScalingPolicyConfiguration={\n", " \"TargetValue\": 1.0,\n", " \"CustomizedMetricSpecification\": {\n", " \"MetricName\": \"ApproximateBacklogSizePerInstance\",\n", " \"Namespace\": \"AWS/SageMaker\",\n", " \"Dimensions\": [{\"Name\": \"EndpointName\", \"Value\": ENDPOINT_NAME}],\n", " \"Statistic\": \"Average\",\n", " },\n", " \"ScaleInCooldown\": 10,\n", " \"ScaleOutCooldown\": 10\n", " },\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# インスタンス数の確認\n", "instance_count=sm_client.describe_endpoint(EndpointName=ENDPOINT_NAME)['ProductionVariants'][0]['CurrentInstanceCount']\n", "print(f'現在稼動しているインスタンス数: {instance_count}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 推論を50000回行って負荷をかけてオートスケーリングするかを確認する" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%%time\n", "\n", "json_name = './tabby.json' \n", "with open(json_name,'wt') as f:\n", " f.write(json.dumps({\"instances\": img_arr.tolist()}))\n", "tabby_s3_uri = sagemaker.s3.S3Uploader.upload(\n", " local_path = json_name,\n", " desired_s3_uri = f\"s3://{bucket}/async_inference/input\"\n", ")\n", "\n", "output_key_list = []\n", "# 推論\n", "for _ in range(50000):\n", " response = smr_client.invoke_endpoint_async(\n", " EndpointName=ENDPOINT_NAME, \n", " InputLocation=tabby_s3_uri,\n", " ContentType='application/json'\n", " )\n", " output_s3_uri = response['OutputLocation']\n", " output_key = output_s3_uri.replace(f's3://{bucket}/','')\n", " output_key_list.append(output_key)\n", "\n", "# 全ての結果を確認する\n", "for output_key in output_key_list:\n", " while True:\n", " result = s3_client.list_objects(Bucket=bucket, Prefix=output_key)\n", " exists = True if \"Contents\" in result else False\n", " if exists:\n", "# print('!',end='')\n", "# # 結果確認\n", "# obj = s3_client.get_object(Bucket=bucket, Key=output_key)\n", "# predictions = json.loads(obj['Body'].read().decode())['predictions'][0]\n", "# print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])\n", " break\n", " else:\n", " print('.',end='')\n", " sleep(1)\n", "\n", "# インスタンス数の確認\n", "instance_count=sm_client.describe_endpoint(EndpointName=ENDPOINT_NAME)['ProductionVariants'][0]['CurrentInstanceCount']\n", "print(f'現在稼動しているインスタンス数: {instance_count}')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. サーバーレス推論(2021/12時点ではパブリックプレビュー)\n", "* インスタンスを意識せずにエンドポイントだけ建てる\n", "* 実態は推論イベント発生時に都度コンピューティングリソースが立ち上がる\n", "* 通常のリアルタイム推論との違いは `create_endpoint_config` する際に、インスタンス数やインスタンスタイプの設定はしなくなり、代わりに`ServerlessConfig`の中でメモリと最大同時期同数を設定する" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelFromBoto3Serverless'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'EndpointConfig'\n", "ENDPOINT_NAME = MODEL_NAME + 'Endpoint'\n", "response = sm_client.create_model(\n", " ModelName=MODEL_NAME,\n", " PrimaryContainer={\n", " 'Image': container_image_tf24_uri,\n", " 'ModelDataUrl': model_s3_uri,\n", " },\n", " ExecutionRoleArn=sm_role,\n", ")\n", "response = sm_client.create_endpoint_config(\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", " ProductionVariants=[\n", " {\n", " 'VariantName': 'AllTrafic',\n", " 'ModelName': MODEL_NAME,\n", " # インスタンスカウントやインスタンスタイプはなくなる\n", " 'ServerlessConfig': { # 通常のリアルタイム推論とは違い、ServerlessConfig というキーで設定する\n", " 'MemorySizeInMB': 1024, # メモリサイズは 1024 , 2048, 3072, 4096, 5120, 6144 から選ぶ\n", " 'MaxConcurrency': 3 # 最大同時起動数\n", " }\n", " },\n", " ],\n", ")\n", "response = sm_client.create_endpoint(\n", " EndpointName=ENDPOINT_NAME,\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", ")\n", "\n", "endpoint_inservice_waiter.wait(\n", " EndpointName=ENDPOINT_NAME,\n", " WaiterConfig={'Delay': 5,}\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# jsonにして渡すパターン\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'Body' : json.dumps({\"instances\": img_arr.tolist()})\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "predictions = json.loads(response['Body'].read().decode('utf-8'))['predictions'][0]\n", "print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 9. 独自コンテナイメージの持ち込みを利用した推論\n", "* SageMaker のマネージドコンテナイメージ以外に、独自のコンテナイメージを持ち込める\n", "* コンテナイメージをビルドし、ECR にプッシュしてその URL を指定する以外はマネージドコンテナイメージと使い方が一緒\n", "* SageMaker Studio の場合はコンテナイメージのビルドに `sm-docker build` コマンドを使う必要がある\n", " * `sm-docker build` する前にロールの信頼関係とポリシーを追加する必要がある \n", " `sm-docker build` が裏側で AWS CodeBuild を使うために必要\n", "* SageMaker Notebook (classic) の場合はそのまま `docker build` コマンドを使えばよい" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 定数の設定\n", "IMAGE_NAME = 'sagemaker_byoc_tf_inference-cpu'\n", "TAG = ':1'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ビルドするコンテナイメージと、依存関係のあるモジュールの確認" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cat container/Dockerfile" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!cat container/requirements.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **SageMaker Studio を使っている場合のみ**以下を実行\n", "#### sm-docker コマンドのインストール" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!pip install sagemaker-studio-image-build" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Studio アタッチされているロールに以下の信頼関係を追加\n", "以下のコマンドの出力結果をコピーして追加する" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!cat container/trust_relationships.json" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Studio アタッチされているロールにインラインポリシー(もしくはポリシーを別途作成して)アタッチする\n", "以下のコマンドの出力結果をコピーしてアタッチする" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!cat container/inline_policy.json" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### コンテナイメージのビルド" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "%cd container\n", "!chmod +x tfserve/serve\n", "!sm-docker build . --repository {IMAGE_NAME}{TAG}\n", "%cd ../" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "account_id = boto3.client('sts').get_caller_identity()['Account']\n", "region_name = boto3.session.Session().region_name\n", "tf_own_image_uri = f'{account_id}.dkr.ecr.{region_name}.amazonaws.com/{IMAGE_NAME}{TAG}'\n", "print(tf_own_image_uri)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### **SageMaker Notebook (classic) をつかってる場合のみ**以下を実行" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%cd container\n", "!chmod +x tfserve/serve\n", "!docker build -t {IMAGE_NAME}{TAG} .\n", "%cd ../" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# boto3の機能を使ってリポジトリ名に必要な情報を取得する\n", "account_id = boto3.client('sts').get_caller_identity().get('Account')\n", "region = boto3.session.Session().region_name\n", "ecr_endpoint = f'{account_id}.dkr.ecr.{region}.amazonaws.com/' \n", "repository_uri = f'{ecr_endpoint}{IMAGE_NAME}'\n", "tf_own_image_uri = f'{repository_uri}{TAG}'\n", "\n", "!aws ecr get-login-password --region {region} | docker login --username AWS --password-stdin {ecr_endpoint}\n", "!docker tag {image_name}{tag} {image_uri}\n", "# 同名のリポジトリがあった場合は削除\n", "!aws ecr delete-repository --repository-name $image_name --force\n", "# リポジトリを作成\n", "!aws ecr create-repository --repository-name $image_name\n", "# イメージをプッシュ\n", "!docker push {image_uri}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 以下共通で推論エンドポイントの作成" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "MODEL_NAME = 'MyTFModelFromBoto3BYOC'\n", "ENDPOINT_CONFIG_NAME = MODEL_NAME + 'EndpointConfig'\n", "ENDPOINT_NAME = MODEL_NAME + 'Endpoint'\n", "response = sm_client.create_model(\n", " ModelName=MODEL_NAME,\n", " PrimaryContainer={\n", " 'Image': tf_own_image_uri,\n", " 'ModelDataUrl': model_s3_uri,\n", " },\n", " ExecutionRoleArn=sm_role,\n", ")\n", "response = sm_client.create_endpoint_config(\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", " ProductionVariants=[\n", " {\n", " 'VariantName': 'AllTrafic',\n", " 'ModelName': MODEL_NAME,\n", " 'InitialInstanceCount': 1,\n", " 'InstanceType': 'ml.m5.xlarge',\n", " },\n", " ],\n", ")\n", "\n", "response = sm_client.create_endpoint(\n", " EndpointName=ENDPOINT_NAME,\n", " EndpointConfigName=ENDPOINT_CONFIG_NAME,\n", ")\n", "endpoint_inservice_waiter.wait(\n", " EndpointName=ENDPOINT_NAME,\n", " WaiterConfig={'Delay': 5,}\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# jsonにして渡すパターン\n", "request_args = {\n", " 'EndpointName': ENDPOINT_NAME,\n", " 'ContentType' : 'application/json',\n", " 'Accept' : 'application/json',\n", " 'Body' : json.dumps({\"instances\": img_arr.tolist()})\n", "}\n", "response = smr_client.invoke_endpoint(**request_args)\n", "predictions = json.loads(response['Body'].read().decode('utf-8'))['predictions'][0]\n", "print(labels[np.argmax(predictions)],predictions[np.argmax(predictions)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "r = sm_client.delete_endpoint(EndpointName=ENDPOINT_NAME)\n", "r = sm_client.delete_endpoint_config(EndpointConfigName=ENDPOINT_CONFIG_NAME)\n", "r = sm_client.delete_model(ModelName=MODEL_NAME)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "instance_type": "ml.m5.large", "kernelspec": { "display_name": "Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-2:429704687514:image/tensorflow-2.3-cpu-py37-ubuntu18.04-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }