{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# bring-your-own-container for NGC TF&TRT BERT \n", "## Overview\n", "You can build a docker image with NGC assets for a \"bring your own container\" approach for a NGC BERT on AWS SageMaker. \n", "- With launch arg `train`: the container fine-tunes a pre-trained TF BERT and converts the fine-tuned model to TRT\n", "- With launch arg `serve`: the container can take inference requests. \n", "\n", "The NGC assets used can be seen in the `Dockerfile` and `entrypoint.sh`. \n", "\n", "> Note: the container used in this notebook assumes that the instance contains V100 GPUs, and will run the training job on the number of V100 GPUs available. Please select your instance type accordingly.\n", "\n", "## Details\n", "### About the Docker image\n", "\n", "The Docker image is built from base image from NGC `nvcr.io/nvidia/tensorflow:19.12-tf1-py3`. Additional packages are installed for TensorRT, as indicated in the Dockerfile. This example using TensorRT6, same as the public repo of NVIDIA TF BERT - see the [trt directory](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT/trt).\n", "\n", "### About the NGC model\n", "The NGC model is a BERT large model. You can explore the model [here on NGC](https://ngc.nvidia.com/catalog/models/nvidia:bert_tf_pretraining_lamb_16n).\n", "\n", "### About the entrypoint script\n", "- For `train`, `entrypoint.sh` downloads a pre-trained BERT TF model from NGC, clones the [GitHub repo](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT/) (equivalent to [NGC model scripts](https://ngc.nvidia.com/catalog/model-scripts/nvidia:bert_for_tensorflow)), modifies some of the scripts a little for this use case, runs fine-tuning, then converts the fine-tuned model into a TensorRT engine. \n", "\n", " - Note: the command in entrypoint.sh line 41 ```bash scripts/run_bert_squad.sh 5 5e-6 fp16 true $num_V100 384 128 large 1.1 $pretrained_modeldir/model.ckpt-1564 0.2 $finetuned_modeldir true true``` finetunes for 0.2 epochs to save time for demo purposes. If you want to fully fine-tune, change that 0.2 to 1.5 or more.\n", "\n", "- For `serve`, `entrypoint.sh` calls `serve.py`, which starts and defines a server for the model. The place that defines how inference requests are handled is `predictor.py`, imported to `wsgi.py`, and called by `serve.py`. Serving scripts are modified from the [trt directory](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT/trt) in the GitHub repo.\n", "\n", "As a first step, let's build the docker image and push it to ECR." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# build the image and push it to ECR\n", "# build-and-push.sh takes in one arg: the tag. Here we tag the image with 0.1, but feel free to change the tag\n", "# see docker/Dockerfile.sagemaker.gpu for details about the image\n", "!cd docker && bash build-and-push.sh 0.1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Environment Imports\n", "\n", "For this notebook, you can use the kernel conda_pytorch_p36. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import sagemaker as sage\n", "\n", "from sagemaker import get_execution_role" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Sagemaker Configuration" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "role = get_execution_role()\n", "session = sage.Session()\n", "\n", "TRAIN_INSTANCE_TYPE_ID = 'ml.p3.16xlarge'\n", "TRAIN_INSTANCE_COUNT = 1\n", "\n", "INFERENCE_INSTANCE_TYPE_ID = 'ml.p3.2xlarge'\n", "INFERENCE_INSTANCE_COUNT = 1\n", "\n", "\n", "OUTPUT_BUCKET = 's3://{bucket}/output'.format(bucket=session.default_bucket())\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create Estimator and Train" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "account = session.boto_session.client('sts').get_caller_identity()['Account']\n", "region = session.boto_session.region_name\n", "image_name = '{acct}.dkr.ecr.{region}.amazonaws.com/ngc-tf-bert-sagemaker-demo:0.1'.format(acct=account, region=region)\n", "\n", "estimator = sage.estimator.Estimator(image_name=image_name,\n", " role=role,\n", " train_instance_count=TRAIN_INSTANCE_COUNT,\n", " train_instance_type=TRAIN_INSTANCE_TYPE_ID,\n", " output_path=OUTPUT_BUCKET,\n", " sagemaker_session=session)\n", "\n", "estimator.fit(inputs=None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy the Estimator" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor = estimator.deploy(initial_instance_count=INFERENCE_INSTANCE_COUNT,\n", " instance_type=INFERENCE_INSTANCE_TYPE_ID)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run one inference\n", "Define the context and question, and run inference for one query." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.predictor import json_serializer\n", "from sagemaker.content_types import CONTENT_TYPE_JSON\n", "import numpy as np\n", "short_paragraph_text = \"The Apollo program was the third United States human spaceflight program. First conceived as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was dedicated to President John F. Kennedy's national goal of landing a man on the Moon. The first manned flight of Apollo was in 1968. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Run one inference\n", "Define the context and question, and run inference for one query." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.predictor import json_serializer\n", "from sagemaker.content_types import CONTENT_TYPE_JSON\n", "\n", "short_paragraph_text = \"The Apollo program was the third United States human spaceflight program. First conceived as a three-man spacecraft to follow the one-man Project Mercury which put the first Americans in space, Apollo was dedicated to President John F. Kennedy's national goal of landing a man on the Moon. The first manned flight of Apollo was in 1968. Apollo ran from 1961 to 1972, followed by the Apollo-Soyuz Test Project, a joint Earth orbit mission with the Soviet Union in 1975.\"\n", "question_text = \"What project put the first Americans into space?\"\n", "qa_test_sample = {'short_paragraph_text': short_paragraph_text, 'question_text': question_text}\n", "\n", "predictor.content_type = CONTENT_TYPE_JSON\n", "predictor.serializer = json_serializer\n", "\n", "predictor.predict(qa_test_sample).decode(\"utf-8\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Clean Up the Endpoint" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session.delete_endpoint(predictor.endpoint)" ] } ], "metadata": { "kernelspec": { "display_name": "conda_tensorflow_p36", "language": "python", "name": "conda_tensorflow_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" } }, "nbformat": 4, "nbformat_minor": 1 }