{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Gender Prediction, using Pre-trained Keras Model\n", "\n", "Deep Neural Networks can be used to extract features in the input and derive higher level abstractions. This technique is used regularly in vision, speech and text analysis. In this exercise, we use a pre-trained model deep learning model that would identify low level features in texts containing people's names, and would be able to classify them in one of two categories - Male or Female.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Network Architecture\n", "The problem we are trying to solve is to predict whether a given name belongs to a male or female. We will use supervised learning, where the character sequence making up the names would be `X` variable, and the flag indicating **Male(M)** or **Female(F)** would be `Y` variable.\n", "\n", "We use a stacked 2-Layer LSTM model and a final dense layer with softmax activation as our network architecture. We use categorical cross-entropy as loss function, with an Adam optimizer. We also add a 20% dropout layer is added for regularization to avoid over-fitting. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dependencies\n", "* The model was built using Keras, therefore we need to include Keras deep learning library to build the network locally, in order to be able to test, prior to hosting the model. \n", "* While running on SageMaker Notebook Instance, we choose conda_tensorflow kernel, so that Keras code is compiled to use tensorflow in the backend. \n", "* If you choose P2 and P3 class of instances for your Notebook, using Tensorflow ensures the low level code takes advantage of all available GPUs. So further dependencies needs to be installed.\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using TensorFlow backend.\n" ] } ], "source": [ "import os\n", "import time\n", "import numpy as np\n", "import keras\n", "from keras.models import load_model\n", "import boto3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model testing\n", "To test the validity of the model, we do some local testing.
\n", "The model was built to be able to process one-hot encoded data representing names, therefore we need to do same pre-processing on our test data (one-hot encoding using the same character indices)
\n", "We feed this one-hot encoded test data to the model, and the `predict` generates a vector, similar to the training labels vector we used before. Except in this case, it contains what model thinks the gender represented by each of the test records.
\n", "To present data intutitively, we simply map it back to `Male` / `Female`, from the `0` / `1` flag. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "lstm-gender-classifier-model.h5\n", "lstm-gender-classifier-indices.npy\n" ] } ], "source": [ "!tar -zxvf ../pretrained-model/model.tar.gz -C ../pretrained-model/ " ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'p': 15, 'v': 21, 'd': 3, 'f': 5, 'm': 12, 's': 18, 'l': 11, 'j': 9, 'g': 6, 'w': 22, 'x': 23, 'q': 16, 'n': 13, 'k': 10, 'i': 8, 'r': 17, 'e': 4, 'z': 25, 'u': 20, 'h': 7, 'b': 1, 'y': 24, 'a': 0, 'c': 2, 't': 19, 'o': 14}\n", "15\n", "26\n" ] } ], "source": [ "model = load_model('../pretrained-model/lstm-gender-classifier-model.h5')\n", "char_indices = np.load('../pretrained-model/lstm-gender-classifier-indices.npy').item()\n", "max_name_length = char_indices['max_name_length']\n", "char_indices.pop('max_name_length', None)\n", "alphabet_size = len(char_indices)\n", "print(char_indices)\n", "print(max_name_length)\n", "print(alphabet_size)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tom (M)\n", "Allie (F)\n", "Jim (M)\n", "Sophie (F)\n", "John (M)\n", "Kayla (F)\n", "Mike (M)\n", "Amanda (F)\n", "Andrew (M)\n" ] } ], "source": [ "names_test = [\"Tom\",\"Allie\",\"Jim\",\"Sophie\",\"John\",\"Kayla\",\"Mike\",\"Amanda\",\"Andrew\"]\n", "num_test = len(names_test)\n", "\n", "X_test = np.zeros((num_test, max_name_length, alphabet_size))\n", "\n", "for i,name in enumerate(names_test):\n", " name = name.lower()\n", " for t, char in enumerate(name):\n", " X_test[i, t,char_indices[char]] = 1\n", "\n", "predictions = model.predict(X_test)\n", "\n", "for i,name in enumerate(names_test):\n", " print(\"{} ({})\".format(names_test[i],\"M\" if predictions[i][0]>predictions[i][1] else \"F\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model saving\n", "In order to deploy the model behind an hosted endpoint, we need to save the model fileto an S3 location.
\n", " \n", "We can obtain the name of the S3 bucket from the execution role we attached to this Notebook instance. This should work if the policies granting read permission to IAM policies was granted, as per the documentation.\n", "\n", "If for some reason, it fails to fetch the associated bucket name, it asks the user to enter the name of the bucket. If asked, use the bucket that you created in Module-3, such as 'smworkshop-firstname-lastname'.
\n", " \n", "It is important to ensure that this is the same S3 bucket, to which you provided access in the Execution role used while creating this Notebook instance." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "smworkshop-john-doe\n" ] } ], "source": [ "sts = boto3.client('sts')\n", "iam = boto3.client('iam')\n", "\n", "\n", "caller = sts.get_caller_identity()\n", "account = caller['Account']\n", "arn = caller['Arn']\n", "role = arn[arn.find(\"/AmazonSageMaker\")+1:arn.find(\"/SageMaker\")]\n", "timestamp = role[role.find(\"Role-\")+5:]\n", "policyarn = \"arn:aws:iam::{}:policy/service-role/AmazonSageMaker-ExecutionPolicy-{}\".format(account, timestamp)\n", "\n", "s3bucketname = \"\"\n", "policystatements = []\n", "\n", "try:\n", " policy = iam.get_policy(\n", " PolicyArn=policyarn\n", " )['Policy']\n", " policyversion = policy['DefaultVersionId']\n", " policystatements = iam.get_policy_version(\n", " PolicyArn = policyarn, \n", " VersionId = policyversion\n", " )['PolicyVersion']['Document']['Statement']\n", "except Exception as e:\n", " s3bucketname=input(\"Which S3 bucket do you want to use to host training data and model? \")\n", " \n", "for stmt in policystatements:\n", " action = \"\"\n", " actions = stmt['Action']\n", " for act in actions:\n", " if act == \"s3:ListBucket\":\n", " action = act\n", " break\n", " if action == \"s3:ListBucket\":\n", " resource = stmt['Resource'][0]\n", " s3bucketname = resource[resource.find(\":::\")+3:]\n", "\n", "print(s3bucketname)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "s3 = boto3.resource('s3')\n", "s3.meta.client.upload_file('../pretrained-model/model.tar.gz', s3bucketname, 'model/model.tar.gz')" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# Model hosting\n", "\n", "Amazon SageMaker provides a powerful orchestration framework that you can use to productionize any of your own machine learning algorithm, using any machine learning framework and programming languages.
\n", "This is possible because SageMaker, as a manager of containers, have standarized ways of interacting with your code running inside a Docker container. Since you are free to build a docker container using whatever code and depndency you like, this gives you freedom to bring your own machinery.
\n", "In the following steps, we'll containerize the prediction code and host the model behind an API endpoint.
\n", "This would allow us to use the model from web-application, and put it into real use.
\n", "The boilerplate code, which we affectionately call the `Dockerizer` framework, was made available on this Notebook instance by the Lifecycle Configuration that you used. Just look into the folder and ensure the necessary files are available as shown.
\n",
" \n",
" \n",
" \n",
" \n",
"* We'll write code into this file using Jupyter magic command - `writefile`. \n",
"We create `Class` variables in this class to hold loaded model, character indices, tensor-flow graph, and anything else that needs to be referenced while generating prediction. "
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to byoa/predictor.py\n"
]
}
],
"source": [
"%%writefile -a byoa/predictor.py\n",
"\n",
"# A singleton for holding the model. This simply loads the model and holds it.\n",
"# It has a predict function that does a prediction based on the model and the input data.\n",
"\n",
"class ScoringService(object):\n",
" model_type = None # Where we keep the model type, qualified by hyperparameters used during training\n",
" model = None # Where we keep the model when it's loaded\n",
" graph = None\n",
" indices = None # Where we keep the indices of Alphabet when it's loaded"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generally, we have to provide class methods to load the model and related artefacts from the model path as assigned by SageMaker within the running container. \n",
"Notice here that SageMaker copies the artefacts from the S3 location (as defined during model creation) into the container local file system."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to byoa/predictor.py\n"
]
}
],
"source": [
"%%writefile -a byoa/predictor.py\n",
"\n",
" @classmethod\n",
" def get_indices(cls):\n",
" #Get the indices for Alphabet for this instance, loading it if it's not already loaded\n",
" if cls.indices == None:\n",
" model_type='lstm-gender-classifier'\n",
" index_path = os.path.join(model_path, '{}-indices.npy'.format(model_type))\n",
" if os.path.exists(index_path):\n",
" cls.indices = np.load(index_path).item()\n",
" else:\n",
" print(\"Character Indices not found.\")\n",
" return cls.indices\n",
"\n",
" @classmethod\n",
" def get_model(cls):\n",
" #Get the model object for this instance, loading it if it's not already loaded\n",
" if cls.model == None:\n",
" model_type='lstm-gender-classifier'\n",
" mod_path = os.path.join(model_path, '{}-model.h5'.format(model_type))\n",
" if os.path.exists(mod_path):\n",
" cls.model = load_model(mod_path)\n",
" cls.model._make_predict_function()\n",
" cls.graph = tf.get_default_graph()\n",
" else:\n",
" print(\"LSTM Model not found.\")\n",
" return cls.model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, inside another clas method, named `predict`, we provide the code that we used earlier to generate prediction. \n",
"Only difference with our previous test prediciton (in development notebook) is that in this case, the predictor will grab the data from the `input` variable, which in turn is obtained from the HTTP request payload."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to byoa/predictor.py\n"
]
}
],
"source": [
"%%writefile -a byoa/predictor.py\n",
"\n",
" @classmethod\n",
" def predict(cls, input):\n",
"\n",
" mod = cls.get_model()\n",
" ind = cls.get_indices()\n",
"\n",
" result = {}\n",
"\n",
" if mod == None:\n",
" print(\"Model not loaded.\")\n",
" else:\n",
" if 'max_name_length' not in ind:\n",
" max_name_length = 15\n",
" alphabet_size = 26\n",
" else:\n",
" max_name_length = ind['max_name_length']\n",
" ind.pop('max_name_length', None)\n",
" alphabet_size = len(ind)\n",
"\n",
" inputs_list = input.strip('\\n').split(\",\")\n",
" num_inputs = len(inputs_list)\n",
"\n",
" X_test = np.zeros((num_inputs, max_name_length, alphabet_size))\n",
"\n",
" for i,name in enumerate(inputs_list):\n",
" name = name.lower().strip('\\n')\n",
" for t, char in enumerate(name):\n",
" if char in ind:\n",
" X_test[i, t,ind[char]] = 1\n",
"\n",
" with cls.graph.as_default():\n",
" predictions = mod.predict(X_test)\n",
"\n",
" for i,name in enumerate(inputs_list):\n",
" result[name] = 'M' if predictions[i][0]>predictions[i][1] else 'F'\n",
" print(\"{} ({})\".format(inputs_list[i],\"M\" if predictions[i][0]>predictions[i][1] else \"F\"))\n",
"\n",
" return json.dumps(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the prediction code captured, we move on to define the flask app, and provide a `ping`, which SageMaker uses to conduct health check on container instances that are responsible behind the hosted prediction endpoint. \n",
"Here we can have the container return healthy response, with status code `200` when everythings goes well. \n",
"For simplicity, we are only validating whether model has been loaded in this case. In practice, this provides opportunity extensive health check (including any external dependency check), as required."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to byoa/predictor.py\n"
]
}
],
"source": [
"%%writefile -a byoa/predictor.py\n",
"\n",
"# The flask app for serving predictions\n",
"app = flask.Flask(__name__)\n",
"\n",
"@app.route('/ping', methods=['GET'])\n",
"def ping():\n",
" #Determine if the container is working and healthy.\n",
" # Declare it healthy if we can load the model successfully.\n",
" health = ScoringService.get_model() is not None and ScoringService.get_indices() is not None\n",
" status = 200 if health else 404\n",
" return flask.Response(response='\\n', status=status, mimetype='application/json')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Last but not the least, we define a `transformation` method that would intercept the HTTP request coming through to the SageMaker hosted endpoint. \n",
"Here we have the opportunity to decide what type of data we accept with the request. In this particular example, we are accepting only `CSV` formatted data, decoding the data, and invoking prediction. \n",
"The response is similarly funneled backed to the caller with MIME type of `CSV`. \n",
"You are free to choose any or multiple MIME types for your requests and response. However if you choose to do so, it is within this method that we have to transform the back to and from the format that is suitable to passed for prediction."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to byoa/predictor.py\n"
]
}
],
"source": [
"%%writefile -a byoa/predictor.py\n",
"\n",
"\n",
"@app.route('/invocations', methods=['POST'])\n",
"def transformation():\n",
" #Do an inference on a single batch of data\n",
" data = None\n",
"\n",
" # Convert from CSV to pandas\n",
" if flask.request.content_type == 'text/csv':\n",
" data = flask.request.data.decode('utf-8')\n",
" else:\n",
" return flask.Response(response='This predictor only supports CSV data', status=415, mimetype='text/plain')\n",
"\n",
" print('Invoked with {} records'.format(data.count(\",\")+1))\n",
"\n",
" # Do the prediction\n",
" predictions = ScoringService.predict(data)\n",
"\n",
" result = \"\"\n",
" for prediction in predictions:\n",
" result = result + prediction\n",
"\n",
" return flask.Response(response=result, status=200, mimetype='text/csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that in containerizing our custom LSTM Algorithm, where we used `Keras` as our framework of our choice, we did not have to interact directly with the SageMaker API, even though SageMaker API doesn't support `Keras`. \n",
"This serves to show the power and flexibility offered by containerized machine learning pipeline on SageMaker."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Container publishing\n",
"\n",
"In order to host and deploy the trained model using SageMaker, we need to build the `Docker` containers, publish it to `Amazon ECR` repository, and then either use SageMaker console or API to created the endpoint configuration and deploy the stages. \n",
"\n",
"Conceptually, the steps required for publishing are: \n",
"1. Make the`predictor.py` files executable\n",
"2. Create an ECR repository within your default region\n",
"3. Build a docker container with an identifieable name\n",
"4. Tage the image and publish to the ECR repository\n",
" \n",
"However, it is often more convenient to automate these steps. In this notebook we do exactly that using `boto3 SageMaker` API. \n",
"Following are the steps: \n",
" \n",
"* First we create a model hosting definition, by providing the S3 location to the model artifact, and ARN to the ECR image of the container.\n",
"* Using the model hosting definition, our next step is to create configuration of a hosted endpoint that will be used to serve prediciton generation requests. \n",
"* Creating the endpoint is the last step in the ML cycle, that prepares your model to serve client reqests from applications.\n",
"* We wait until the provision is completed and the endpoint in service. At this point we can send request to this endpoint and obtain gender predictions.\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using Role arn:aws:iam::741855114961:role/service-role/AmazonSageMaker-ExecutionRole-20180815T114786\n",
"Model already exists, do you want to delete and create a fresh one (Y/N) ? Y\n",
"arn:aws:sagemaker:us-east-1:741855114961:model/gender-classifier-1-model Created at Thu, 16 Aug 2018 07:52:24 GMT\n"
]
}
],
"source": [
"import sagemaker\n",
"sm_role = sagemaker.get_execution_role()\n",
"print(\"Using Role {}\".format(sm_role))\n",
"acc = boto3.client('sts').get_caller_identity().get('Account')\n",
"reg = boto3.session.Session().region_name\n",
"sagemaker = boto3.client('sagemaker')\n",
"\n",
"#Check if model already exists\n",
"model_name = \"{}-model\".format(run_name)\n",
"models = sagemaker.list_models(NameContains=model_name)['Models']\n",
"model_exists = False\n",
"if len(models) > 0:\n",
" for model in models:\n",
" if model['ModelName'] == model_name:\n",
" model_exists = True\n",
" break\n",
"#Delete model, if chosen\n",
"if model_exists == True: \n",
" choice = input(\"Model already exists, do you want to delete and create a fresh one (Y/N) ? \")\n",
" if choice.upper()[0:1] == \"Y\":\n",
" sagemaker.delete_model(ModelName = model_name)\n",
" model_exists = False\n",
" else:\n",
" print(\"Model - {} already exists\".format(model_name))\n",
"\n",
"if model_exists == False: \n",
" model_response = sagemaker.create_model(\n",
" ModelName=model_name,\n",
" PrimaryContainer={\n",
" 'Image': '{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(acc, reg, run_name),\n",
" 'ModelDataUrl': 's3://{}/model/model.tar.gz'.format(s3bucketname)\n",
" },\n",
" ExecutionRoleArn=sm_role,\n",
" Tags=[\n",
" {\n",
" 'Key': 'Name',\n",
" 'Value': model_name\n",
" }\n",
" ]\n",
" )\n",
" print(\"{} Created at {}\".format(model_response['ModelArn'], \n",
" model_response['ResponseMetadata']['HTTPHeaders']['date']))"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Endpoint Configuration already exists, do you want to delete and create a fresh one (Y/N) ? Y\n",
"arn:aws:sagemaker:us-east-1:741855114961:endpoint-config/gender-classifier-1-endpoint-config Created at Thu, 16 Aug 2018 07:52:27 GMT\n"
]
}
],
"source": [
"#Check if endpoint configuration already exists\n",
"endpoint_config_name = \"{}-endpoint-config\".format(run_name)\n",
"endpoint_configs = sagemaker.list_endpoint_configs(NameContains=endpoint_config_name)['EndpointConfigs']\n",
"endpoint_config_exists = False\n",
"if len(endpoint_configs) > 0:\n",
" for endpoint_config in endpoint_configs:\n",
" if endpoint_config['EndpointConfigName'] == endpoint_config_name:\n",
" endpoint_config_exists = True\n",
" break\n",
" \n",
"#Delete endpoint configuration, if chosen\n",
"if endpoint_config_exists == True: \n",
" choice = input(\"Endpoint Configuration already exists, do you want to delete and create a fresh one (Y/N) ? \")\n",
" if choice.upper()[0:1] == \"Y\":\n",
" sagemaker.delete_endpoint_config(EndpointConfigName = endpoint_config_name)\n",
" endpoint_config_exists = False\n",
" else:\n",
" print(\"Endpoint Configuration - {} already exists\".format(endpoint_config_name))\n",
" \n",
"if endpoint_config_exists == False: \n",
" endpoint_config_response = sagemaker.create_endpoint_config(\n",
" EndpointConfigName=endpoint_config_name,\n",
" ProductionVariants=[\n",
" {\n",
" 'VariantName': 'default',\n",
" 'ModelName': model_name,\n",
" 'InitialInstanceCount': 1,\n",
" 'InstanceType': instance_type,\n",
" 'InitialVariantWeight': 1\n",
" },\n",
" ],\n",
" Tags=[\n",
" {\n",
" 'Key': 'Name',\n",
" 'Value': endpoint_config_name\n",
" }\n",
" ]\n",
" )\n",
" print(\"{} Created at {}\".format(endpoint_config_response['EndpointConfigArn'], \n",
" endpoint_config_response['ResponseMetadata']['HTTPHeaders']['date']))"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Endpoint already exists, do you want to delete and create a fresh one (Y/N) ? Y\n",
"Deleting Endpoint - gender-classifier-1-endpoint ...\n",
"Endpoint - gender-classifier-1-endpoint deleted\n",
"Creating Endpoint : gender-classifier-1-endpoint\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "326ec38feb13474881e87beb608685d1",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"FloatProgress(value=0.0, description='Progress')"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"......................................................................................"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d4aaaed9fc294f0682b58a3be1dd6f08",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HTML(value=' \n",
"You'll need to copy the endpoint name from the output of the cell below, to use in the Lambda function that will send request to this hosted endpoint."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"gender-classifier-1-endpoint\n"
]
}
],
"source": [
"print(endpoint_response\n",
" ['EndpointName'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_tensorflow_p36",
"language": "python",
"name": "conda_tensorflow_p36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
Request serving stack (expand to view diagram)
\n",
"First part of the file would contain the necessary imports, as ususal. "
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting byoa/predictor.py\n"
]
}
],
"source": [
"%%writefile byoa/predictor.py\n",
"# This is the file that implements a flask server to do inferences. It's the file that you will modify to\n",
"# implement the scoring for your own algorithm.\n",
"\n",
"from __future__ import print_function\n",
"\n",
"import os\n",
"import json\n",
"import pickle\n",
"from io import StringIO\n",
"import sys\n",
"import signal\n",
"import traceback\n",
"\n",
"import numpy as np\n",
"\n",
"import keras\n",
"from keras.models import Sequential\n",
"from keras.layers import Dense, Dropout\n",
"from keras.layers import Embedding\n",
"from keras.layers import LSTM\n",
"from keras.models import load_model\n",
"import flask\n",
"\n",
"import tensorflow as tf\n",
"\n",
"import pandas as pd\n",
"\n",
"from os import listdir, sep\n",
"from os.path import abspath, basename, isdir\n",
"from sys import argv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When run within an instantiated container, SageMaker makes the trained model available locally at `/opt/ml`"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to byoa/predictor.py\n"
]
}
],
"source": [
"%%writefile -a byoa/predictor.py\n",
"\n",
"prefix = '/opt/ml/'\n",
"model_path = os.path.join(prefix, 'model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The machinery to produce inference is wrapped around in a Pythonic class structure, within a `Singleton` class, aptly named - `ScoringService`.
\n",
"All of these are conveniently encapsulated inside `build_and_push` script. We simply run it with the unique name of our production run."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter run version: 1\n",
"WARNING! Using --password via the CLI is insecure. Use --password-stdin.\n",
"Login Succeeded\n",
"Sending build context to Docker daemon 25.6kB\n",
"Step 1/13 : FROM ubuntu:16.04\n",
"16.04: Pulling from library/ubuntu\n",
"\n",
"\u001b[1B9e426c26: Pulling fs layer \n",
"\u001b[1Bb260b73b: Pulling fs layer \n",
"\u001b[1B65fd1143: Pulling fs layer \n",
"\u001b[1Ba07f8222: Pulling fs layer \n",
"\u001b[1BDigest: sha256:3097ac92b852f878f802c22a38f97b097b4084dbef82893ba453ba0297d76a6a\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[5A\u001b[1K\u001b[K\u001b[4A\u001b[1K\u001b[K\u001b[3A\u001b[1K\u001b[K\u001b[2A\u001b[1K\u001b[K\u001b[1A\u001b[1K\u001b[K\n",
"Status: Downloaded newer image for ubuntu:16.04\n",
" ---> 7aa3602ab41e\n",
"Step 2/13 : MAINTAINER Binoy Das Endpoint gender-classifier-1-endpoint - InService
')"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from ipywidgets import widgets\n",
"from IPython.display import display\n",
"\n",
"#Check if endpoint already exists\n",
"endpoint_name = \"{}-endpoint\".format(run_name)\n",
"endpoints = sagemaker.list_endpoints(NameContains=endpoint_name)['Endpoints']\n",
"endpoint_exists = False\n",
"if len(endpoints) > 0:\n",
" for endpoint in endpoints:\n",
" if endpoint['EndpointName'] == endpoint_name:\n",
" endpoint_exists = True\n",
" break\n",
" \n",
"#Delete endpoint, if chosen\n",
"if endpoint_exists == True: \n",
" choice = input(\"Endpoint already exists, do you want to delete and create a fresh one (Y/N) ? \")\n",
" if choice.upper()[0:1] == \"Y\":\n",
" sagemaker.delete_endpoint(EndpointName = endpoint_name)\n",
" print(\"Deleting Endpoint - {} ...\".format(endpoint_name))\n",
" waiter = sagemaker.get_waiter('endpoint_deleted')\n",
" waiter.wait(EndpointName=endpoint_name,\n",
" WaiterConfig = {'Delay':1,'MaxAttempts':100})\n",
" endpoint_exists = False\n",
" print(\"Endpoint - {} deleted\".format(endpoint_name))\n",
" \n",
" else:\n",
" print(\"Endpoint - {} already exists\".format(endpoint_name))\n",
" \n",
"if endpoint_exists == False: \n",
"\n",
" endpoint_response = sagemaker.create_endpoint(\n",
" EndpointName=endpoint_name,\n",
" EndpointConfigName=endpoint_config_name,\n",
" Tags=[\n",
" {\n",
" 'Key': 'string',\n",
" 'Value': endpoint_name\n",
" }\n",
" ]\n",
" )\n",
" status='Creating'\n",
" sleep = 3\n",
"\n",
" print(\"{} Endpoint : {}\".format(status,endpoint_name))\n",
" bar = widgets.FloatProgress(min=0, description=\"Progress\") # instantiate the bar\n",
" display(bar) # display the bar\n",
"\n",
" while status != 'InService' and status != 'Failed' and status != 'OutOfService': \n",
" endpoint_response = sagemaker.describe_endpoint(\n",
" EndpointName=endpoint_name\n",
" )\n",
" status = endpoint_response['EndpointStatus']\n",
" time.sleep(sleep)\n",
" bar.value = bar.value + 1 \n",
" if bar.value >= bar.max-1:\n",
" bar.max = int(bar.max*1.05)\n",
" if status != 'InService' and status != 'Failed' and status != 'OutOfService': \n",
" print(\".\", end='')\n",
"\n",
" bar.max = bar.value \n",
" html = widgets.HTML(\n",
" value=\"Endpoint {} - {}
\".format(endpoint_response['EndpointName'], status)\n",
" )\n",
" display(html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At the end we run a quick test to validate we are able to generate meaningful predicitions using the hosted endpoint, as we did locally using the model on the Notebbok instance."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"ContentType\": \"text/csv; charset=utf-8\",\n",
" \"InvokedProductionVariant\": \"default\"\n",
"}\n",
"{\"Sophie\": \"F\", \"Mike\": \"M\", \"Tom\": \"M\", \"Andrew\": \"M\", \"John\": \"M\", \"Amanda\": \"F\", \"Kayla\": \"F\", \"Allie\": \"F\", \"Jim\": \"M\"}"
]
}
],
"source": [
"!aws sagemaker-runtime invoke-endpoint --endpoint-name \"$run_name-endpoint\" --body 'Tom,Allie,Jim,Sophie,John,Kayla,Mike,Amanda,Andrew' --content-type text/csv outfile\n",
"!cat outfile"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Head back to Module-3 of the workshop now, to the section titled - `Integration`, and follow the steps described.