{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# SageMaker Pipelines Customer Churn Prediction using Contact Centre Data\n", "\n", "---------------------------------\n", "`This notebook should work well with the Python 3 (Data Scienct) kernel in SageMaker Studio`\n", "\n", "---------------------------------\n", "\n", "Amazon SageMaker Model Building Pipelines offers machine learning (ML) application developers and operations engineers the ability to orchestrate SageMaker jobs and author reproducible ML pipelines. It also enables them to deploy custom-build models for inference in real-time with low latency, run offline inferences with Batch Transform, and track lineage of artifacts. They can institute sound operational practices in deploying and monitoring production workflows, deploying model artifacts, and tracking artifact lineage through a simple interface, adhering to safety and best practice paradigms for ML application development.\n", "\n", "The SageMaker Pipelines service supports a SageMaker Pipeline domain specific language (DSL), which is a declarative JSON specification. This DSL defines a directed acyclic graph (DAG) of pipeline parameters and SageMaker job steps. The SageMaker Python Software Developer Kit (SDK) streamlines the generation of the pipeline DSL using constructs that engineers and scientists are already familiar with." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SageMaker Pipelines\n", "\n", "SageMaker Pipelines supports the following activities, which are demonstrated in this notebook:\n", "\n", "* Pipelines - A DAG of steps and conditions to orchestrate SageMaker jobs and resource creation.\n", "* Processing job steps - A simplified, managed experience on SageMaker to run data processing workloads, such as feature engineering, data validation, model evaluation, and model interpretation.\n", "* Training job steps - An iterative process that teaches a model to make predictions by presenting examples from a training dataset.\n", "* Conditional execution steps - A step that provides conditional execution of branches in a pipeline.\n", "* Register model steps - A step that creates a model package resource in the Model Registry that can be used to create deployable models in Amazon SageMaker.\n", "* Create model steps - A step that creates a model for use in transform steps or later publication as an endpoint.\n", "* Clarify steps - A ClarifyCheck step that conduct model explainability check which launches a processing job that runs the SageMaker Clarify prebuilt container.\n", "* Parametrized Pipeline executions - Enables variation in pipeline executions according to specified parameters." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solution\n", "\n", "This notebook builds the solution from the following SageMaker Pipelines:\n", "\n", "1. A feature ingestion pipeline that processes the contact centre dataset with a Data Wrangler flow and ingests the result into SageMaker Feature Store.\n", "2. A model build pipeline that creates train, validation, and test datasets from the feature store, trains and evaluates an XGBoost customer churn model, runs a SageMaker Clarify explainability check, and conditionally registers the model in the Model Registry.\n", "3. A batch inference pipeline that looks up the latest approved model version and scores data from the offline feature store with Batch Transform." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Environment Setup\n", "\n", "Note:\n", "\n", "The following policies need to be attached to the execution role that you use to run this notebook:\n", "\n", "* AmazonSageMakerFullAccess\n", "* AmazonSageMakerFeatureStoreAccess\n", "* AmazonS3FullAccess\n", "\n", "Import libraries, set up logging, and define a few variables" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import json\n", "import os\n", "import logging\n", "from pathlib import Path\n", "\n", "import boto3\n", "import sagemaker\n", "from sagemaker.session import Session\n", "from sagemaker import get_execution_role\n", "from sagemaker.feature_store.feature_definition import FeatureDefinition\n", "from sagemaker.feature_store.feature_definition import FeatureTypeEnum\n", "from sagemaker.feature_store.feature_group import FeatureGroup\n", "\n", "from features_ingestion_pipeline.feature_ingestion_pipeline import create_pipeline\n", "from build_pipeline.model_build_pipeline import get_pipeline\n", "from batch_pipeline.batch_transform_pipeline import get_batch_pipeline\n", "\n", "from time import gmtime, strftime\n", "import time\n", "import uuid" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "role = get_execution_role()\n", "\n", "region = boto3.Session().region_name\n", "boto_session = boto3.Session(region_name=region)\n", "sagemaker_session = sagemaker.Session()\n", "\n", "sagemaker_client = boto_session.client(service_name='sagemaker', region_name=region)\n", "featurestore_runtime = boto_session.client(service_name='sagemaker-featurestore-runtime', region_name=region)\n", "\n", "# You can configure this with your own bucket name, e.g.\n", "# bucket = \n", "bucket = sagemaker_session.default_bucket()\n", "prefix = 'DEMO-xgboost-customer-churn-connect'\n", "base_job_prefix = 'Demo-xgboost-churn-connect'\n", "\n", "s3_client = boto3.client(\"s3\")\n", "s3_uploader = sagemaker.s3.S3Uploader\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Set up a logger" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logger = logging.getLogger(__name__)\n", "logger.setLevel(logging.INFO)\n", "logger.addHandler(logging.StreamHandler())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Upload the local data directory to S3\n", "local_path = Path(\"data\")\n", "data_uri_prefix = s3_uploader.upload(local_path.as_posix(), f\"s3://{bucket}/{prefix}/data\")\n", "input_data_url = data_uri_prefix + \"/dataset.csv\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%store input_data_url" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%store\n", "%store -r" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up SageMaker Feature Store\n", "\n", "SageMaker Feature Store is a SageMaker capability that makes it easy for customers to create and manage curated features for machine learning (ML) development. It serves as the single source of truth to store, retrieve, remove, track, share, discover, and control access to features. 
SageMaker Feature Store enables data ingestion via a high TPS API and data consumption via the online and offline stores.\n", "\n", "### Terminology\n", "* `Feature group` – A FeatureGroup is the main Feature Store resource that contains the metadata for all the data stored in Amazon SageMaker Feature Store. A feature group is a logical grouping of features, defined in the feature store, to describe records. A feature group’s definition is composed of a list of feature definitions, a record identifier name, and configurations for its online and offline store. \n", "\n", "* `Feature definition` – A FeatureDefinition consists of a name and one of the following data types: Integral, String, or Fractional. A FeatureGroup contains a list of feature definitions. \n", "\n", "* `Record identifier name` – Each feature group is defined with a record identifier name. The record identifier name must refer to one of the names of a feature defined in the feature group's feature definitions. \n", "\n", "* `Event time` – a point in time when a new event occurs that corresponds to the creation or update of a record in a feature group. All records in the feature group must have a corresponding EventTime. It can be used to track changes to a record over time. The online store contains the record corresponding to the latest EventTime for a record identifier name, whereas the offline store contains all historic records.\n", "\n", "* `Online Store` – the low latency, high availability cache for a feature group that enables real-time lookup of records. The online store allows quick access to the latest value for a Record via the GetRecord API. A feature group contains an OnlineStoreConfig controlling where the data is stored.\n", "\n", "* `Offline store` – the OfflineStore stores historical data in your S3 bucket. It is used when low (sub-second) latency reads are not needed, for example when you want to store and serve features for exploration, model training, and batch inference. A feature group contains an OfflineStoreConfig controlling where the data is stored." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define Feature Group\n", "Select the Record identifier and Event time feature name. These are required parameters for feature group creation.\n", "* **Record identifier name** is the name of the feature, defined in the feature group's feature definitions, whose value uniquely identifies a Record.\n", "* **Event time feature name** is the name of the EventTime feature of a Record in a FeatureGroup. An EventTime is a timestamp that represents the point in time when a new event occurs that corresponds to the creation or update of a Record in the FeatureGroup. All Records in the FeatureGroup must have a corresponding EventTime.\n", "\n", "💡 Record identifier and Event time feature name are required for feature group creation. After filling in the values, you can choose Run Selected Cell and All Below from the Run menu in the menu bar.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "record_identifier_feature_name = \"customerID\"\n", "if record_identifier_feature_name is None:\n", " raise SystemExit(\"Select a column name as the feature group record identifier.\")\n", "\n", "event_time_feature_name = \"event_time\"\n", "if event_time_feature_name is None:\n", " raise SystemExit(\"Select a column name as the event time feature name.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Feature Definitions\n", "The following is a list of the feature names and feature types of the final dataset that will be produced \n", "when your data flow is used to process your input dataset. These are automatically generated from the \n", "step `Custom Pyspark` from `Source: Answers.Csv`. To save from a different step, go to Data Wrangler to \n", "select a new step to export.\n", "\n", "
💡 Configurable Settings \n", "\n", "1. You can select a subset of the features. By default all columns of the result dataframe will be used as \n", "features.\n", "2. You can change the Data Wrangler data type to one of the Feature Store supported types \n", "(Integral, Fractional, or String). The default type is set to String. \n", "This means that, if a column in your dataset is not a float or long type, it will default \n", "to String in your Feature Store.\n", "\n", "For Event Time features, make sure the format follows the feature store\n", "\n", " \n", " Event Time feature format\n", " \n", "\n", "
\n", "The following is a list of the feature names and data types of the final dataset that will be produced when your data flow is used to process your input dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "column_schemas = [\n", " {\n", " \"name\": \"Churn_true\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"Account_Length\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"customerID\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"VMail_Message\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"Day_Mins\",\n", " \"type\": \"float\"\n", " },\n", " {\n", " \"name\": \"Day_Calls\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"Eve_Mins\",\n", " \"type\": \"float\"\n", " },\n", " {\n", " \"name\": \"Eve_Calls\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"Night_Mins\",\n", " \"type\": \"float\"\n", " },\n", " {\n", " \"name\": \"Night_Calls\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"Intl_Mins\",\n", " \"type\": \"float\"\n", " },\n", " {\n", " \"name\": \"Intl_Calls\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"CustServ_Calls\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"pastSenti_nut\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"pastSenti_pos\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"pastSenti_neg\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"mth_remain\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"Int_l_Plan_no\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"Int_l_Plan_yes\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"VMail_Plan_no\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"VMail_Plan_yes\",\n", " \"type\": \"long\"\n", " },\n", " {\n", " \"name\": \"event_time\",\n", " \"type\": \"float\"\n", " }\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below we create the SDK input for those feature definitions. Some schema types in Data Wrangler are not \n", "supported by Feature Store. The following will create a default_FG_type set to String for these types." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "default_feature_type = FeatureTypeEnum.STRING\n", "column_to_feature_type_mapping = {\n", " \"float\": FeatureTypeEnum.FRACTIONAL,\n", " \"long\": FeatureTypeEnum.INTEGRAL\n", "}\n", "\n", "feature_definitions = [\n", " FeatureDefinition(\n", " feature_name=column_schema['name'], \n", " feature_type=column_to_feature_type_mapping.get(column_schema['type'], default_feature_type)\n", " ) for column_schema in column_schemas\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configure Feature Group\n", "\n", "
💡 Configurable Settings \n", "\n", "1. feature_group_name: name of the feature group.\n", "1. feature_store_offline_s3_uri: SageMaker FeatureStore writes the data in the OfflineStore of a FeatureGroup to a S3 location owned by you.\n", "1. enable_online_store: controls if online store is enabled. Enabling the online store allows quick access to the latest value for a Record via the GetRecord API.\n", "1. iam_role: IAM role for executing the processing job.\n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# flow name and an unique ID for this export (used later as the processing job name for the export)\n", "flow_name = \"contact-center-data\"\n", "flow_export_id = f\"{strftime('%d-%H-%M-%S', gmtime())}-{str(uuid.uuid4())[:8]}\"\n", "flow_export_name = f\"flow-{flow_export_id}\"\n", "\n", "# feature group name, with flow_name and an unique id. You can give it a customized name\n", "feature_group_name = f\"fg-{flow_name}-{str(uuid.uuid4())[:8]}\"\n", "print(f\"Feature Group Name: {feature_group_name}\")\n", "\n", "# SageMaker FeatureStore writes the data in the OfflineStore of a FeatureGroup to a \n", "# S3 location owned by you.\n", "feature_store_offline_s3_uri = 's3://' + bucket\n", "\n", "# controls if online store is enabled. Enabling the online store allows quick access to \n", "# the latest value for a Record via the GetRecord API.\n", "enable_online_store = True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Initialize & Create Feature Group" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature_store_session = Session(\n", " boto_session=boto_session,\n", " sagemaker_client=sagemaker_client,\n", " sagemaker_featurestore_runtime_client=featurestore_runtime\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Feature group is initialized and created below\n", "feature_group = FeatureGroup(\n", " name=feature_group_name, sagemaker_session=feature_store_session, feature_definitions=feature_definitions)\n", "\n", "feature_group.create(\n", " s3_uri=feature_store_offline_s3_uri,\n", " record_identifier_name=record_identifier_feature_name,\n", " event_time_feature_name=event_time_feature_name,\n", " role_arn=role,\n", " enable_online_store=enable_online_store\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def wait_for_feature_group_creation_complete(feature_group):\n", " \"\"\"Helper function to wait for the completions of creating a feature group\"\"\"\n", " response = feature_group.describe()\n", " status = response.get(\"FeatureGroupStatus\")\n", " while status == \"Creating\":\n", " print(\"Waiting for Feature Group Creation\")\n", " time.sleep(5)\n", " response = feature_group.describe()\n", " status = response.get(\"FeatureGroupStatus\")\n", "\n", " if status != \"Created\":\n", " print(f\"Failed to create feature group, response: {response}\")\n", " failureReason = response.get(\"FailureReason\", \"\")\n", " raise SystemExit(\n", " f\"Failed to create feature group {feature_group.name}, status: {status}, reason: {failureReason}\"\n", " )\n", " print(f\"FeatureGroup {feature_group.name} successfully created.\")\n", "\n", "wait_for_feature_group_creation_complete(feature_group=feature_group)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that the feature group is created, You will create a feature ingestion pipeline to run a processing job to process your \n", " data at scale and ingest the transformed data into this feature group." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature_group_name = feature_group.name\n", "%store feature_group_name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature Ingestion Pipeline Overview\n", "\n", "The feature ingestion pipeline shows how to:\n", "\n", "* Define a set of Pipeline parameters that can be used to parametrize a SageMaker Pipeline.\n", "* Define a Processing step that uses the Data Wrangler processor to process the input data based on the Data Wrangler flow file.\n", "* Define and create a Pipeline definition in a DAG, with the defined parameters and steps.\n", "\n", "Please see `features_ingestion_pipeline/feature_ingestion_pipeline.py` for details.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "flow_file_path = 'features_ingestion_pipeline/contact-center-data.flow'\n", "fg_ingest_pipeline_name = f\"demo-feature-ingestion-pipeline-{strftime('%d-%m', gmtime())}\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature_ingestion_pipeline = create_pipeline(\n", "    role,\n", "    fg_ingest_pipeline_name,\n", "    sagemaker_session=sagemaker_session,\n", "    flow_file_path=flow_file_path,\n", "    feature_group_name=feature_group_name\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit the pipeline definition to the SageMaker Pipelines service\n", "Note: If an existing pipeline has the same name, it will be overwritten." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature_ingestion_pipeline.upsert(role_arn=role)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### View the entire pipeline definition\n", "Viewing the pipeline definition with all the string variables interpolated may help debug the pipeline. It is commented out here due to length."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# json.loads(feature_ingestion_pipeline.describe()['PipelineDefinition'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "parameters = {\n", "    \"InstanceType\": \"ml.m5.4xlarge\",\n", "    \"InputDataUrl\": input_data_url\n", "}\n", "execution = feature_ingestion_pipeline.start(parameters=parameters)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "execution.wait()\n", "execution.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![feature ingestion pipeline](img/feature-ingestion-pipeline.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Build Pipeline Overview\n", "\n", "The model build pipeline shows how to:\n", "\n", "* Define a set of Pipeline parameters that can be used to parametrize a SageMaker Pipeline.\n", "* Define a Processing step that extracts data from the feature store to create the train, validation, and test datasets.\n", "* Define a Training step that trains a model on the preprocessed train dataset.\n", "* Define a Processing step that evaluates the trained model's performance on the test dataset.\n", "* Define a Create Model step that creates a model from the model artifacts used in training.\n", "* Define a Clarify check step that performs a model explainability check.\n", "* Define a Conditional step that evaluates a condition based on output from prior steps and conditionally executes other steps (a minimal sketch of this pattern follows the Preparation section below).\n", "* Define a Register Model step that creates a model package from the estimator and model artifacts used to train the model.\n", "* Define and create a Pipeline definition in a DAG, with the defined parameters and steps.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A SageMaker Pipeline\n", "\n", "The pipeline that you create follows a typical machine learning (ML) application pattern of preprocessing, training, evaluation, model creation, and model registration.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparation\n", "\n", "\n", "Let's start by specifying:\n", "\n", "- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.\n", "- The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role ARN string(s)."
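 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before building the full pipeline, here is the minimal sketch of the conditional-registration pattern referenced in the overview above. It is illustrative only and is not executed as part of this demo: the step name, property-file layout, JSON path, and threshold are placeholders, and the actual implementation lives in `build_pipeline/model_build_pipeline.py`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only: a condition step that gates model registration on an\n", "# evaluation metric. The step name, property file, JSON path, and threshold are\n", "# placeholders; see build_pipeline/model_build_pipeline.py for the real pipeline.\n", "from sagemaker.workflow.condition_step import ConditionStep\n", "from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo\n", "from sagemaker.workflow.functions import JsonGet\n", "from sagemaker.workflow.properties import PropertyFile\n", "\n", "# A property file exposes the evaluation step's JSON report to later steps.\n", "evaluation_report = PropertyFile(\n", "    name=\"EvaluationReport\", output_name=\"evaluation\", path=\"evaluation.json\"\n", ")\n", "\n", "# Compare a metric from the evaluation report against a fixed threshold.\n", "metric_condition = ConditionGreaterThanOrEqualTo(\n", "    left=JsonGet(\n", "        step_name=\"EvaluateChurnModel\",  # placeholder evaluation step name\n", "        property_file=evaluation_report,\n", "        json_path=\"classification_metrics.auc.value\",  # placeholder path\n", "    ),\n", "    right=0.75,  # placeholder threshold\n", ")\n", "\n", "# In the real pipeline, the register-model and create-model steps go in if_steps.\n", "sketch_condition_step = ConditionStep(\n", "    name=\"CheckEvaluationMetric\", conditions=[metric_condition], if_steps=[], else_steps=[]\n", ")"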
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with open('data_meta.json') as file:\n", "    dataset_dict = json.load(file)\n", "dataset_dict" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "script_create_dataset = './build_pipeline/create_dataset.py'\n", "script_evaluation = './build_pipeline/evaluation.py'\n", "pipeline_build_name = f\"demo-customer-churn-build-pipeline-{strftime('%d-%m', gmtime())}\"\n", "mpg_name = \"ChurnModelPackage\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline_build = get_pipeline(\n", "    role,\n", "    pipeline_build_name,\n", "    sagemaker_session=sagemaker_session,\n", "    base_job_prefix=base_job_prefix,\n", "    bucket=bucket,\n", "    prefix=prefix,\n", "    label_name=dataset_dict[\"label_name\"],\n", "    features_names=dataset_dict[\"features_names\"],\n", "    model_package_group_name=mpg_name,\n", "    customers_fg_name=feature_group_name,\n", "    script_create_dataset=script_create_dataset,\n", "    script_evaluation=script_evaluation,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline_build.upsert(role_arn=role)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# json.loads(pipeline_build.describe()['PipelineDefinition'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "build_execution = pipeline_build.start()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "build_execution.wait()\n", "build_execution.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![model build pipeline](img/model-build-pipeline.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Lineage\n", "\n", "Review the lineage of the artifacts generated by the pipeline." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import time\n", "from sagemaker.lineage.visualizer import LineageTableVisualizer\n", "\n", "\n", "viz = LineageTableVisualizer(sagemaker.session.Session())\n", "for execution_step in reversed(build_execution.list_steps()):\n", "    print(execution_step)\n", "    display(viz.show(pipeline_execution_step=execution_step))\n", "    time.sleep(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Batch Inference\n", "\n", "Perform a batch transform using the approved model and data stored in the offline feature store. You can choose to use a lambda step or a callback step to retrieve information about the latest approved model artifacts. More details can be found in this [GitHub repo](https://github.com/aws-samples/sagemaker-pipelines-callback-step-for-batch-transform). In this example, we will use a lambda step to get the latest approved model version from the model registry.\n", "\n", "![batch pipeline](img/batch_pipeline.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before using the latest model version registered in the model registry, we need to make sure the batch transform is using an approved model version. In this example, we will manually approve the model version."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "list_model_packages_response = sagemaker_client.list_model_packages(\n", "    ModelPackageGroupName=mpg_name\n", ")\n", "model_version_arn = list_model_packages_response[\"ModelPackageSummaryList\"][0][\n", "    \"ModelPackageArn\"\n", "]\n", "print(model_version_arn)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "describe_response = sagemaker_client.describe_model_package(ModelPackageName=model_version_arn)\n", "if describe_response['ModelApprovalStatus'] != 'Approved':\n", "    model_package_update_input_dict = {\n", "        \"ModelPackageArn\": model_version_arn,\n", "        \"ModelApprovalStatus\": \"Approved\",\n", "    }\n", "    model_package_update_response = sagemaker_client.update_model_package(**model_package_update_input_dict)\n", "    print(f\"Updated model package {model_version_arn} to Approved!\")\n", "else:\n", "    print(\"The latest model version is already approved\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setting up the custom IAM Role\n", "\n", "The Lambda function needs an IAM role that allows it to call the SageMaker APIs it uses (in this example, to look up the latest approved model package in the Model Registry). The role ARN must be provided in the LambdaStep.\n", "\n", "A helper function in `iam_helper.py` is available to create the Lambda function role. Please note that the role uses the AWS managed policy `AmazonSageMakerFullAccess`. This should be replaced with a least-privilege IAM policy, as per AWS IAM best practices." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Re-import the batch pipeline module in case it has been edited since the initial import\n", "import sys, importlib\n", "importlib.reload(sys.modules['batch_pipeline.batch_transform_pipeline'])\n", "from batch_pipeline.batch_transform_pipeline import get_batch_pipeline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from iam_helper import create_lambda_role\n", "\n", "lambda_role = create_lambda_role(\"lambda-deployment-role\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "current_time = time.strftime(\"%m-%d-%H-%M-%S\", time.localtime())\n", "lambda_function_name = f\"lambdaBatchTransformPipelineLambda_{current_time}\"\n", "pipeline_batch_name = \"demo-customer-churn-batch-pipeline\"\n", "lambda_script = \"./batch_pipeline/lambda_step_code.py\"\n", "lambda_handler = \"lambda_step_code.handler\"\n", "script_create_batch_dataset = \"./batch_pipeline/create_batch_inference_dataset.py\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline_batch = get_batch_pipeline(\n", "    role,\n", "    pipeline_batch_name,\n", "    sagemaker_session=sagemaker_session,\n", "    base_job_prefix=base_job_prefix,\n", "    bucket=bucket,\n", "    prefix=prefix,\n", "    features_names=dataset_dict[\"features_names\"],\n", "    model_package_group_name=mpg_name,\n", "    customers_fg_name=feature_group_name,\n", "    script_create_batch_dataset=script_create_batch_dataset,\n", "    lambda_role=lambda_role,\n", "    lambda_function_name=lambda_function_name,\n", "    lambda_script=lambda_script,\n", "    lambda_handler=lambda_handler\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline_batch.upsert(role_arn=role)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "batch_execution = pipeline_batch.start()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [],
"source": [ "batch_execution.wait()\n", "batch_execution.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Clean Up Resources\n", "\n", "After running the demo, you should remove the resources which were created. You can also delete all the objects in the project's S3 directory by passing the keyword argument `delete_s3_objects=True`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from demo_helper import delete_project_resources" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "delete_project_resources(\n", " sagemaker_boto_client=sagemaker_client,\n", " pipeline_name=pipeline_build_name,\n", " mpg_name=mpg_name,\n", " prefix=prefix,\n", " fg_name=feature_group_name,\n", " delete_s3_objects=False,\n", " bucket_name=bucket)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# GitHub Resource\n", "This demo is available on GitHub: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }