{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Orchestrating Jobs, Model Registration, and Continuous Deployment with Amazon SageMaker\n", "\n", "Amazon SageMaker offers Machine Learning application developers and Machine Learning operations engineers the ability to orchestrate SageMaker jobs and author reproducible Machine Learning pipelines, deploy custom-build models for inference in real-time with low latency or offline inferences with Batch Transform, and track lineage of artifacts. You can institute sound operational practices in deploying and monitoring production workflows, deployment of model artifacts, and track artifact lineage through a simple interface, adhering to safety and best-practice paradigms for Machine Learning application development.\n", "\n", "The SageMaker Pipelines service supports a SageMaker Machine Learning Pipeline Domain Specific Language (DSL), which is a declarative Json specification. This DSL defines a Directed Acyclic Graph (DAG) of pipeline parameters and SageMaker job steps. The SageMaker Python Software Developer Kit (SDK) streamlines the generation of the pipeline DSL using constructs that are already familiar to engineers and scientists alike.\n", "\n", "The SageMaker Model Registry is where trained models are stored, versioned, and managed. Data Scientists and Machine Learning Engineers can compare model versions, approve models for deployment, and deploy models from different AWS accounts, all from a single Model Registry. SageMaker enables customers to follow the best practices with ML Ops and getting started right. Customers are able to standup a full ML Ops end-to-end system with a single API call." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SageMaker Pipelines\n", "\n", "Amazon SageMaker Pipelines support the following activites:\n", "\n", "* Pipelines - A Directed Acyclic Graph of steps and conditions to orchestrate SageMaker jobs and resource creation.\n", "* Processing Job steps - A simplified, managed experience on SageMaker to run data processing workloads, such as feature engineering, data validation, model evaluation, and model interpretation.\n", "* Training Job steps - An iterative process that teaches a model to make predictions by presenting examples from a training dataset.\n", "* Conditional step execution - Provides conditional execution of branches in a pipeline.\n", "* Registering Models - Creates a model package resource in the Model Registry that can be used to create deployable models in Amazon SageMaker.\n", "* Creating Model steps - Create a model for use in transform steps or later publication as an endpoint.\n", "* Parameterized Pipeline executions - Allows pipeline executions to vary by supplied parameters.\n", "* Transform Job steps - A batch transform to preprocess datasets to remove noise or bias that interferes with training or inference from your dataset, get inferences from large datasets, and run inference when you don't need a persistent endpoint." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Layout of the SageMaker ImageBuild ModelBuild ModelDeploy Project Template\n", "\n", "The template provides a starting point for bringing your SageMaker Pipeline development to production.\n", "\n", "```\n", "|-- codebuild-buildspec.yml\n", "|-- CONTRIBUTING.md\n", "|-- pipelines\n", "| |-- abalone\n", "| | |-- evaluate.py\n", "| | |-- __init__.py\n", "| | |-- pipeline.py\n", "| | `-- preprocess.py\n", "| |-- get_pipeline_definition.py\n", "| |-- __init__.py\n", "| |-- run_pipeline.py\n", "| |-- _utils.py\n", "| `-- __version__.py\n", "|-- README.md\n", "|-- sagemaker-pipelines-project.ipynb\n", "|-- setup.cfg\n", "|-- setup.py\n", "|-- tests\n", "| `-- test_pipelines.py\n", "`-- tox.ini\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A description of some of the artifacts is provided below:\n", "

\n", "Your codebuild execution instructions:\n", "```\n", "|-- codebuild-buildspec.yml\n", "```\n", "

\n", "Your pipeline artifacts, which includes a pipeline module defining the required `get_pipeline` method that returns an instance of a SageMaker pipeline, a preprocessing script that is used in feature engineering, and a model evaluation script to measure the Mean Squared Error of the model that's trained by the pipeline:\n", "\n", "```\n", "|-- pipelines\n", "| |-- abalone\n", "| | |-- evaluate.py\n", "| | |-- __init__.py\n", "| | |-- pipeline.py\n", "| | `-- preprocess.py\n", "\n", "```\n", "

\n", "Utility modules for getting pipeline definition jsons and running pipelines:\n", "\n", "```\n", "|-- pipelines\n", "| |-- get_pipeline_definition.py\n", "| |-- __init__.py\n", "| |-- run_pipeline.py\n", "| |-- _utils.py\n", "| `-- __version__.py\n", "```\n", "

\n", "Python package artifacts:\n", "```\n", "|-- setup.cfg\n", "|-- setup.py\n", "```\n", "

\n", "A stubbed testing module for testing your pipeline as you develop:\n", "```\n", "|-- tests\n", "| `-- test_pipelines.py\n", "```\n", "

\n", "The `tox` testing framework configuration:\n", "```\n", "`-- tox.ini\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### A SageMaker Pipeline\n", "\n", "The pipeline that we create follows a typical Machine Learning Application pattern of pre-processing, training, evaluation, and conditional model registration and publication, if the quality of the model is sufficient.\n", "\n", "Also, we are allowed to create the image for processing/ training/ inference containers, depending on which the SDK utilizes the Image URI to perform the model building.\n", "\n", "![A typical ML Application pipeline](img/pipeline-full.png)\n", "\n", "### Getting some constants\n", "\n", "We get some constants from the local execution environment." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import sagemaker\n", "\n", "\n", "region = boto3.Session().region_name\n", "role = sagemaker.get_execution_role()\n", "default_bucket = sagemaker.session.Session().default_bucket()\n", "\n", "# Change these to reflect your project/business name or if you want to separate ModelPackageGroup/Pipeline from the rest of your team\n", "model_package_group_name = f\"PaddleOCRModelPackageGroup-Example\"\n", "pipeline_name = f\"PaddleOCRPipeline-Example\"" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Completed 138 Bytes/568 Bytes (1.8 KiB/s) with 2 file(s) remaining\r", "upload: input/data/test.txt to s3://sagemaker-us-east-1-230755935769/PaddleOCR/input/data/test.txt\r\n", "Completed 138 Bytes/568 Bytes (1.8 KiB/s) with 1 file(s) remaining\r", "Completed 568 Bytes/568 Bytes (7.5 KiB/s) with 1 file(s) remaining\r", "upload: input/data/train.txt to s3://sagemaker-us-east-1-230755935769/PaddleOCR/input/data/train.txt\r\n" ] } ], "source": [ "!aws s3 cp --recursive input s3://$default_bucket/PaddleOCR/input" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get the pipeline instance\n", "\n", "Here we get the pipeline instance from your pipeline module so that we can work with it." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from paddleocr.pipeline import get_pipeline\n", "\n", "\n", "pipeline = get_pipeline(\n", " region=region,\n", " role=role,\n", " default_bucket=default_bucket,\n", " model_package_group_name=model_package_group_name,\n", " pipeline_name=pipeline_name,\n", " base_job_prefix=\"PaddleOCR\",\n", " project_id=\"SageMakerProjectId\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit the pipeline to SageMaker and start execution\n", "\n", "Let's submit our pipeline definition to the workflow service. The role passed in will be used by the workflow service to create all the jobs defined in the steps." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "No finished training job found associated with this estimator. Please make sure this estimator is only used for building workflow config\n", "No finished training job found associated with this estimator. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "### Submit the pipeline to SageMaker and start execution\n", "\n", "Let's submit our pipeline definition to the workflow service. The role passed in will be used by the workflow service to create all the jobs defined in the steps." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "No finished training job found associated with this estimator. Please make sure this estimator is only used for building workflow config\n", "No finished training job found associated with this estimator. Please make sure this estimator is only used for building workflow config\n" ] }, { "data": { "text/plain": [ "{'PipelineArn': 'arn:aws:sagemaker:us-east-1:230755935769:pipeline/paddleocrpipeline-example',\n", " 'ResponseMetadata': {'RequestId': '2ca4e0ad-aca7-4e0b-bf8a-b4189c6cd9eb',\n", " 'HTTPStatusCode': 200,\n", " 'HTTPHeaders': {'x-amzn-requestid': '2ca4e0ad-aca7-4e0b-bf8a-b4189c6cd9eb',\n", " 'content-type': 'application/x-amz-json-1.1',\n", " 'content-length': '93',\n", " 'date': 'Wed, 11 May 2022 02:47:59 GMT'},\n", " 'RetryAttempts': 0}}" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pipeline.upsert(role_arn=role)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll start the pipeline, accepting all the default parameters.\n", "\n", "Values can also be passed to these pipeline parameters when starting the pipeline; this is covered later." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "execution = pipeline.start(\n", "    parameters=dict(\n", "        InputDataUrl=f\"s3://{default_bucket}/PaddleOCR/input/data\"\n", "    )\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Pipeline Operations: examining and waiting for pipeline execution\n", "\n", "Now we describe the execution instance and list its steps to find out more about the execution." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'PipelineArn': 'arn:aws:sagemaker:us-east-1:230755935769:pipeline/paddleocrpipeline-example',\n", " 'PipelineExecutionArn': 'arn:aws:sagemaker:us-east-1:230755935769:pipeline/paddleocrpipeline-example/execution/zp2yqfkq3vmq',\n", " 'PipelineExecutionDisplayName': 'execution-1652237280174',\n", " 'PipelineExecutionStatus': 'Executing',\n", " 'CreationTime': datetime.datetime(2022, 5, 11, 2, 48, 0, 91000, tzinfo=tzlocal()),\n", " 'LastModifiedTime': datetime.datetime(2022, 5, 11, 2, 48, 0, 91000, tzinfo=tzlocal()),\n", " 'CreatedBy': {},\n", " 'LastModifiedBy': {},\n", " 'ResponseMetadata': {'RequestId': 'b921bdc6-451e-40b7-a7fc-35749670958a',\n", " 'HTTPStatusCode': 200,\n", " 'HTTPHeaders': {'x-amzn-requestid': 'b921bdc6-451e-40b7-a7fc-35749670958a',\n", " 'content-type': 'application/x-amz-json-1.1',\n", " 'content-length': '415',\n", " 'date': 'Wed, 11 May 2022 02:47:59 GMT'},\n", " 'RetryAttempts': 0}}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "execution.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can wait for the execution by invoking `wait()` on the execution:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "execution.wait()" ] }
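, { "cell_type": "markdown", "metadata": {}, "source": [ "If an execution runs longer than the default waiter allows, polling controls can be passed to `wait()` (a sketch; the `delay`/`max_attempts` arguments are assumed from recent versions of the SageMaker Python SDK, so check your installed version):\n", "\n", "```python\n", "# Poll every 60 seconds, for up to 120 attempts (about two hours),\n", "# before the waiter gives up.\n", "execution.wait(delay=60, max_attempts=120)\n", "```" ] }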
, { "cell_type": "markdown", "metadata": {}, "source": [ "We can list the execution steps to check out the status and artifacts:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'StepName': 'PaddleOCRAccuracyCond',\n", " 'StartTime': datetime.datetime(2022, 5, 11, 3, 5, 44, 522000, tzinfo=tzlocal()),\n", " 'EndTime': datetime.datetime(2022, 5, 11, 3, 5, 45, 21000, tzinfo=tzlocal()),\n", " 'StepStatus': 'Succeeded',\n", " 'AttemptCount': 0,\n", " 'Metadata': {'Condition': {'Outcome': 'False'}}},\n", " {'StepName': 'TrainPaddleOCRModel',\n", " 'StartTime': datetime.datetime(2022, 5, 11, 2, 53, 0, 655000, tzinfo=tzlocal()),\n", " 'EndTime': datetime.datetime(2022, 5, 11, 3, 5, 43, 524000, tzinfo=tzlocal()),\n", " 'StepStatus': 'Succeeded',\n", " 'AttemptCount': 0,\n", " 'Metadata': {'TrainingJob': {'Arn': 'arn:aws:sagemaker:us-east-1:230755935769:training-job/pipelines-zp2yqfkq3vmq-trainpaddleocrmodel-qmcdf0ueef'}}},\n", " {'StepName': 'GenerateOCRTrainingData',\n", " 'StartTime': datetime.datetime(2022, 5, 11, 2, 48, 1, 595000, tzinfo=tzlocal()),\n", " 'EndTime': datetime.datetime(2022, 5, 11, 2, 52, 59, 853000, tzinfo=tzlocal()),\n", " 'StepStatus': 'Succeeded',\n", " 'AttemptCount': 0,\n", " 'Metadata': {'ProcessingJob': {'Arn': 'arn:aws:sagemaker:us-east-1:230755935769:processing-job/pipelines-zp2yqfkq3vmq-generateocrtrainingd-wzlqhrlcat'}}}]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "execution.list_steps()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Parameterized Executions\n", "\n", "We can run additional executions of the pipeline, specifying different pipeline parameters. The `parameters` argument is a dictionary whose keys are the parameter names and whose values are the primitive values to use as overrides of the defaults.\n", "\n", "Of particular note, based on the performance of the model, we may want to kick off another pipeline execution, this time on a GPU-accelerated instance type, with the model approval status automatically set to \"Approved\". This means that the model package version generated by the `RegisterModel` step will automatically be ready for deployment through CI/CD pipelines, such as with SageMaker Projects." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "execution = pipeline.start(\n", "    parameters=dict(\n", "        TrainingInstanceType=\"ml.p3.2xlarge\",\n", "        ModelApprovalStatus=\"Approved\",\n", "    )\n", ")" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "execution.wait()" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'StepName': 'PaddleOCRAccuracyCond',\n", " 'StartTime': datetime.datetime(2022, 5, 11, 3, 21, 20, 219000, tzinfo=tzlocal()),\n", " 'EndTime': datetime.datetime(2022, 5, 11, 3, 21, 20, 604000, tzinfo=tzlocal()),\n", " 'StepStatus': 'Succeeded',\n", " 'AttemptCount': 0,\n", " 'Metadata': {'Condition': {'Outcome': 'False'}}},\n", " {'StepName': 'TrainPaddleOCRModel',\n", " 'StartTime': datetime.datetime(2022, 5, 11, 3, 11, 17, 762000, tzinfo=tzlocal()),\n", " 'EndTime': datetime.datetime(2022, 5, 11, 3, 21, 19, 701000, tzinfo=tzlocal()),\n", " 'StepStatus': 'Succeeded',\n", " 'AttemptCount': 0,\n", " 'Metadata': {'TrainingJob': {'Arn': 'arn:aws:sagemaker:us-east-1:230755935769:training-job/pipelines-8chrh9vcb76i-trainpaddleocrmodel-q6riqgqasb'}}},\n", " {'StepName': 'GenerateOCRTrainingData',\n", " 'StartTime': datetime.datetime(2022, 5, 11, 3, 6, 5, 155000, tzinfo=tzlocal()),\n", " 'EndTime': datetime.datetime(2022, 5, 11, 3, 11, 16, 753000, tzinfo=tzlocal()),\n", " 'StepStatus': 'Succeeded',\n", " 'AttemptCount': 0,\n", " 'Metadata': {'ProcessingJob': {'Arn': 'arn:aws:sagemaker:us-east-1:230755935769:processing-job/pipelines-8chrh9vcb76i-generateocrtrainingd-dji9tw8tnb'}}}]" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "execution.list_steps()" ] }
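, { "cell_type": "markdown", "metadata": {}, "source": [ "In the executions above the accuracy condition's outcome was 'False', so no model was registered. When the condition passes, the `RegisterModel` step creates a model package version in the Model Registry, which can be inspected with boto3. A quick sketch, reusing the `region` and `model_package_group_name` constants defined earlier:\n", "\n", "```python\n", "import boto3\n", "\n", "sm_client = boto3.client(\"sagemaker\", region_name=region)\n", "\n", "# List the model package versions registered under this pipeline's group.\n", "response = sm_client.list_model_packages(\n", "    ModelPackageGroupName=model_package_group_name\n", ")\n", "for package in response[\"ModelPackageSummaryList\"]:\n", "    print(package[\"ModelPackageArn\"], package.get(\"ModelApprovalStatus\"))\n", "```" ] }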
"ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 4 }