{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Use Sagemaker Pipelines To Orchestrate End To End Cross Validation Model Training Workflow\n", "\n", "Amazon SageMaker Pipelines simplifies ML workflows orchestration across each step of the ML process, from exploration data analysis, preprocessing to model training and model deployment. \n", "With Sagemaker Pipelines, you can develop a consistent, reusable workflow that integrates with CI/CD pipeline for improved quality and reduced errors throughout development lifecycle.\n", "\n", "## SageMaker Pipelines\n", "An ML workflow built using Sagemaker Pipeline is made up of a series of Steps defined as a directed acryclic graph (DAG). The pipeline is expressed in JSON definition that captures relationships between the steps of your pipeline. Here's a terminology used in Sagemaker Pipeline for defining an ML workflow.\n", "\n", "* Pipelines - Top level definition of a pipeline. It encapsulates name, parameters, and steps. A pipeline is scoped within an account and region. \n", "* Parameters - Parameters are defined in the pipeline definition. It introduces variables that can be provided to the pipeline at execution time. Parameters support string, float and integer types. \n", "* Pipeline Steps - Defines the actions that the pipeline takes and the relationships between steps using properties. Sagemaker Pipelines support the following step types: Processing, Training, Transform, CreateModel, RegisterModel, Condition, Callback." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook Overview\n", "This notebook implements a complete Cross Validation ML model workflow using a custom built docker image, HyperparameterTuner for automatic hyperparameter optimization, \n", "SKLearn framework for K fold split and model training. The workflow is defined orchestrated using Sagemaker Pipelines. \n", "Here are the main steps involved the end to end workflow:\n", " \n", "
  1. Defines a list of parameters, with default values, to be used throughout the pipeline\n", "
  2. Defines a ProcessingStep with an SKLearn processor to perform the k-fold cross validation splits\n", "
  3. Defines a ProcessingStep that orchestrates cross validation model training with HyperparameterTuner integration\n", "
  4. Defines a ConditionStep that validates the model performance against the baseline\n", "
  5. Defines a TrainingStep that trains the model on the full dataset with the hyperparameters suggested by the HyperparameterTuner\n", "
  6. Creates a Model package and defines a RegisterModel step to register the trained model with the Sagemaker Model Registry\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset\n", "\n", "The Iris flower data set is a multivariate data set introduced by the British statistician, eugenicist, and biologist Ronald Fisher in his 1936 [paper](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1469-1809.1936.tb02137.x). The data set consists of 50 samples from each of 3 species of Iris:\n", "* Iris setosa \n", "* Iris virginica \n", "* Iris versicolor\n", "\n", "There are 4 features available in each sample: the length and the width of the sepals and petals measured in centimeters. \n", "\n", "Based on the combination of these four features, we are going to build a linear algorithm (SVM) to train a multiclass classification model to distinguish the species from each other." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import sagemaker\n", "\n", "region = boto3.Session().region_name\n", "sagemaker_session = sagemaker.session.Session()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Defines Pipeline Parameters\n", "With Pipeline Parameters, you can introduce variables to the pipeline that specific to the pipeline run. \n", "The supported parameter types include:\n", "\n", "ParameterString - represents a str Python type\n", "ParameterInteger - represents an int Python type\n", "ParameterFloat - represents a float Python type\n", "\n", "Additionally, parameters support default values, which can be useful for scenarios where only a subset of the defined parameters need to change. For example, for training a model that uses k fold Cross Validation method, you could provide the desired k value at pipeline execution time. \n", "\n", "Here are the parameters for the workflow used in this notebook:\n", "\n", "* ProcessingInstanceCount - number of instances for a Sagemaker Processing job in prepropcessing step.\n", "* ProcessingInstanceType - instance type used for a Sagemaker Processing job in prepropcessing step.\n", "* TrainingInstanceType - instance type used for Sagemaker Training job.\n", "* TrainingInstanceCount - number of instances for a Sagemaker Training job.\n", "* InferenceInstanceType - instance type for hosting the deployment of the Sagemaker trained model.\n", "* HPOTunerScriptInstanceType - instance type for the script processor that triggers the hyperparameter tuning job \n", "* ModelApprovalStatus - the initial approval status for the trained model in Sagemaker Model Registry\n", "* ExecutionRole - IAM role to use throughout the specific pipeline execution. \n", "* DefaultS3Bucket - default S3 bucket name as the object storage for the target pipeline execution.\n", "* BaselineModelObjectiveValue - the minimum objective metrics used for model evaluation.\n", "* S3BucketPrefix - bucket prefix for the pipeline execution.\n", "* ImageURI - docker image URI (ECR) for triggering cross validation model training with HyperparameterTuner.\n", "* KFold - the value of k to be used in k fold cross validation\n", "* MaxTrainingJobs - maximum number of model training jobs to trigger in a single hyperparameter tuner job.\n", "* MaxParallelTrainingJobs - maximum number of parallel model training jobs to trigger in a single hyperparameter tuner job.\n", "* MinimumC, MaximumC - Hyperparameter ranges for SVM 'c' parameter.\n", "* MimimumGamma, MaximumGamma - Hyperparameter ranges for SVM 'gamma' parameter. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.workflow.parameters import (\n", " ParameterInteger,\n", " ParameterString,\n", " ParameterFloat\n", ")\n", "\n", "processing_instance_count = ParameterInteger(name=\"ProcessingInstanceCount\", default_value=1)\n", "processing_instance_type = ParameterString(name=\"ProcessingInstanceType\", default_value=\"ml.m5.xlarge\")\n", "training_instance_type = ParameterString(name=\"TrainingInstanceType\", default_value=\"ml.m5.xlarge\")\n", "training_instance_count = ParameterInteger(name=\"TrainingInstanceCount\", default_value=1)\n", "inference_instance_type = ParameterString(name=\"InferenceInstanceType\", default_value=\"ml.m5.large\")\n", "hpo_tuner_instance_type = ParameterString(name=\"HPOTunerScriptInstanceType\", default_value=\"ml.t3.medium\")\n", "model_approval_status = ParameterString(name=\"ModelApprovalStatus\", default_value=\"PendingManualApproval\")\n", "role = ParameterString(name='ExecutionRole', default_value=sagemaker.get_execution_role())\n", "default_bucket = ParameterString(name=\"DefaultS3Bucket\", default_value=sagemaker_session.default_bucket())\n", "baseline_model_objective_value = ParameterFloat(name='BaselineModelObjectiveValue', default_value=0.6)\n", "bucket_prefix = ParameterString(name=\"S3BucketPrefix\", default_value=\"cross_validation_iris_classification\")\n", "image_uri = ParameterString(name=\"ImageURI\")\n", "k = ParameterInteger(name=\"KFold\", default_value=3)\n", "max_jobs = ParameterInteger(name=\"MaxTrainingJobs\", default_value=3)\n", "max_parallel_jobs = ParameterInteger(name=\"MaxParallelTrainingJobs\", default_value=1)\n", "min_c = ParameterInteger(name=\"MinimumC\", default_value=0)\n", "max_c = ParameterInteger(name=\"MaximumC\", default_value=1)\n", "min_gamma = ParameterFloat(name=\"MinimumGamma\", default_value=0.0001)\n", "max_gamma = ParameterFloat(name=\"MaximumGamma\", default_value=0.001)\n", "gamma_scaling_type = ParameterString(name=\"GammaScalingType\", default_value=\"Logarithmic\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Variables / Constants used throughout the pipeline\n", "model_package_group_name=\"IrisClassificationCrossValidatedModel\"\n", "framework_version = \"0.23-1\"\n", "s3_bucket_base_path=f\"s3://{default_bucket}/{bucket_prefix}\"\n", "s3_bucket_base_path_train = f\"{s3_bucket_base_path}/train\"\n", "s3_bucket_base_path_test = f\"{s3_bucket_base_path}/test\"\n", "s3_bucket_base_path_evaluation = f\"{s3_bucket_base_path}/evaluation\"\n", "s3_bucket_base_path_jobinfo = f\"{s3_bucket_base_path}/jobinfo\"\n", "s3_bucket_base_path_output = f\"{s3_bucket_base_path}/output\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preprocessing Step\n", "The first step in K Fold cross validation model workflow is to split the training dataset into k batches randomly.\n", "We are going to use Sagemaker SKLearnProcessor with a preprocessing script to perform dataset splits, and upload the results to the specified S3 bucket for model training step. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.sklearn.processing import SKLearnProcessor\n", "\n", "sklearn_processor = SKLearnProcessor(\n", " framework_version=framework_version,\n", " instance_type=processing_instance_type,\n", " instance_count=processing_instance_count,\n", " base_job_name=\"kfold-crossvalidation-split\",\n", " role=role\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.processing import ProcessingInput, ProcessingOutput\n", "from sagemaker.workflow.steps import ProcessingStep\n", "\n", "step_process = ProcessingStep(\n", " name=\"PreprocessStep\",\n", " processor=sklearn_processor,\n", " outputs=[\n", " ProcessingOutput(output_name=\"train\", \n", " source=\"/opt/ml/processing/train\", \n", " destination=s3_bucket_base_path_train),\n", " ProcessingOutput(output_name=\"test\", \n", " source=\"/opt/ml/processing/test\", \n", " destination=s3_bucket_base_path_test),\n", " ],\n", " code=\"code/preprocessing.py\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cross Validation Model Training Step \n", "In Cross Validation Model Training workflow, a script processor is used for orchestrating k training jobs in parallel, each of the k jobs is responsible for training a model using the specified split samples. Additionally, the script processor leverages Sagemaker HyperparameterTuner to optimize the hyper parameters and pass these values to perform k training jobs. The script processor monitors all training jobs. Once the jobs are complete, the script processor captures key metrics, including the training accuracy and the hyperparameters from the best training job, then uploads the results to the specified S3 bucket location to be used for model evaluation and model selection steps.\n", "\n", "The components involved in orchestrating the cross validation model training, hyperparameter optimizations and key metrics capture:\n", "\n", "* PropertyFile - EvaluationReport, contains the performance metrics from the HyperparameterTuner job, expressed in JSON format.\n", "* PropertyFile JobInfo, contains information about the best training job and the corresponding hyperparameters used for training, expressed in JSON format.\n", "* ScriptProcessor - A python script that orchestrates a hyperparameter tuning job for cross validation model trainings." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom Docker Image\n", "In order to facilitate k fold cross validation training jobs through Sagemaker Automatic Model tuning, we need to create a custom docker image to include both the python script that manages the kfold cross validation training jobs, and the actual training script that each of the k training jobs would submit. For details about adopting custom docker containers to work with Sagemaker, please follow this [link](https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers-adapt-your-own.html). The docker image used in the pipeline was built using the [Dockerfile](code/Dockerfile) included in this project. 
\n", "\n", "Following are the steps for working with [ECR](https://aws.amazon.com/ecr/) on authentication, image building and pushing to ECR registry for Sagemaker training: \\(follow this [link](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html) for official AWS guidance for working with ECR\\)\n", "\n", "Prerequisites\n", "* [docker](https://docs.docker.com/get-docker/) \n", "* [git client](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) \n", "* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) \n", "\n", "Note:\n", "If you use [AWS Cloud9](https://aws.amazon.com/cloud9/) as the CLI terminal, the prerequisites described above are met by default, there is no need to install any additional tools.\n", "\n", "Steps\n", "* Open a new terminal\n", "* git clone this project\n", "* cd to code directory\n", "* ./build-and-push-docker.sh [aws_acct_id] [aws_region]\n", "* capture the ECR repository name from the script after a successful run. You'll need to provide the image name at pipeline execution time. Here's an example format of an ECR repo name: ############.dkr.ecr.region.amazonaws.com/sagemaker-cross-validation-pipeline:latest" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.processing import ScriptProcessor\n", "from sagemaker.workflow.properties import PropertyFile\n", "\n", "evaluation_report = PropertyFile(\n", " name=\"EvaluationReport\", output_name=\"evaluation\", path=\"evaluation.json\"\n", ")\n", "\n", "jobinfo = PropertyFile(\n", " name=\"JobInfo\", output_name=\"jobinfo\", path=\"jobinfo.json\"\n", ")\n", "\n", "script_tuner = ScriptProcessor(\n", " image_uri=image_uri,\n", " command=[\"python3\"],\n", " instance_type=hpo_tuner_instance_type,\n", " instance_count=1,\n", " base_job_name=\"KFoldCrossValidationHyperParameterTuner\",\n", " role=role\n", ")\n", "\n", "step_cv_train_hpo = ProcessingStep(\n", " name=\"HyperParameterTuningStep\",\n", " processor=script_tuner,\n", " code=\"code/cross_validation_with_hpo.py\",\n", " outputs=[\n", " ProcessingOutput(output_name=\"evaluation\", \n", " source=\"/opt/ml/processing/evaluation\", \n", " destination=s3_bucket_base_path_evaluation),\n", " ProcessingOutput(output_name=\"jobinfo\", \n", " source=\"/opt/ml/processing/jobinfo\", \n", " destination=s3_bucket_base_path_jobinfo)\n", " ],\n", " job_arguments=[\"-k\", str(k),\n", " \"--image-uri\", image_uri, \n", " \"--train\", s3_bucket_base_path_train, \n", " \"--test\", s3_bucket_base_path_test,\n", " \"--instance-type\", training_instance_type,\n", " \"--instance-count\", str(training_instance_count),\n", " \"--output-path\", s3_bucket_base_path_output,\n", " \"--max-jobs\", str(max_jobs),\n", " \"--max-parallel-jobs\" , str(max_parallel_jobs),\n", " \"--min-c\", str(min_c),\n", " \"--max-c\", str(max_c),\n", " \"--min-gamma\", str(min_gamma), \n", " \"--max-gamma\", str(max_gamma),\n", " \"--gamma-scaling-type\", str(gamma_scaling_type),\n", " \"--region\", str(region)],\n", " property_files=[evaluation_report],\n", " depends_on=['PreprocessStep'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Selection Step\n", "Model selection is the final step in cross validation model training workflow. 
Based on the metrics and hyperparameters acquired from the cross validation steps orchestrated through ScriptProcessor, \n", "a Training Step is defined to train a model with the same algorithm used in cross validation training, with all available training data. The model artifacts created from the training process will be used \n", "for model registration, deployment and inferences. \n", "\n", "Components involved in the model selection step:\n", " \n", "* SKLearn Estimator - A Sagemaker Estimator used in training a final model.\n", "* TrainingStep - Workflow step that triggers the model selection process.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.inputs import TrainingInput\n", "from sagemaker.workflow.steps import TrainingStep\n", "from sagemaker.sklearn.estimator import SKLearn\n", "\n", "sklearn_estimator = SKLearn(\"scikit_learn_iris.py\", \n", " framework_version=framework_version, \n", " instance_type=training_instance_type,\n", " py_version='py3', \n", " source_dir=\"code\",\n", " output_path=s3_bucket_base_path_output,\n", " role=role)\n", "\n", "step_model_selection = TrainingStep(\n", " name=\"ModelSelectionStep\",\n", " estimator=sklearn_estimator,\n", " inputs={\n", " \"train\": TrainingInput(\n", " s3_data=f'{step_process.arguments[\"ProcessingOutputConfig\"][\"Outputs\"][0][\"S3Output\"][\"S3Uri\"]}/all',\n", " content_type=\"text/csv\"\n", " ),\n", " \"jobinfo\": TrainingInput(\n", " s3_data=f\"{s3_bucket_base_path_jobinfo}\",\n", " content_type=\"application/json\"\n", " )\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Register Model With Model Registry\n", "Once the model selection step is complete, the trained model artifact can be registered with Sagemaker Model Registry.\n", "Model registry catalogs the trained model to enable model versioning, performance metrics and approval status captures. Additionally, models versioned in the ModelRegistry can be deployed through CI/CD. Here's a link for more information about Model Registry, https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html\n", "\n", "Components involved in registering a trained model with Model Registry:\n", "* Model - Model object that contains metadata for the trained model. \n", "* CreateModelInput - An object that encapsulates the parameters used to create a Sagemaker Model.\n", "* CreateModelStep - Workflow Step that creates a Sagemaker Model\n", "* ModelMetrics - Captures metadata, including metrics statistics, data constraints, bias and explainability for the trained model.\n", "* RegisterModel - Workflow Step that registers model Model Registry." 
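, "\n\n",
"The metrics statistics fed to ModelMetrics come from the evaluation.json report produced by the cross validation training step. The report's exact contents are written by the orchestration script and are not shown in this notebook; judging from the JSON path used by the condition step later on (multiclass_classification_metrics.accuracy.value), it is assumed to look roughly like the sketch below, where the accuracy number is only a placeholder.\n",
"```python\n",
"# Hypothetical sketch of how the orchestration script might write evaluation.json.\n",
"import json\n",
"\n",
"report = {\"multiclass_classification_metrics\": {\"accuracy\": {\"value\": 0.95}}}  # placeholder value\n",
"with open(\"/opt/ml/processing/evaluation/evaluation.json\", \"w\") as f:\n",
"    json.dump(report, f)\n",
"```"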
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.model import Model\n", "\n", "model = Model(\n", " image_uri=sklearn_estimator.image_uri,\n", " model_data=step_model_selection.properties.ModelArtifacts.S3ModelArtifacts,\n", " sagemaker_session=sagemaker_session,\n", " role=role,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.model_metrics import MetricsSource, ModelMetrics\n", "from sagemaker.workflow.step_collections import RegisterModel\n", "\n", "\n", "model_metrics = ModelMetrics(\n", " model_statistics=MetricsSource(\n", " s3_uri=\"{}/evaluation.json\".format(\n", " step_cv_train_hpo.arguments[\"ProcessingOutputConfig\"][\"Outputs\"][0][\"S3Output\"][\"S3Uri\"]\n", " ),\n", " content_type=\"application/json\",\n", " )\n", ")\n", "\n", "step_register_model = RegisterModel(\n", " name=\"RegisterModelStep\",\n", " estimator=sklearn_estimator,\n", " model_data=step_model_selection.properties.ModelArtifacts.S3ModelArtifacts,\n", " content_types=[\"text/csv\"],\n", " response_types=[\"text/csv\"],\n", " inference_instances=[\"ml.t2.medium\", \"ml.m5.xlarge\"],\n", " transform_instances=[\"ml.m5.xlarge\"],\n", " model_package_group_name=model_package_group_name,\n", " approval_status=model_approval_status,\n", " model_metrics=model_metrics,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Condition Step\n", "Sagemaker Pipelines supports condition steps for evaluating the conditions of step properties to determine the next action.\n", "In the context of cross validation model workflow, a condition step is defined to evaluate model metrics captured in the Cross Validation Training Step to determine whether \n", "the model selection step should take place. This step evaluates a ConditionGreaterThanOrEqualTo based on a given baseline model objective value to determine the next steps.\n", "\n", "Components involved in defining a Condition Step:\n", "\n", "ConditionGreaterThanOrEqualTo - A condition that defines the evaluation criteria for the given model objective value and model performance metrics captured in the evaluation report. 
This condition returns True if the model performance metrics is greater or equals to the baseline model objective value, False otherwise.\n", "ConditionStep - Workflow Step that performs the evaluation based on the criteria defined in ConditionGreaterThanOrEqualTo" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo\n", "from sagemaker.workflow.condition_step import (\n", " ConditionStep,\n", " JsonGet,\n", ")\n", "\n", "cond_gte = ConditionGreaterThanOrEqualTo(\n", " left=JsonGet(\n", " step=step_cv_train_hpo,\n", " property_file=evaluation_report,\n", " json_path=\"multiclass_classification_metrics.accuracy.value\",\n", " ),\n", " right=baseline_model_objective_value,\n", ")\n", "\n", "step_cond = ConditionStep(\n", " name=\"ModelEvaluationStep\",\n", " conditions=[cond_gte],\n", " if_steps=[step_model_selection, step_register_model],\n", " else_steps=[],\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define A Pipeline\n", "With Pipeline components defined, we can create Sagemaker Pipeline by associating the Parameters, Steps and Conditions created in this notebook.\n", "The pipeline definition encodes a pipeline using a directed acyclic graph (DAG) with relationships between each step of the pipeline. \n", "The structure of a pipeline's DAG is determined by either data dependencies between steps, or custom dependencies defined in the Steps.\n", "For CrossValidation training pipline, relationships between the components in the DAG are specified in the depends_on attribute of the Steps.\n", "\n", "A pipeline instance is composed of a name, parameters, and steps ." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.workflow.pipeline import Pipeline\n", "from sagemaker.workflow.pipeline_experiment_config import PipelineExperimentConfig\n", "from sagemaker.workflow.execution_variables import ExecutionVariables\n", "\n", "pipeline_name = f\"CrossValidationTrainingPipeline\"\n", "pipeline = Pipeline(\n", " name=pipeline_name,\n", " parameters=[\n", " processing_instance_count,\n", " processing_instance_type,\n", " training_instance_type,\n", " training_instance_count,\n", " inference_instance_type,\n", " hpo_tuner_instance_type,\n", " model_approval_status,\n", " role,\n", " default_bucket,\n", " baseline_model_objective_value,\n", " bucket_prefix,\n", " image_uri,\n", " k,\n", " max_jobs,\n", " max_parallel_jobs,\n", " min_c,\n", " max_c,\n", " min_gamma,\n", " max_gamma,\n", " gamma_scaling_type\n", " ], \n", " pipeline_experiment_config=PipelineExperimentConfig(\n", " ExecutionVariables.PIPELINE_NAME,\n", " ExecutionVariables.PIPELINE_EXECUTION_ID),\n", " steps=[step_process, step_cv_train_hpo, step_cond],\n", ")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Examine Pipeline Definition\n", "Before triggering a pipeline run, it's a good practice to examine the JSON pipeline definition to ensure that it's well-formed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "json.loads(pipeline.definition())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Pipeline Creation\n", "Submit the pipeline definition to the SageMaker Pipelines service to create a pipeline if it doesn't exist, or update the pipeline if it does. 
The role passed in is used by SageMaker Pipelines to create all of the jobs defined in the steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline.upsert(role_arn=role)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Trigger Pipeline Execution\n", "After creating a pipeline definition, you can submit it to SageMaker to start your execution, optionally provides the parameters specific for the run." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Before triggering the pipeline, make sure to override the ImageURI parameter value with \n", "# one created the previous step.\n", "execution = pipeline.start(\n", " parameters=dict(\n", " BaselineModelObjectiveValue=0.8,\n", " MinimumC=0,\n", " MaximumC=1,\n", " MaxTrainingJobs=3,\n", " ImageURI=\"041158455166.dkr.ecr.us-east-1.amazonaws.com/sagemaker-cross-validation-pipeline:latest\"\n", " ))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Examine a Pipeline Execution\n", "Examine the pipeline execution at runtime by using sagemaker SDK" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "execution.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Wait For The Pipeline Execution To Complete \n", "Pipeline execution supports waiting for the job to complete synchrounously" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "execution.wait()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }