{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Iris Pipeline Metrics Visualization Example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example notebook, we utilize iris dataset, trained with `sklearn` models, and evaluate with `accuracy` and `roc auc score` metrics. We can visualize the metrics in Pipeline Run page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Build Pipeline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Run the following command to load Kubeflow Pipelines SDK" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import kfp\n", "from kfp import components\n", "from kfp import dsl" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2. Build Kubeflow Pipeline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If anyone want to build image like `chuckshow/iris-pipeline-metrics:latest` for customization, we provide our `Dockerfile` within directory `src/pipeline_scalar_metrics`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "@dsl.pipeline(\n", " name='IRIS Classification pipeline metrics',\n", " description='IRIS Classification using LR in SKLEARN to visualize metrics'\n", ")\n", "def iris_pipeline():\n", " op = dsl.ContainerOp(name='metrics_vis',\n", " image='chuckshow/iris-pipeline-metrics:latest',\n", " command=['python'],\n", " arguments=['model.py'],\n", " file_outputs={\n", " 'mlpipeline-metrics': '/mlpipeline-metrics.json'\n", " }\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "3. Compile and deploy your pipeline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "kfp.compiler.Compiler().compile(iris_pipeline, 'iris-classification-pipeline-metrics.zip')\n", "client = kfp.Client()\n", "aws_experiment = client.create_experiment(name='aws')\n", "my_run = client.run_pipeline(aws_experiment.id, 'iris-classification-pipeline-metrics', \n", " 'iris-classification-pipeline-metrics.zip')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### View Metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see a visualization of the metrics:\n", "\n", "1. Open the Experiments page in the Kubeflow Pipelines UI.\n", "2. Click one of your experiments. The Runs page opens showing the top two metrics, where top is determined by prevalence (that is, the metrics with the highest count) and then by metric name. The metrics appear as columns for each run.\n", "\n", "The following example shows the accuracy-score and roc-auc-score metrics for pipeline run within an experiment:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![metrics-visualization](./images/metricsvis.png)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 4 }