{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Operationalize end-to-end Amazon Personalize model deployment process using AWS Step Functions Data Science SDK\n", "\n", "1. [Introduction](#Introduction)\n", "2. [Setup](#Setup)\n", "3. [Data-Preparation](#Data-Preparation)\n", "4. [Task-States](#Task-States)\n", "5. [Wait-States](#Wait-States)\n", "6. [Choice-States](#Choice-States)\n", "7. [Workflow](#Workflow)\n", "8. [Generate-Recommendations](#Generate-Recommendations)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "This notebook describes how to use the AWS Step Functions Data Science SDK to create and manage an Amazon Personalize workflow. The Step Functions SDK is an open source library that allows data scientists to easily create and execute machine learning workflows using AWS Step Functions. For more information on the Step Functions SDK, see the following.\n", "* [AWS Step Functions](https://aws.amazon.com/step-functions/)\n", "* [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)\n", "* [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io)\n", "\n", "In this notebook we will use the SDK to create steps that create Personalize resources, link the steps together into a workflow, and execute the workflow in AWS Step Functions.
\n", "\n", "For more information on Amazon Personalize, see the following.\n", "\n", "* [Amazon Personalize](https://aws.amazon.com/personalize/)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Import required modules from the SDK" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#import sys\n", "#!{sys.executable} -m pip install --upgrade stepfunctions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import json\n", "import logging\n", "import time\n", "\n", "import numpy as np\n", "import pandas as pd\n", "\n", "import stepfunctions\n", "from stepfunctions.steps import *\n", "from stepfunctions.workflow import Workflow\n", "\n", "personalize = boto3.client('personalize')\n", "personalize_runtime = boto3.client('personalize-runtime')\n", "\n", "stepfunctions.set_stream_logger(level=logging.INFO)\n", "\n", "workflow_execution_role = \"\" # paste the StepFunctionsWorkflowExecutionRole ARN from above" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup S3 location and filename" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bucket = \"\" # replace with the name of your S3 bucket\n", "filename = \"\" # replace with a name that you want to save the dataset under" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup IAM Roles\n", "\n", "#### Create an execution role for Step Functions\n", "\n", "You need an execution role so that you can create and execute workflows in Step Functions.\n", "\n", "1. Go to the [IAM console](https://console.aws.amazon.com/iam/)\n", "2. Select **Roles** and then **Create role**.\n", "3. Under **Choose the service that will use this role**, select **Step Functions**\n", "4. Choose **Next** until you can enter a **Role name**\n", "5. 
Enter a name such as `StepFunctionsWorkflowExecutionRole` and then select **Create role**\n", "\n", "\n", "Attach a policy to the role you created. The following steps attach a policy that provides full access to Step Functions; as a good practice, however, you should grant access only to the resources you need. \n", "\n", "1. Under the **Permissions** tab, click **Add inline policy**\n", "2. Enter the following in the **JSON** tab\n", "\n", "```json\n", "{\n", "    \"Version\": \"2012-10-17\",\n", "    \"Statement\": [\n", "        {\n", "            \"Effect\": \"Allow\",\n", "            \"Action\": [\n", "                \"personalize:*\"\n", "            ],\n", "            \"Resource\": \"*\"\n", "        },\n", "        {\n", "            \"Effect\": \"Allow\",\n", "            \"Action\": [\n", "                \"lambda:InvokeFunction\"\n", "            ],\n", "            \"Resource\": \"*\"\n", "        },\n", "        {\n", "            \"Effect\": \"Allow\",\n", "            \"Action\": [\n", "                \"iam:PassRole\"\n", "            ],\n", "            \"Resource\": \"*\"\n", "        },\n", "        {\n", "            \"Effect\": \"Allow\",\n", "            \"Action\": [\n", "                \"events:PutTargets\",\n", "                \"events:PutRule\",\n", "                \"events:DescribeRule\"\n", "            ],\n", "            \"Resource\": \"*\"\n", "        }\n", "    ]\n", "}\n", "```\n", "\n", "3. Choose **Review policy** and give the policy a name such as `StepFunctionsWorkflowExecutionPolicy`\n", "4. Choose **Create policy**. You will be redirected to the details page for the role.\n", "5. 
Copy the **Role ARN** at the top of the **Summary**\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_role = LambdaStep(\n", " state_id=\"create bucket and role\",\n", " parameters={ \n", " \"FunctionName\": \"stepfunction_create_personalize_role\", #replace with the name of the function you created\n", " \"Payload\": { \n", " \"bucket\": bucket\n", " }\n", " },\n", " result_path='$'\n", " \n", ")\n", "\n", "lambda_state_role.add_retry(Retry(\n", " error_equals=[\"States.TaskFailed\"],\n", " interval_seconds=5,\n", " max_attempts=1,\n", " backoff_rate=4.0\n", "))\n", "\n", "lambda_state_role.add_catch(Catch(\n", " error_equals=[\"States.TaskFailed\"],\n", " next_step=Fail(\"CreateRoleTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Attach Policy to S3 Bucket" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3 = boto3.client(\"s3\")\n", "\n", "policy = {\n", " \"Version\": \"2012-10-17\",\n", " \"Id\": \"PersonalizeS3BucketAccessPolicy\",\n", " \"Statement\": [\n", " {\n", " \"Sid\": \"PersonalizeS3BucketAccessPolicy\",\n", " \"Effect\": \"Allow\",\n", " \"Principal\": {\n", " \"Service\": \"personalize.amazonaws.com\"\n", " },\n", " \"Action\": [\n", " \"s3:GetObject\",\n", " \"s3:ListBucket\"\n", " ],\n", " \"Resource\": [\n", " \"arn:aws:s3:::{}\".format(bucket),\n", " \"arn:aws:s3:::{}/*\".format(bucket)\n", " \n", " ]\n", " }\n", " ]\n", "}\n", "\n", "s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))\n", "\n", "# AmazonPersonalizeFullAccess provides access to any S3 bucket with a name that includes \"personalize\" or \"Personalize\" \n", "# if you would like to use a bucket with a different name, please consider creating and attaching a new policy\n", "# that provides read access to your bucket or attaching the AmazonS3ReadOnlyAccess policy to the role\n" ] }, { "cell_type": "markdown", 
"metadata": {}, "source": [ "#### Create Personalize Role\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iam = boto3.client(\"iam\")\n", "\n", "role_name = \"\" # replace with a name for the Amazon Personalize service role\n", "\n", "assume_role_policy_document = {\n", "    \"Version\": \"2012-10-17\",\n", "    \"Statement\": [\n", "        {\n", "            \"Effect\": \"Allow\",\n", "            \"Principal\": {\n", "                \"Service\": \"personalize.amazonaws.com\"\n", "            },\n", "            \"Action\": \"sts:AssumeRole\"\n", "        }\n", "    ]\n", "}\n", "\n", "create_role_response = iam.create_role(\n", "    RoleName = role_name,\n", "    AssumeRolePolicyDocument = json.dumps(assume_role_policy_document)\n", ")\n", "\n", "policy_arn = \"arn:aws:iam::aws:policy/service-role/AmazonPersonalizeFullAccess\"\n", "iam.attach_role_policy(\n", "    RoleName = role_name,\n", "    PolicyArn = policy_arn\n", ")\n", "\n", "time.sleep(60) # wait for a minute to allow the IAM role policy attachment to propagate\n", "\n", "role_arn = create_role_response[\"Role\"][\"Arn\"]\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data-Preparation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download, Prepare, and Upload Training Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pwd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!wget -N http://files.grouplens.org/datasets/movielens/ml-100k.zip\n", "!unzip -o ml-100k.zip\n", "data = pd.read_csv('./ml-100k/u.data', sep='\\t', names=['USER_ID', 'ITEM_ID', 'RATING', 'TIMESTAMP'])\n", "pd.set_option('display.max_rows', 5)\n", "data\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = data[data['RATING'] > 2] # keep only interactions with a rating above 2\n", "data2 = data[['USER_ID', 'ITEM_ID', 'TIMESTAMP']]\n", "data2.to_csv(filename, index=False)\n", "\n", 
"boto3.Session().resource('s3').Bucket(bucket).Object(filename).upload_file(filename)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Task-States" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Lambda Task state\n", "\n", "A `Task` state in Step Functions represents a single unit of work performed by a workflow. Tasks can call Lambda functions and orchestrate other AWS services. See [AWS Service Integrations](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-service-integrations.html) in the *AWS Step Functions Developer Guide*.\n", "\n", "The following cells create [LambdaStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/compute.html#stepfunctions.steps.compute.LambdaStep) states and configure each one to [Retry](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html#error-handling-retrying-after-an-error) if its Lambda function fails.\n", "\n", "#### Create the Lambda functions\n", "\n", "The Lambda task states in this workflow use Lambda functions **(Python 3.x)** that create and return Personalize resources such as the Schema, Dataset Group, Dataset, Solution, SolutionVersion, etc. Create the following functions in the [Lambda console](https://console.aws.amazon.com/lambda/).\n", "\n", "1. stepfunction-create-schema\n", "2. stepfunctioncreatedatagroup\n", "3. stepfunctioncreatedataset\n", "4. stepfunction-createdatasetimportjob\n", "5. stepfunction_select-recipe_create-solution\n", "6. stepfunction_create_solution_version\n", "7. 
stepfunction_getsolution_metric_create_campaign\n", "\n", "Copy and paste the corresponding Lambda function code from the ./Lambda/ folder in the repo.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Schema" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_schema = LambdaStep(\n", "    state_id=\"create schema\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction-create-schema\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"input\": \"personalize-stepfunction-schema263\"\n", "        }\n", "    },\n", "    result_path='$'\n", ")\n", "\n", "lambda_state_schema.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_schema.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"CreateSchemaTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Dataset Group" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_datasetgroup = LambdaStep(\n", "    state_id=\"create dataset Group\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunctioncreatedatagroup\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"input\": \"personalize-stepfunction-dataset-group\",\n", "            \"schemaArn.$\": '$.Payload.schemaArn'\n", "        }\n", "    },\n", "    result_path='$'\n", ")\n", "\n", "lambda_state_datasetgroup.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_datasetgroup.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"CreateDataSetGroupTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Dataset" ] }, { "cell_type": 
"code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_createdataset = LambdaStep(\n", "    state_id=\"create dataset\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunctioncreatedataset\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"schemaArn.$\": '$.schemaArn',\n", "            \"datasetGroupArn.$\": '$.datasetGroupArn',\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_state_createdataset.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_createdataset.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"CreateDataSetTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Dataset Import Job" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_datasetimportjob = LambdaStep(\n", "    state_id=\"create dataset import job\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction-createdatasetimportjob\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"datasetimportjob\": \"stepfunction-createdatasetimportjob\",\n", "            \"dataset_arn.$\": '$.Payload.dataset_arn',\n", "            \"datasetGroupArn.$\": '$.Payload.datasetGroupArn',\n", "            \"bucket_name\": bucket,\n", "            \"file_name\": filename,\n", "            \"role_arn\": role_arn\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_state_datasetimportjob.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_datasetimportjob.add_catch(Catch(\n", "    
error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"DatasetImportJobTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Recipe and Solution" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_select_receipe_create_solution = LambdaStep(\n", "    state_id=\"select recipe and create solution\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction_select-recipe_create-solution\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"dataset_group_arn.$\": '$.datasetGroupArn'\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_state_select_receipe_create_solution.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_select_receipe_create_solution.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"SelectRecipeCreateSolutionTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create Solution Version" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_create_solution_version = LambdaStep(\n", "    state_id=\"create solution version\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction_create_solution_version\",\n", "        \"Payload\": {\n", "            \"solution_arn.$\": '$.Payload.solution_arn'\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_create_solution_version.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_create_solution_version.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"CreateSolutionVersionTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": 
{}, "source": [ "#### Create Campaign" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_create_campaign = LambdaStep(\n", "    state_id=\"create campaign\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction_getsolution_metric_create_campaign\",\n", "        \"Payload\": {\n", "            \"solution_version_arn.$\": '$.solution_version_arn'\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_create_campaign.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_create_campaign.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"CreateCampaignTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Wait-States\n", "\n", "#### A `Wait` state in Step Functions pauses the workflow for a specified amount of time. See [Wait](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Wait) in the AWS Step Functions Data Science SDK documentation."
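, "\n",
"\n",
"To make the wait-then-check polling pattern below concrete, here is a rough sketch of the Amazon States Language (ASL) JSON that a `Wait` state compiles to. This sketch does not use the SDK; `build_wait_state` is a hypothetical helper written only for illustration:\n",
"\n",
"```python\n",
"# Hypothetical helper: builds the ASL JSON behind a Wait state, i.e. a\n",
"# fixed pause before the workflow re-checks a Personalize resource.\n",
"def build_wait_state(seconds, next_state):\n",
"    return {'Type': 'Wait', 'Seconds': seconds, 'Next': next_state}\n",
"\n",
"print(build_wait_state(30, 'check dataset Group status'))\n",
"# {'Type': 'Wait', 'Seconds': 30, 'Next': 'check dataset Group status'}\n",
"```\n"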
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for Schema to be ready" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wait_state_schema = Wait(\n", "    state_id=\"Wait for create schema - 5 secs\",\n", "    seconds=5\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for Datasetgroup to be ready" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wait_state_datasetgroup = Wait(\n", "    state_id=\"Wait for datasetgroup - 30 secs\",\n", "    seconds=30\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for Dataset to be ready" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wait_state_dataset = Wait(\n", "    state_id=\"wait for dataset - 30 secs\",\n", "    seconds=30\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for Dataset Import Job to be ACTIVE" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wait_state_datasetimportjob = Wait(\n", "    state_id=\"Wait for datasetimportjob - 30 secs\",\n", "    seconds=30\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for Recipe to be ready" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wait_state_receipe = Wait(\n", "    state_id=\"Wait for recipe - 30 secs\",\n", "    seconds=30\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for Solution Version to be ACTIVE" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wait_state_solutionversion = Wait(\n", "    state_id=\"Wait for solution version - 60 secs\",\n", "    seconds=60\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for Campaign to be ACTIVE" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, 
"outputs": [], "source": [ "wait_state_campaign = Wait(\n", "    state_id=\"Wait for Campaign - 30 secs\",\n", "    seconds=30\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check the status of each Lambda task and take action accordingly" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### If a state fails, move it to the `Fail` state. See [Fail](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/states.html#stepfunctions.steps.states.Fail) in the AWS Step Functions Data Science SDK documentation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### check datasetgroup status" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_datasetgroupstatus = LambdaStep(\n", "    state_id=\"check dataset Group status\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction_waitforDatasetGroup\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"input.$\": '$.Payload.datasetGroupArn',\n", "            \"schemaArn.$\": '$.Payload.schemaArn'\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_state_datasetgroupstatus.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_datasetgroupstatus.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"DatasetGroupStatusTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### check dataset import job status" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_datasetimportjob_status = LambdaStep(\n", "    state_id=\"check dataset import job status\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction_waitfordatasetimportjob\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"dataset_import_job_arn.$\": 
'$.Payload.dataset_import_job_arn',\n", "            \"datasetGroupArn.$\": '$.Payload.datasetGroupArn'\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_state_datasetimportjob_status.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_datasetimportjob_status.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"DatasetImportJobStatusTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### check solution version status" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_solutionversion_status = LambdaStep(\n", "    state_id=\"check solution version status\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction_waitforSolutionVersion\", # replace with the name of the function you created\n", "        \"Payload\": {\n", "            \"solution_version_arn.$\": '$.Payload.solution_version_arn'\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_state_solutionversion_status.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_solutionversion_status.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"SolutionVersionStatusTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### check campaign status" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_campaign_status = LambdaStep(\n", "    state_id=\"check campaign status\",\n", "    parameters={\n", "        \"FunctionName\": \"stepfunction_waitforCampaign\", # replace with the 
name of the function you created\n", "        \"Payload\": {\n", "            \"campaign_arn.$\": '$.Payload.campaign_arn'\n", "        }\n", "    },\n", "    result_path = '$'\n", ")\n", "\n", "lambda_state_campaign_status.add_retry(Retry(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    interval_seconds=5,\n", "    max_attempts=1,\n", "    backoff_rate=4.0\n", "))\n", "\n", "lambda_state_campaign_status.add_catch(Catch(\n", "    error_equals=[\"States.TaskFailed\"],\n", "    next_step=Fail(\"CampaignStatusTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Choice-States\n", "\n", "Next, create `Choice` states and attach branches to them. See *Choice Rules* in the [AWS Step Functions Data Science SDK documentation](https://aws-step-functions-data-science-sdk.readthedocs.io)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Chain together steps to define the workflow path\n", "\n", "The following cells link the steps you've created above into sequential groups. Each path sequentially includes the Lambda states, Wait states, and the Succeed states that you created earlier.\n", "\n", "#### After chaining together the steps for the workflow path, we will define and visualize the workflow."
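, "\n",
"\n",
"As a rough illustration (this sketch does not use the SDK; `build_choice_state` is a hypothetical helper written only for this notebook), a Choice state with `StringEquals` rules like the ones below corresponds to ASL JSON of this shape:\n",
"\n",
"```python\n",
"# Hypothetical helper: ASL JSON for a Choice state that branches on a\n",
"# status string and falls back to a Default branch otherwise.\n",
"def build_choice_state(variable, branches, default):\n",
"    # branches: list of (status_value, next_state_name) pairs\n",
"    return {\n",
"        'Type': 'Choice',\n",
"        'Choices': [\n",
"            {'Variable': variable, 'StringEquals': value, 'Next': nxt}\n",
"            for value, nxt in branches\n",
"        ],\n",
"        'Default': default,\n",
"    }\n",
"\n",
"choice = build_choice_state(\n",
"    '$.Payload.status',\n",
"    [('ACTIVE', 'CampaignCreatedSuccessfully'),\n",
"     ('CREATE IN_PROGRESS', 'Wait for Campaign - 30 secs')],\n",
"    'CreateCampaignFailed')\n",
"```\n"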
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_campaign_choice_state = Choice(\n", " state_id=\"Is the Campaign ready?\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_campaign_choice_state.add_choice(\n", " rule=ChoiceRule.StringEquals(variable=lambda_state_campaign_status.output()['Payload']['status'], value='ACTIVE'),\n", " next_step=Succeed(\"CampaignCreatedSuccessfully\") \n", ")\n", "create_campaign_choice_state.add_choice(\n", " ChoiceRule.StringEquals(variable=lambda_state_campaign_status.output()['Payload']['status'], value='CREATE PENDING'),\n", " next_step=wait_state_campaign\n", ")\n", "create_campaign_choice_state.add_choice(\n", " ChoiceRule.StringEquals(variable=lambda_state_campaign_status.output()['Payload']['status'], value='CREATE IN_PROGRESS'),\n", " next_step=wait_state_campaign\n", ")\n", "\n", "create_campaign_choice_state.default_choice(next_step=Fail(\"CreateCampaignFailed\"))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "solutionversion_choice_state = Choice(\n", " state_id=\"Is the Solution Version ready?\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "solutionversion_succeed_state = Succeed(\n", " state_id=\"The Solution Version ready?\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "solutionversion_choice_state.add_choice(\n", " rule=ChoiceRule.StringEquals(variable=lambda_state_solutionversion_status.output()['Payload']['status'], value='ACTIVE'),\n", " next_step=solutionversion_succeed_state \n", ")\n", "solutionversion_choice_state.add_choice(\n", " ChoiceRule.StringEquals(variable=lambda_state_solutionversion_status.output()['Payload']['status'], value='CREATE PENDING'),\n", " next_step=wait_state_solutionversion\n", ")\n", 
"solutionversion_choice_state.add_choice(\n", "    ChoiceRule.StringEquals(variable=lambda_state_solutionversion_status.output()['Payload']['status'], value='CREATE IN_PROGRESS'),\n", "    next_step=wait_state_solutionversion\n", ")\n", "\n", "solutionversion_choice_state.default_choice(next_step=Fail(\"create_solution_version_failed\"))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "datasetimportjob_succeed_state = Succeed(\n", "    state_id=\"The DataSet Import Job ready?\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "datasetimportjob_choice_state = Choice(\n", "    state_id=\"Is the DataSet Import Job ready?\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "datasetimportjob_choice_state.add_choice(\n", "    rule=ChoiceRule.StringEquals(variable=lambda_state_datasetimportjob_status.output()['Payload']['status'], value='ACTIVE'),\n", "    next_step=datasetimportjob_succeed_state\n", ")\n", "datasetimportjob_choice_state.add_choice(\n", "    ChoiceRule.StringEquals(variable=lambda_state_datasetimportjob_status.output()['Payload']['status'], value='CREATE PENDING'),\n", "    next_step=wait_state_datasetimportjob\n", ")\n", "datasetimportjob_choice_state.add_choice(\n", "    ChoiceRule.StringEquals(variable=lambda_state_datasetimportjob_status.output()['Payload']['status'], value='CREATE IN_PROGRESS'),\n", "    next_step=wait_state_datasetimportjob\n", ")\n", "\n", "datasetimportjob_choice_state.default_choice(next_step=Fail(\"dataset_import_job_failed\"))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "datasetgroupstatus_choice_state = Choice(\n", "    state_id=\"Is the DataSetGroup ready?\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Workflow" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define Workflow\n", "\n", "In the 
following cells, you will define the steps that you will use in the workflow. Then you will create, visualize, and execute the workflow.\n", "\n", "Steps relate to states in AWS Step Functions. For more information, see [States](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-states.html) in the *AWS Step Functions Developer Guide*. For more information on the AWS Step Functions Data Science SDK APIs, see: https://aws-step-functions-data-science-sdk.readthedocs.io.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dataset workflow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Dataset_workflow_definition=Chain([lambda_state_schema,\n", "                                   wait_state_schema,\n", "                                   lambda_state_datasetgroup,\n", "                                   wait_state_datasetgroup,\n", "                                   lambda_state_datasetgroupstatus\n", "                                  ])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Dataset_workflow = Workflow(\n", "    name=\"Dataset-workflow\",\n", "    definition=Dataset_workflow_definition,\n", "    role=workflow_execution_role\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Dataset_workflow.render_graph()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DatasetWorkflowArn = Dataset_workflow.create()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### DatasetImportWorkflow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DatasetImport_workflow_definition=Chain([lambda_state_createdataset,\n", "                                         wait_state_dataset,\n", "                                         lambda_state_datasetimportjob,\n", "                                         wait_state_datasetimportjob,\n", "                                         lambda_state_datasetimportjob_status,\n", "                                         datasetimportjob_choice_state\n", "                                        ])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DatasetImport_workflow = Workflow(\n", "    
name=\"DatasetImport-workflow\",\n", "    definition=DatasetImport_workflow_definition,\n", "    role=workflow_execution_role\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DatasetImport_workflow.render_graph()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DatasetImportflowArn = DatasetImport_workflow.create()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Recipe and Solution workflow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Create_receipe_sol_workflow_definition=Chain([lambda_state_select_receipe_create_solution,\n", "                                              wait_state_receipe,\n", "                                              lambda_create_solution_version,\n", "                                              wait_state_solutionversion,\n", "                                              lambda_state_solutionversion_status,\n", "                                              solutionversion_choice_state\n", "                                             ])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Create_receipe_sol_workflow = Workflow(\n", "    name=\"Create_receipe_sol-workflow\",\n", "    definition=Create_receipe_sol_workflow_definition,\n", "    role=workflow_execution_role\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Create_receipe_sol_workflow.render_graph()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "CreateReceipeArn = Create_receipe_sol_workflow.create()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create Campaign workflow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Create_Campaign_workflow_definition=Chain([lambda_create_campaign,\n", "                                           wait_state_campaign,\n", "                                           lambda_state_campaign_status,\n", "                                           wait_state_datasetimportjob,\n", "                                           create_campaign_choice_state\n", "                                          ])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Campaign_workflow = Workflow(\n", "    
name=\"Campaign-workflow\",\n", " definition=Create_Campaign_workflow_definition,\n", " role=workflow_execution_role\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Campaign_workflow.render_graph()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "CreateCampaignArn = Campaign_workflow.create()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Main workflow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "call_dataset_workflow_state = Task(\n", " state_id=\"DataSetWorkflow\",\n", " resource=\"arn:aws:states:::states:startExecution.sync:2\",\n", " parameters={\n", " \"Input\": \"true\",\n", " #\"StateMachineArn\": \"arn:aws:states:us-east-1:444602785259:stateMachine:Dataset-workflow\",\n", " \"StateMachineArn\": DatasetWorkflowArn\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "call_datasetImport_workflow_state = Task(\n", " state_id=\"DataSetImportWorkflow\",\n", " resource=\"arn:aws:states:::states:startExecution.sync:2\",\n", " parameters={\n", " \"Input\":{\n", " \"schemaArn.$\": \"$.Output.Payload.schemaArn\",\n", " \"datasetGroupArn.$\": \"$.Output.Payload.datasetGroupArn\"\n", " },\n", " \"StateMachineArn\": DatasetImportflowArn,\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "call_receipe_solution_workflow_state = Task(\n", " state_id=\"ReceipeSolutionWorkflow\",\n", " resource=\"arn:aws:states:::states:startExecution.sync:2\",\n", " parameters={\n", " \"Input\":{\n", " \"datasetGroupArn.$\": \"$.Output.Payload.datasetGroupArn\"\n", "\n", " },\n", " \"StateMachineArn\": CreateReceipeArn\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "call_campaign_solution_workflow_state = Task(\n", " 
state_id=\"CampaignWorkflow\",\n", " resource=\"arn:aws:states:::states:startExecution.sync:2\",\n", " parameters={\n", " \"Input\":{\n", " \"solution_version_arn.$\": \"$.Output.Payload.solution_version_arn\"\n", "\n", " },\n", " \"StateMachineArn\": CreateCampaignArn\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Main_workflow_definition=Chain([call_dataset_workflow_state,\n", " call_datasetImport_workflow_state,\n", " call_receipe_solution_workflow_state,\n", " call_campaign_solution_workflow_state\n", " ])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Main_workflow = Workflow(\n", " name=\"Main-workflow\",\n", " definition=Main_workflow_definition,\n", " role=workflow_execution_role\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Main_workflow.render_graph()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Main_workflow.create()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Main_workflow_execution = Main_workflow.execute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Main_workflow_execution = Workflow(\n", " name=\"Campaign_Workflow\",\n", " definition=path1,\n", " role=workflow_execution_role\n", ")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#Main_workflow_execution.render_graph()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create and execute the workflow\n", "\n", "In the next cells, we will create the branching happy workflow in AWS Step Functions with [create](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.create) and execute it 
with [execute](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.execute).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#personalize_workflow.create()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#personalize_workflow_execution = happy_workflow.execute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Review the workflow progress\n", "\n", "Review the workflow progress with the [render_progress](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.render_progress).\n", "\n", "Review the execution history by calling [list_events](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.list_events) to list all events in the workflow execution." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Main_workflow_execution.render_progress()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Main_workflow_execution.list_events(html=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generate-Recommendations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Now that we have a successful campaign, let's generate recommendations for the campaign" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Select a User and an Item" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "items = pd.read_csv('./ml-100k/u.item', sep='|', usecols=[0,1], encoding='latin-1')\n", "items.columns = ['ITEM_ID', 'TITLE']\n", "\n", "\n", "user_id, item_id, rating, timestamp = data.sample().values[0]\n", "\n", "user_id = int(user_id)\n", "item_id = int(item_id)\n", "\n", "print(\"user_id\",user_id)\n", 
"item_title = items.loc[items['ITEM_ID'] == item_id].values[0][-1]\n", "print(\"USER: {}\".format(user_id))\n", "print(\"ITEM: {}\".format(item_title))\n", "print(\"ITEM ID: {}\".format(item_id))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "wait_recommendations = Wait(\n", " state_id=\"Wait for recommendations - 10 secs\",\n", " seconds=10\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Lambda Task" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_state_get_recommendations = LambdaStep(\n", " state_id=\"get recommendations\",\n", " parameters={ \n", " \"FunctionName\": \"stepfunction_getRecommendations\", \n", " \"Payload\": { \n", " \"campaign_arn\": '',  # replace with the ARN of the campaign created by the workflow above\n", " \"user_id\": user_id, \n", " \"item_id\": item_id \n", " }\n", " },\n", " result_path='$'\n", ")\n", "\n", "lambda_state_get_recommendations.add_retry(Retry(\n", " error_equals=[\"States.TaskFailed\"],\n", " interval_seconds=5,\n", " max_attempts=1,\n", " backoff_rate=4.0\n", "))\n", "\n", "lambda_state_get_recommendations.add_catch(Catch(\n", " error_equals=[\"States.TaskFailed\"],\n", " next_step=Fail(\"GetRecommendationTaskFailed\")\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Create a Succeed State" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "workflow_complete = Succeed(\"WorkflowComplete\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "recommendation_path = Chain([ \n", "lambda_state_get_recommendations,\n", "wait_recommendations,\n", "workflow_complete\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define, Create, Render, and Execute Recommendation 
Workflow\n", "\n", "In the next cells, we will create a workflow in AWS Step Functions with [create](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.create) and execute it with [execute](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.execute)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "recommendation_workflow = Workflow(\n", " name=\"Recommendation-workflow\",\n", " definition=recommendation_path,\n", " role=workflow_execution_role\n", ")\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "recommendation_workflow.render_graph()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "recommendation_workflow.create()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "recommendation_workflow_execution = recommendation_workflow.execute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Review progress\n", "\n", "Review workflow progress with the [render_progress](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.render_progress) method.\n", "\n", "Review execution history by calling [list_events](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.list_events) to list all events in the workflow execution." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "recommendation_workflow_execution.render_progress()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "recommendation_workflow_execution.list_events(html=True)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get Recommendations" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "item_list = recommendation_workflow_execution.get_output()['Payload']['item_list']\n", "\n", "print(\"Recommendations:\")\n", "for item in item_list:\n", "    item_title = items.loc[items['ITEM_ID'] == int(item['itemId'])].values[0][-1]\n", "    print(item_title)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Clean up Amazon Personalize resources" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make sure to clean up the Amazon Personalize resources and the state machines created in this blog. Log in to the Amazon Personalize console and delete resources such as dataset groups, datasets, solutions, recipes, and campaigns. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Clean up State Machine resources" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Campaign_workflow.delete()\n", "\n", "recommendation_workflow.delete()\n", "\n", "Main_workflow.delete()\n", "\n", "Create_receipe_sol_workflow.delete()\n", "\n", "DatasetImport_workflow.delete()\n", "\n", "Dataset_workflow.delete()\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }