{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# **Amazon Lookout for Equipment** - Demonstration on an anonymized expander dataset\n", "*Part 2: Dataset creation*\n", "\n", "**Change bucket name here:** this Notebook should have a role that allows access to your S3 bucket to the Lookout for Equipment service. At this stage, it needs to read data from this S3 location to ingest the data. It will need to write data in this bucket at the inference scheduling phase.\n", "\n", "**Note:** If you haven't created an IAM role for Amazon Lookout for Equipment, first please follow these [**set of instructions to create an IAM role**](https://github.com/dast1/l4e_iam_role_configuration/blob/main/configure_IAM_role.md)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "BUCKET = ''\n", "PREFIX = 'data/training-data/expander/'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initialization\n", "---\n", "Following the data preparation notebook, this repository should now be structured as follow:\n", "```\n", "/lookout-equipment-demo\n", "|\n", "+-- data/\n", "| |\n", "| +-- labelled-data/\n", "| | \\-- labels.csv\n", "| |\n", "| \\-- training-data/\n", "| \\-- expander/\n", "| |-- subsystem-01\n", "| | \\-- subsystem-01.csv\n", "| |\n", "| |-- subsystem-02\n", "| | \\-- subsystem-02.csv\n", "| |\n", "| |-- ...\n", "| |\n", "| \\-- subsystem-24\n", "| \\-- subsystem-24.csv\n", "|\n", "+-- dataset/\n", "| |-- labels.csv\n", "| |-- tags_description.csv\n", "| |-- timeranges.txt\n", "| \\-- timeseries.zip\n", "|\n", "+-- notebooks/\n", "| |-- 1_data_preparation.ipynb\n", "| |-- 2_dataset_creation.ipynb <<< This notebook <<<\n", "| |-- 3_model_training.ipynb\n", "| |-- 4_model_evaluation.ipynb\n", "| \\-- 5_inference_scheduling.ipynb\n", "|\n", "+-- utils/\n", " |-- lookout_equipment_utils.py\n", " \\-- lookoutequipment.json\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Notebook configuration update\n", "Amazon Lookout for Equipment being a very recent service, we need to make sure that we have access to the latest version of the AWS Python packages. 
"If you see a `pip` dependency error, check that the `boto3` version is OK: if it's at least 1.17.48 (the first version that includes the `lookoutequipment` API), you can safely ignore this error and move forward with the next cell:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "boto3 version: 1.17.53 (should be >= 1.17.48 to include Lookout for Equipment API)\n" ] }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "!pip install --quiet --upgrade boto3 tqdm sagemaker\n", "\n", "import boto3\n", "print(f'boto3 version: {boto3.__version__} (should be >= 1.17.48 to include Lookout for Equipment API)')\n", "\n", "# Restart the current notebook kernel to take the previous updates into account\n", "# (the restart script below is assumed, reconstructed from the comment above):\n", "from IPython.core.display import HTML\n", "HTML(\"<script>Jupyter.notebook.kernel.restart()</script>\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Imports" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import os\n", "import pandas as pd\n", "import pprint\n", "import sagemaker\n", "import sys\n", "import time\n", "import warnings\n", "\n", "from datetime import datetime\n", "\n", "# Helper functions for managing Lookout for Equipment API calls:\n", "sys.path.append('../utils')\n", "import lookout_equipment_utils as lookout" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Parameters" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "warnings.filterwarnings('ignore')\n", "\n", "DATA = os.path.join('..', 'data')\n", "LABEL_DATA = os.path.join(DATA, 'labelled-data')\n", "TRAIN_DATA = os.path.join(DATA, 'training-data', 'expander')\n", "\n", "ROLE_ARN = sagemaker.get_execution_role()\n", "REGION_NAME = boto3.session.Session().region_name\n", "DATASET_NAME = 'lookout-demo-training-dataset'" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# List the directories from the training data\n", "# directory: each directory corresponds to a subsystem:\n", "components = []\n", "for root, dirs, files in os.walk(f'{TRAIN_DATA}'):\n", " for subsystem in dirs:\n", " components.append(subsystem)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a dataset\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create data schema" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we need to set up the schema of your dataset. In the cell below, please define `DATASET_COMPONENT_FIELDS_MAP`, a Python dictionary (hashmap). The key of each entry in the dictionary is the `Component` name, and the value of each entry is a list of column names. The column names must exactly match the headers in your CSV files, and their order must match as well.\n",
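"\n", "Since any mismatch between this map and the actual CSV headers will make the ingestion fail, a quick sanity check like the following can help (a minimal sketch, assuming the local folder layout shown in the tree above):\n", "\n", "```python\n", "import os\n", "import pandas as pd\n", "\n", "# Sketch: confirm every subsystem CSV starts with a Timestamp column\n", "# before building the schema map:\n", "TRAIN_DATA = os.path.join('..', 'data', 'training-data', 'expander')\n", "for subsystem in sorted(os.listdir(TRAIN_DATA)):\n", "    fname = os.path.join(TRAIN_DATA, subsystem, f'{subsystem}.csv')\n", "    header = pd.read_csv(fname, nrows=0).columns.tolist()\n", "    assert header[0] == 'Timestamp', f'{subsystem}: first column must be Timestamp'\n", "    print(subsystem, '->', len(header) - 1, 'tags')\n", "```\n",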
"For the dataset we are using here, the dictionary will look like this:\n", "\n", "```python\n", "DATASET_COMPONENT_FIELDS_MAP = {\n", " \"Component1\": ['Timestamp', 'Tag1', 'Tag2',...],\n", " \"Component2\": ['Timestamp', 'Tag1', 'Tag2',...]\n", " ...\n", " \"ComponentN\": ['Timestamp', 'Tag1', 'Tag2',...]\n", "}\n", "```\n", "\n", "Make sure the component name **matches exactly** the name of the folder in S3 (everything is **case sensitive**):\n", "```python\n", "DATASET_COMPONENT_FIELDS_MAP = {\n", " \"subsystem-01\": ['Timestamp', 'signal-026', 'signal-027',... , 'signal-092'],\n", " \"subsystem-02\": ['Timestamp', 'signal-022', 'signal-023',... , 'signal-096'],\n", " ...\n", " \"subsystem-24\": ['Timestamp', 'signal-083'],\n", "}\n", "```" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# Build the schema map from the local CSV headers, one component per subsystem folder:\n", "DATASET_COMPONENT_FIELDS_MAP = dict()\n", "for subsystem in components:\n", " subsystem_tags = ['Timestamp']\n", " for root, _, files in os.walk(f'{TRAIN_DATA}/{subsystem}'):\n", " for file in files:\n", " fname = os.path.join(root, file)\n", " current_subsystem_df = pd.read_csv(fname, nrows=1)\n", " subsystem_tags = subsystem_tags + current_subsystem_df.columns.tolist()[1:]\n", "\n", " DATASET_COMPONENT_FIELDS_MAP.update({subsystem: subsystem_tags})\n", " \n", " \n", "lookout_dataset = lookout.LookoutEquipmentDataset(\n", "    dataset_name=DATASET_NAME,\n", "    component_fields_map=DATASET_COMPONENT_FIELDS_MAP,\n", "    region_name=REGION_NAME,\n", "    access_role_arn=ROLE_ARN\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to use the console, the following string can be used to configure the **dataset schema**:\n", "\n", "![dataset_schema](../assets/dataset-schema.png)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Components': [{'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-067', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-19'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-099', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-18'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-016', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-031', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-032', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-033', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-044', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-045', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-103', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-104', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-105', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-09'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-094', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-14'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-083', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-24'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-084', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-21'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-089', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-15'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-013', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-014', 'Type': 'DOUBLE'},\n", " 
{'Name': 'signal-015', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-017', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-061', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-062', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-082', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-114', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-115', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-122', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-07'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-097', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-22'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-090', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-16'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-026', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-027', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-028', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-029', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-040', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-041', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-055', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-056', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-068', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-069', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-070', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-075', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-086', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-087', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-092', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-01'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-020', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-021', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-036', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-037', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-058', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-073', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-095', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-110', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-116', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-03'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-001', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-002', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-003', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-004', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-046', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-047', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-077', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-081', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-106', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-107', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-05'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-042', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-043', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-051', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-052', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-053', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-054', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-100', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-101', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-102', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-11'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-005', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-006', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-007', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-008', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-048', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-049', 'Type': 'DOUBLE'},\n", " 
{'Name': 'signal-078', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-109', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-120', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-121', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-08'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-080', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-091', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-13'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-098', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-23'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-018', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-019', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-030', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-034', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-035', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-059', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-060', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-066', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-072', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-079', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-111', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-112', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-113', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-04'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-108', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-17'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-022', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-023', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-024', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-025', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-038', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-039', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-057', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-064', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-074', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-085', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-088', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-093', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-096', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-02'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-065', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-12'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-009', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-010', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-011', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-012', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-050', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-063', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-117', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-118', 'Type': 'DOUBLE'},\n", " {'Name': 'signal-119', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-06'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-071', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-10'},\n", " {'Columns': [{'Name': 'Timestamp', 'Type': 'DATETIME'},\n", " {'Name': 'signal-076', 'Type': 'DOUBLE'}],\n", " 'ComponentName': 'subsystem-20'}]}\n" ] } ], "source": [ "import pprint\n", "pp = pprint.PrettyPrinter(depth=5)\n", "pp.pprint(eval(lookout_dataset.dataset_schema))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create the dataset" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Dataset 
\"lookout-demo-training-dataset-v4\" does not exist, creating it...\n", "\n" ] }, { "data": { "text/plain": [ "{'DatasetName': 'lookout-demo-training-dataset-v4',\n", " 'DatasetArn': 'arn:aws:lookoutequipment:eu-west-1:123031033346:dataset/lookout-demo-training-dataset-v4/29dbcdb2-2d9f-4dcc-8922-a06fda7e4b63',\n", " 'Status': 'CREATED',\n", " 'ResponseMetadata': {'RequestId': '10458a57-87c7-400a-873e-02fefe116b12',\n", " 'HTTPStatusCode': 200,\n", " 'HTTPHeaders': {'x-amzn-requestid': '10458a57-87c7-400a-873e-02fefe116b12',\n", " 'content-type': 'application/x-amz-json-1.0',\n", " 'content-length': '210',\n", " 'date': 'Fri, 16 Apr 2021 20:03:23 GMT'},\n", " 'RetryAttempts': 0}}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "lookout_dataset.create()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dataset is now created, but it is empty and ready to receive some timeseries data that we will ingest from the S3 location prepared in the previous notebook:\n", "\n", "![dataset_schema](../assets/dataset-created.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Ingest data into a dataset\n", "---\n", "Let's double check the values of all the parameters that will be used to ingest some data into an existing Lookout for Equipment dataset:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "('arn:aws:iam::123031033346:role/service-role/AmazonSageMaker-ExecutionRole-20210128T070865',\n", " 'sagemaker-lookout-equipment-demo',\n", " 'data4/training-data/expander/',\n", " 'lookout-demo-training-dataset-v4')" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ROLE_ARN, BUCKET, PREFIX, DATASET_NAME" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Launch the ingestion job in the Lookout for Equipment dataset:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "response = lookout_dataset.ingest_data(BUCKET, PREFIX)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ingestion is launched. 
"With this amount of data (around 1.5 GB), the ingestion should take between 5 and 10 minutes:\n", "\n", "![dataset_schema](../assets/dataset-ingestion-in-progress.png)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "=====Polling Data Ingestion Status=====\n", "\n", "2021-04-16 20:03:50 | IN_PROGRESS\n", "2021-04-16 20:04:50 | IN_PROGRESS\n", "2021-04-16 20:05:50 | IN_PROGRESS\n", "2021-04-16 20:06:50 | IN_PROGRESS\n", "2021-04-16 20:07:50 | IN_PROGRESS\n", "2021-04-16 20:08:50 | IN_PROGRESS\n", "2021-04-16 20:09:50 | IN_PROGRESS\n", "2021-04-16 20:10:50 | IN_PROGRESS\n", "2021-04-16 20:11:50 | SUCCESS\n", "\n", "=====End of Polling Data Ingestion Status=====\n" ] } ], "source": [ "# Get the ingestion job ID and status:\n", "data_ingestion_job_id = response['JobId']\n", "data_ingestion_status = response['Status']\n", "\n", "# Wait until ingestion completes:\n", "print(\"=====Polling Data Ingestion Status=====\\n\")\n", "lookout_client = lookout.get_client(region_name=REGION_NAME)\n", "print(str(pd.to_datetime(datetime.now()))[:19], \"| \", data_ingestion_status)\n", "\n", "while data_ingestion_status == 'IN_PROGRESS':\n", " time.sleep(60)\n", " describe_data_ingestion_job_response = lookout_client.describe_data_ingestion_job(JobId=data_ingestion_job_id)\n", " data_ingestion_status = describe_data_ingestion_job_response['Status']\n", " print(str(pd.to_datetime(datetime.now()))[:19], \"| \", data_ingestion_status)\n", " \n", "print(\"\\n=====End of Polling Data Ingestion Status=====\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The ingestion should now be complete, as can be seen in the console:\n", "\n", "![dataset_schema](../assets/dataset-ingestion-done.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusion\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we created a **Lookout for Equipment dataset** and ingested into it the data previously uploaded to S3. **Now move to the next notebook to train a model based on this data.**" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "# Persist the dataset name so the next notebook in this series can pick it up:\n", "dataset_fname = os.path.join(DATA, 'dataset_name.txt')\n", "with open(dataset_fname, 'w') as f:\n", " f.write(DATASET_NAME)" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 4 }