{ "cells": [ { "cell_type": "markdown", "id": "ac764592", "metadata": {}, "source": [ "# **re:Mars - Anomaly detection workshop** - From deep space to shop floor\n", "\n", "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", " \n", " \n", "

\n", " In this second part of this workshop, we are going to use Amazon Lookout for Equipment. This service analyzes the data from the sensors on your equipment (e.g. pressure in a generator, flow rate of a compressor, revolutions per minute of fans), to automatically train a machine learning model based on just your data, for your equipment – with no machine learning (ML) expertise required. Lookout for Equipment uses your unique ML model to analyze incoming sensor data in real-time and accurately identify early warning signs that could lead to machine failures. This means you can detect equipment abnormalities with speed and precision, quickly diagnose issues, take action to reduce expensive downtime, and reduce false alerts.\n", "

\n", "

\n", "If you're interested about knowing more about this service, you can check out the Time Series Analysis on AWS book, written by one of the authors of this workshop. It contains 6 chapters dedicated to Amazon Lookout for Equipment and will give you solid foundation on how to setup an end-to-end anomaly detection pipeline:\n", "

\n", "
" ] }, { "cell_type": "markdown", "id": "52948ee9", "metadata": {}, "source": [ "# **Initialization**\n", "---" ] }, { "cell_type": "markdown", "id": "0ae59fc4", "metadata": {}, "source": [ "### Notebook configuration update" ] }, { "cell_type": "code", "execution_count": null, "id": "86887316", "metadata": {}, "outputs": [], "source": [ "!pip install --quiet --upgrade tqdm lookoutequipment" ] }, { "cell_type": "markdown", "id": "94f83078", "metadata": {}, "source": [ "### Imports" ] }, { "cell_type": "code", "execution_count": null, "id": "13ca8bb8", "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import config\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import os\n", "import pandas as pd\n", "import sagemaker\n", "import shutil\n", "import sys\n", "import zipfile\n", "\n", "from botocore.client import ClientError\n", "from datetime import datetime\n", "from dateutil.relativedelta import relativedelta\n", "from matplotlib.gridspec import GridSpec\n", "from tqdm import tqdm\n", "\n", "# SDK / toolbox for managing Lookout for Equipment API calls:\n", "import lookoutequipment as lookout" ] }, { "cell_type": "markdown", "id": "a596ffd1", "metadata": {}, "source": [ "### Parameters" ] }, { "cell_type": "code", "execution_count": null, "id": "af6a43e6", "metadata": {}, "outputs": [], "source": [ "# Loading configuration:\n", "PREFIX_TRAINING = config.PREFIX_TRAINING\n", "PREFIX_LABEL = config.PREFIX_LABEL\n", "DATASET_NAME = config.DATASET_NAME\n", "MODEL_NAME = config.MODEL_NAME\n", "EQUIPMENT = config.EQUIPMENT\n", "\n", "BUCKET = sagemaker.Session().default_bucket()\n", "ROLE_ARN = sagemaker.get_execution_role()\n", "RAW_DATA = os.path.join('..', 'dataset')\n", "TMP_DATA = os.path.join('..', 'data', 'interim')\n", "PROCESSED_DATA = os.path.join('..', 'data', 'processed')\n", "LABEL_DATA = os.path.join(PROCESSED_DATA, 'label-data')\n", "TRAIN_DATA = os.path.join(PROCESSED_DATA, 'training-data')\n", "INFERENCE_DATA = os.path.join(PROCESSED_DATA, 'inference-data')\n", "\n", "os.makedirs(TMP_DATA, exist_ok=True)\n", "os.makedirs(RAW_DATA, exist_ok=True)\n", "os.makedirs(PROCESSED_DATA, exist_ok=True)\n", "os.makedirs(LABEL_DATA, exist_ok=True)\n", "os.makedirs(TRAIN_DATA, exist_ok=True)\n", "os.makedirs(INFERENCE_DATA, exist_ok=True)\n", "\n", "# AWS Look & Feel definition for Matplotlib\n", "plt.style.use('../utils/aws_matplotlib_template.py')\n", "prop_cycle = plt.rcParams['axes.prop_cycle']\n", "colors = prop_cycle.by_key()['color']\n", "\n", "%matplotlib inline\n", "BUCKET" ] }, { "cell_type": "markdown", "id": "5af1bb0a", "metadata": {}, "source": [ "# **Dataset overview**\n", "---\n", "For this workshop, we are going to switch from space data to a multivariate water pump dataset. This dataset contains 50 sensors (flow rates, pressures, temperatures...) and several system failures are known. Let's first load and visualize the content of this dataset..." 
] }, { "cell_type": "markdown", "id": "8e421dca", "metadata": {}, "source": [ "### **Loading dataset**" ] }, { "cell_type": "code", "execution_count": null, "id": "94ca02f3", "metadata": {}, "outputs": [], "source": [ "ARCHIVE_PATH = os.path.join(RAW_DATA, 'sensors.zip')\n", "DEST_PATH = os.path.join(TMP_DATA, 'sensors.csv')\n", "DEST_DIR = os.path.dirname(DEST_PATH)\n", "\n", "print(\"Extracting data archive\")\n", "zip_ref = zipfile.ZipFile(ARCHIVE_PATH, 'r')\n", "zip_ref.extractall(DEST_DIR + '/')\n", "zip_ref.close()\n", "\n", "print(\"Loading known labels\")\n", "_ = shutil.copy(src=os.path.join(RAW_DATA, 'status.csv'), dst=os.path.join(TMP_DATA, 'status.csv'))" ] }, { "cell_type": "code", "execution_count": null, "id": "2858bc37", "metadata": {}, "outputs": [], "source": [ "pump_df = pd.read_csv(DEST_PATH)\n", "pump_df['Timestamp'] = pd.to_datetime(pump_df['Timestamp'], format='%Y-%m-%d %H:%M:%S')\n", "pump_df = pump_df.set_index('Timestamp')\n", "\n", "status_df = pd.read_csv(os.path.join(TMP_DATA, 'status.csv'))\n", "status_df['Timestamp'] = pd.to_datetime(status_df['Timestamp'])\n", "status_df = status_df.set_index('Timestamp')\n", "\n", "pump_df['machine_status'] = status_df['machine_status']\n", "del status_df\n", "\n", "pump_df.head()" ] }, { "cell_type": "markdown", "id": "05635570", "metadata": {}, "source": [ "### **Dataset visualization**" ] }, { "cell_type": "code", "execution_count": null, "id": "ce2db5fd", "metadata": {}, "outputs": [], "source": [ "plt.rcParams['hatch.linewidth'] = 0.5\n", "plt.rcParams['lines.linewidth'] = 0.5\n", "\n", "fig = plt.figure(figsize=(24,4))\n", "ax1 = fig.add_subplot(1,1,1)\n", "ax1.set_yticks([])\n", "plot1 = ax1.plot(pump_df['sensor_00'], label='sensor_00', alpha=0.7)\n", "ax1 = ax1.twinx()\n", "ax1.grid(False)\n", "ax1.set_yticks([])\n", "plot2 = ax1.plot(pump_df['sensor_34'], label='sensor_34', color=colors[1], alpha=0.7)\n", "\n", "ax2 = ax1.twinx()\n", "plot3 = ax2.fill_between(\n", " x=pump_df.index, y1=0.0, y2=pump_df['machine_status'], \n", " color=colors[5], linewidth=0.0, edgecolor='#000000', alpha=0.5, hatch=\"//////\", \n", " label='Broken pump'\n", ")\n", "ax2.grid(False)\n", "ax2.set_yticks([])\n", "\n", "labels = [plot1[0].get_label(), plot2[0].get_label(), plot3.get_label()]\n", "plt.legend(handles=[plot1[0], plot2[0], plot3], labels=labels, loc='lower center', ncol=3, bbox_to_anchor=(0.5, -.3))\n", "plt.title('Industrial pump sensor data')\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "508b2afd", "metadata": {}, "source": [ "On the preceding plot, you can see a couple signals (`sensor_00` in dark gray and `sensor_34` in light blue). Feel free to update the previous cell and rerun it to visualize other signals. The areas highlighted in red are time periods where system failures are known to happen.\n", "\n", "Use the following cells to get a better view from the sensors available in this dataset:" ] }, { "cell_type": "code", "execution_count": null, "id": "2247cb7d", "metadata": {}, "outputs": [], "source": [ "for index, f in enumerate(list(pump_df.columns)[:3]):\n", " fig = plt.figure(figsize=(24,3))\n", " ax1 = fig.add_subplot(1,1,1)\n", " ax1.plot(pump_df[f], color=colors[index % len(colors)])\n", " ax1.set_title(f)\n", " \n", "plt.show()" ] }, { "cell_type": "markdown", "id": "c394c984", "metadata": {}, "source": [ "### **Preparing time series data**\n", "We are going to generate a single CSV file for each sensor and put it into its own folder. 
Although Amazon Lookout for Equipment can detect your data structure by itself, this approach can be useful when you don't want to align the timestamps for every sensor (which you would have to do if you wanted to build a single CSV file containing all your sensors). In this case, the data is already nicely formatted, but this is a worthy trick that can save you a lot of data preparation hassle:" ] }, { "cell_type": "code", "execution_count": null, "id": "8e09f40a", "metadata": {}, "outputs": [], "source": [ "features = list(pump_df.columns)[:-1]\n", "\n", "for tag in tqdm(features):\n", "    os.makedirs(os.path.join(TRAIN_DATA, EQUIPMENT, tag), exist_ok=True)\n", "    fname = os.path.join(TRAIN_DATA, EQUIPMENT, tag, 'tag_data.csv')\n", "    tag_df = pump_df[[tag]]\n", "    tag_df.to_csv(fname)\n", "\n", "# Let's now upload our training data to Amazon S3, so that\n", "# Lookout for Equipment can access it to train and evaluate a model:\n", "train_s3_path = f's3://{BUCKET}/{PREFIX_TRAINING}{EQUIPMENT}/'\n", "!aws s3 cp --recursive $TRAIN_DATA/water-pump $train_s3_path" ] }, { "cell_type": "markdown", "id": "e76c5b46", "metadata": {}, "source": [ "# **Data ingestion**\n", "---" ] }, { "cell_type": "markdown", "id": "17b5e6f3", "metadata": {}, "source": [ "Let's double-check the values of all the parameters that will be used to ingest the data into our Lookout for Equipment dataset:" ] }, { "cell_type": "code", "execution_count": null, "id": "5c895e39", "metadata": {}, "outputs": [], "source": [ "ROLE_ARN, BUCKET, PREFIX_TRAINING + EQUIPMENT + '/', DATASET_NAME" ] }, { "cell_type": "code", "execution_count": null, "id": "78a0c4b0", "metadata": {}, "outputs": [], "source": [ "lookout_dataset = lookout.LookoutEquipmentDataset(\n", "    dataset_name=DATASET_NAME,\n", "    component_root_dir=f'{TRAIN_DATA}/{EQUIPMENT}',\n", "    access_role_arn=ROLE_ARN\n", ")\n", "lookout_dataset.create()\n", "response = lookout_dataset.ingest_data(BUCKET, PREFIX_TRAINING + EQUIPMENT + '/')" ] }, { "cell_type": "markdown", "id": "21f23f35", "metadata": {}, "source": [ "We use the following cell to monitor the ingestion process, which should take between **10 and 15 minutes** given the amount of data we have to ingest (~350 MB). During ingestion, Lookout for Equipment prepares the data so that it can be processed by multiple types of algorithms (deep learning, statistical, and traditional machine learning ones). It also grades the signals and produces a data quality report:" ] }, { "cell_type": "code", "execution_count": null, "id": "3f014939", "metadata": {}, "outputs": [], "source": [ "lookout_dataset.poll_data_ingestion(sleep_time=60)" ] }
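, { "cell_type": "markdown", "id": "b3a91c5e", "metadata": {}, "source": [ "You can also inspect the ingestion job programmatically. The following cell is a sketch: it assumes the `response` returned by `ingest_data()` above contains the ingestion `JobId`, and that your boto3 version returns the data quality summary in its response (field names may vary across SDK versions):" ] }, { "cell_type": "code", "execution_count": null, "id": "e4d82f7a", "metadata": {}, "outputs": [], "source": [ "# Sketch: describe the ingestion job through the low-level boto3 API. The\n", "# response includes the job status and the data quality summary computed\n", "# during ingestion (assumes `response` above holds the JobId):\n", "l4e_client = boto3.client('lookoutequipment')\n", "job_desc = l4e_client.describe_data_ingestion_job(JobId=response['JobId'])\n", "job_desc['Status'], job_desc.get('DataQualitySummary')" ] }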
, { "cell_type": "markdown", "id": "7dbf769e", "metadata": {}, "source": [ "We created a **Lookout for Equipment dataset** and ingested the S3 data previously uploaded into this dataset.\n", "\n", "Don't forget to check out the [**Lookout for Equipment console**](https://console.aws.amazon.com/lookoutequipment/home), where you will be able to visualize the data quality report for your ingested data.\n", "\n", "**Let's now train a model based on this data.**" ] }, { "cell_type": "markdown", "id": "8d34f45b", "metadata": {}, "source": [ "# **Model training**\n", "---\n", "When training our model, we are going to define a train/evaluation split. Run the following cell to use the first 7 months of data (April through October 2018) for training purposes and the remaining data (~1.5 years) for evaluation (backtesting):" ] }, { "cell_type": "code", "execution_count": null, "id": "93ab8c6d", "metadata": {}, "outputs": [], "source": [ "# Configuring time ranges:\n", "training_start = pd.to_datetime('2018-04-01 00:00:00')\n", "training_end = pd.to_datetime('2018-10-31 23:59:00')\n", "evaluation_start = pd.to_datetime('2018-11-01 00:00:00')\n", "evaluation_end = pd.to_datetime('2020-04-30 23:59:00')\n", "\n", "print(f' Training period | from {training_start} to {training_end}')\n", "print(f'Evaluation period | from {evaluation_start} to {evaluation_end}')\n", "\n", "print(training_end - training_start)\n", "print(evaluation_end - evaluation_start)" ] }, { "cell_type": "code", "execution_count": null, "id": "aec7011a", "metadata": {}, "outputs": [], "source": [ "lookout_model = lookout.LookoutEquipmentModel(model_name=MODEL_NAME, dataset_name=DATASET_NAME)\n", "lookout_model.set_time_periods(evaluation_start, evaluation_end, training_start, training_end)\n", "lookout_model.set_target_sampling_rate(sampling_rate='PT5M')\n", "lookout_model.train()" ] }, { "cell_type": "markdown", "id": "ea516e5a", "metadata": {}, "source": [ "The following method encapsulates a call to the [**DescribeModel**](https://docs.aws.amazon.com/lookout-for-equipment/latest/ug/API_DescribeModel.html) API and tracks the training progress by looking at the `Status` field retrieved from this call. This training should take around 25-30 minutes to run:" ] }, { "cell_type": "code", "execution_count": null, "id": "20f9c231", "metadata": {}, "outputs": [], "source": [ "lookout_model.poll_model_training(sleep_time=60)" ] }, { "cell_type": "markdown", "id": "2f3519da", "metadata": {}, "source": [ "In this section, we used the dataset created earlier and trained an Amazon Lookout for Equipment model.\n", "\n", "From here, we will **extract the evaluation data** for this model and use it to perform further analysis on the model results." ] }, { "cell_type": "markdown", "id": "42b87267", "metadata": {}, "source": [ "# **Model evaluation**\n", "---" ] }, { "cell_type": "markdown", "id": "4382bd87", "metadata": {}, "source": [ "After the model is trained, we can extract the backtesting results and visualize the anomalies detected by Lookout for Equipment over the evaluation period. 
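The extraction below returns a `predicted_ranges` DataFrame: each detected event comes with a `start` and `end` timestamp plus a `diagnostics` payload listing how much each signal contributed to the event, along these lines (illustrative values only):\n\n```\n[{'name': 'sensor_00\\sensor_00', 'value': 0.08},\n {'name': 'sensor_01\\sensor_01', 'value': 0.05},\n ...]\n```\n\n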
Although evaluating your model is optional (you don't need to do this to deploy and use the model), this section will give you some pointers on how to post-process and visualize the data provided by Amazon Lookout for Equipment:" ] }, { "cell_type": "code", "execution_count": null, "id": "ce297e51", "metadata": {}, "outputs": [], "source": [ "LookoutDiagnostics = lookout.LookoutEquipmentAnalysis(model_name=MODEL_NAME, tags_df=pump_df)\n", "LookoutDiagnostics.set_time_periods(evaluation_start, evaluation_end, training_start, training_end)\n", "predicted_ranges = LookoutDiagnostics.get_predictions()" ] }, { "cell_type": "code", "execution_count": null, "id": "d69ded4f", "metadata": {}, "outputs": [], "source": [ "tags_list = list(pump_df.columns)[:-1]\n", "custom_colors = {'labels': colors[9], 'predictions': colors[5]}\n", " \n", "TSViz = lookout.plot.TimeSeriesVisualization(\n", " timeseries_df=pump_df,\n", " data_format='tabular'\n", ")\n", "TSViz.add_signal([tags_list[0]])\n", "TSViz.add_predictions([predicted_ranges])\n", "TSViz.add_train_test_split(evaluation_start)\n", "fig, axis = TSViz.plot(fig_width=24, colors=custom_colors)" ] }, { "cell_type": "markdown", "id": "e5abca3a", "metadata": {}, "source": [ "# **Helper functions**\n", "---\n", "The following functions are used to build the post-processing visualizations:" ] }, { "cell_type": "code", "execution_count": null, "id": "9513d984", "metadata": {}, "outputs": [], "source": [ "def plot_detected_events(group_id, start, end, signal):\n", " custom_colors = {\n", " 'labels': colors[9],\n", " 'predictions': colors[5]\n", " }\n", " \n", " TSViz = lookout.plot.TimeSeriesVisualization(\n", " timeseries_df=pump_df.loc[start:end, :],\n", " data_format='tabular'\n", " )\n", " TSViz.add_signal([signal])\n", " TSViz.add_predictions([predicted_ranges])\n", " TSViz.legend_format = {\n", " 'loc': 'upper right',\n", " 'framealpha': 0.4,\n", " 'ncol': 3\n", " }\n", " fig, axis = TSViz.plot(fig_width=24, colors=custom_colors)\n", "\n", " for ax in axis:\n", " ax.set_xlim((start, end))\n", "\n", "def plot_signal_importance_evolution(group_id, plot_start, plot_end, signal):\n", " fig = plt.figure(figsize=(24,10))\n", " gs = GridSpec(nrows=4, ncols=1, height_ratios=[0.5, 0.3, 0.1, 0.6])\n", " df = expanded_results_v3.loc[plot_start:plot_end, :].copy()\n", "\n", " ax0 = fig.add_subplot(gs[0])\n", " ax0.plot(pump_df.loc[plot_start:plot_end, signal], color=colors[9], linewidth=1.0, label=signal)\n", " ax0.legend(loc='upper right', fontsize=12)\n", " ax0.set_xlim((plot_start, evaluation_end))\n", "\n", " ax1 = fig.add_subplot(gs[1])\n", " ax1.plot(predictions_df.rolling(60*24).sum(), label='Number of daily\\nevent detected')\n", " ax1.legend(loc='upper left', fontsize=12)\n", " ax1.set_xlim((plot_start, evaluation_end))\n", "\n", " ax3 = fig.add_subplot(gs[2])\n", " plot_ranges(predictions_df, 'Detected events', colors[5], ax3)\n", " ax3.set_xlim((plot_start, evaluation_end))\n", "\n", " bar_width = 1.0\n", " num_top_signals = 5\n", " ax4 = fig.add_subplot(gs[3])\n", " bottom_values = np.zeros((len(df.index),))\n", " current_tags_list = list(df.sum().sort_values(ascending=False).head(num_top_signals).index)\n", " for tag in current_tags_list:\n", " plt.bar(x=df.index, height=df[tag], bottom=bottom_values, alpha=0.8, width=bar_width, label=tag.split('\\\\')[0])\n", " bottom_values += df[tag].values\n", "\n", " all_other_tags = [t for t in df.columns if t not in current_tags_list]\n", " all_other_tags_contrib = df[all_other_tags].sum(axis='columns')\n", " 
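# Lump every remaining signal into a single gray band so the top contributors stand out:\n", "    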
plt.bar(x=df.index, height=all_other_tags_contrib, bottom=bottom_values, alpha=0.8, width=bar_width, label=f'All the others\\n({len(all_other_tags)} signals)', color='#CCCCCC')\n", "\n", " ax4.legend(loc='lower center', ncol=4, bbox_to_anchor=(0.5, -0.40))\n", " ax4.set_xlabel('Signal importance evolution', fontsize=12)\n", " ax4.set_xlim((plot_start, evaluation_end))\n", "\n", " plt.show()\n", " \n", " return current_tags_list\n", "\n", "def plot_top_signals_time_series(group_id, plot_start, plot_end, top_tags_list):\n", " fig = plt.figure(figsize=(24,6 * len(top_tags_list)))\n", " gs = GridSpec(nrows=2 * len(top_tags_list), ncols=1, height_ratios=[1.0, 0.2] *len(top_tags_list))\n", "\n", " for index, signal in enumerate(top_tags_list):\n", " ax = fig.add_subplot(gs[index*2])\n", " ax.plot(pump_df.loc[plot_start:plot_end, signal], color=colors[index * 2], linewidth=1.0)\n", " ax.set_title(f'Signal: {signal}')\n", "\n", " ax = fig.add_subplot(gs[index*2 + 1])\n", " plot_ranges(predictions_df, '', colors[5], ax)\n", " ax.set_xlim((plot_start, plot_end))\n", "\n", " plt.show()\n", "\n", "def plot_histograms(group_id):\n", " fig = TSViz.plot_histograms(freq='60min', fig_width=24, start=training_start, end=evaluation_end, top_n=4)\n", "\n", "def plot_ranges(range_df, range_title, color, ax):\n", " ax.plot(range_df['Label'], color=color)\n", " ax.fill_between(range_df.index, \n", " y1=range_df['Label'], \n", " y2=0, \n", " alpha=0.1, \n", " color=color)\n", " ax.axes.get_xaxis().set_ticks([])\n", " ax.axes.get_yaxis().set_ticks([])\n", " ax.set_xlabel(range_title, fontsize=12)" ] }, { "cell_type": "markdown", "id": "5c7be7be", "metadata": {}, "source": [ "# **Extracting additional insights**\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "9760f925", "metadata": {}, "outputs": [], "source": [ "predicted_ranges" ] }, { "cell_type": "code", "execution_count": null, "id": "a48a1061", "metadata": {}, "outputs": [], "source": [ "predicted_ranges['duration'] = pd.to_datetime(predicted_ranges['end']) - pd.to_datetime(predicted_ranges['start'])\n", "predicted_ranges['duration'] = predicted_ranges['duration'].dt.total_seconds() / 3600\n", "predictions_df = TSViz._convert_ranges(predicted_ranges, default_freq='1min')\n", "\n", "expanded_results = []\n", "for index, row in predicted_ranges.iterrows():\n", " new_row = dict()\n", " new_row.update({'start': row['start']})\n", " new_row.update({'end': row['end']})\n", " new_row.update({'prediction': 1.0})\n", " \n", " diagnostics = pd.DataFrame(row['diagnostics'])\n", " diagnostics = dict(zip(diagnostics['name'], diagnostics['value']))\n", " new_row = {**new_row, **diagnostics}\n", " \n", " expanded_results.append(new_row)\n", " \n", "expanded_results = pd.DataFrame(expanded_results)\n", "\n", "df_list = []\n", "for index, row in expanded_results.iterrows():\n", " new_index = pd.date_range(start=row['start'], end=row['end'], freq='5T')\n", " new_df = pd.DataFrame(index=new_index)\n", " \n", " for tag in tags_list:\n", " new_df[tag] = row[f'{tag}\\\\{tag}']\n", " \n", " df_list.append(new_df)\n", " \n", "expanded_results_v2 = pd.concat(df_list, axis='index')\n", "expanded_results_v2 = expanded_results_v2.reindex(predictions_df.index)\n", "\n", "freq = '1D'\n", "expanded_results_v3 = expanded_results_v2.resample(freq).mean()\n", "expanded_results_v3 = expanded_results_v3.replace(to_replace=np.nan, value=0.0)" ] }, { "cell_type": "code", "execution_count": null, "id": "c66cd274", "metadata": {}, "outputs": [], "source": [ "plot_start = 
evaluation_start\n", "plot_end = evaluation_end" ] }, { "cell_type": "code", "execution_count": null, "id": "aef3e858", "metadata": {}, "outputs": [], "source": [ "plot_detected_events('event_group', plot_start, plot_end, tags_list[0])" ] }, { "cell_type": "code", "execution_count": null, "id": "45beada6", "metadata": {}, "outputs": [], "source": [ "top_tags_list = plot_signal_importance_evolution('event_group', plot_start, plot_end, tags_list[0])" ] }, { "cell_type": "code", "execution_count": null, "id": "78724f30", "metadata": {}, "outputs": [], "source": [ "plot_top_signals_time_series('event_group', plot_start, plot_end, top_tags_list)" ] }, { "cell_type": "code", "execution_count": null, "id": "db700aef", "metadata": {}, "outputs": [], "source": [ "plot_histograms('event_group')" ] }, { "cell_type": "markdown", "id": "f816459d", "metadata": {}, "source": [ "# **Conclusion and Call to Action**\n", "---" ] }, { "cell_type": "markdown", "id": "6f993ff0", "metadata": {}, "source": [ "In this second part, you learned how to use a managed service, Amazon Lookout for Equipment, to train an anomaly detection model on a multivariate water pump time series dataset.\n", "\n", "You also learned how to further post-process and interpret the raw results from such anomaly detection models.\n", "\n", "As a next step, we recommend that you add the known failure periods and retrain a new model to see the impact on the forewarning time. Note that training time will be closer to 1 hour in this case.\n", "* If you want to know how to point to a label file when training a model, check out the [**SDK Documentation**](https://amazon-lookout-for-equipment-sdk.readthedocs.io/en/latest/generated/src.lookoutequipment.model.LookoutEquipmentModel.html#src.lookoutequipment.model.LookoutEquipmentModel); a sketch is also included after this list.\n", "* Use the following cell to prepare the label data accordingly: it compresses the machine status column (one row per timestamp) into a shorter table where each row can be associated with a time range."
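, "\n", "Once the label file below is generated and uploaded to S3, pointing a new model at it could look like the following sketch (this assumes the toolbox exposes a `set_label_data` method as described in its documentation; double-check the exact signature there):\n", "\n", "```python\n", "# Hypothetical sketch: upload the labels, then train a new label-aware model\n", "label_s3_path = f's3://{BUCKET}/{PREFIX_LABEL}{EQUIPMENT}/labels.csv'\n", "!aws s3 cp $labels_fname $label_s3_path\n", "\n", "labeled_model = lookout.LookoutEquipmentModel(model_name=f'{MODEL_NAME}-labeled', dataset_name=DATASET_NAME)\n", "labeled_model.set_label_data(bucket=BUCKET, prefix=f'{PREFIX_LABEL}{EQUIPMENT}/', access_role_arn=ROLE_ARN)\n", "labeled_model.set_time_periods(evaluation_start, evaluation_end, training_start, training_end)\n", "labeled_model.set_target_sampling_rate(sampling_rate='PT5M')\n", "labeled_model.train()\n", "```"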
] }, { "cell_type": "code", "execution_count": null, "id": "485a2ece", "metadata": {}, "outputs": [], "source": [ "range_df = pump_df[['machine_status']].copy()\n", "range_df['BROKEN'] = False\n", "range_df.loc[range_df['machine_status'] == 1.0, 'BROKEN'] = True\n", "\n", "range_df['Next Status'] = range_df['BROKEN'].shift(-1)\n", "range_df['Start Range'] = (range_df['BROKEN'] == False) & (range_df['Next Status'] == True)\n", "range_df['End Range'] = (range_df['BROKEN'] == True) & (range_df['Next Status'] == False)\n", "range_df.iloc[0,3] = range_df.iloc[0,1]\n", "range_df = range_df[(range_df['Start Range'] == True) | (range_df['End Range'] == True)]\n", "\n", "labels_df = pd.DataFrame(columns=['start', 'end'])\n", "for index, row in range_df.iterrows():\n", " if row['Start Range']:\n", " start = index\n", "\n", " if row['End Range']:\n", " end = index\n", " labels_df = labels_df.append({\n", " 'start': start + relativedelta(hours=-12),\n", " 'end': end + relativedelta(hours=+12)\n", " }, ignore_index=True)\n", " \n", "labels_df['start'] = pd.to_datetime(labels_df['start'])\n", "labels_df['end'] = pd.to_datetime(labels_df['end'])\n", "labels_df['start'] = labels_df['start'].dt.strftime('%Y-%m-%dT%H:%M:%S.%f')\n", "labels_df['end'] = labels_df['end'].dt.strftime('%Y-%m-%dT%H:%M:%S.%f')\n", "\n", "labels_fname = os.path.join(LABEL_DATA, 'labels.csv')\n", "labels_df.to_csv(labels_fname, header=None, index=None)\n", " \n", "labels_df" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" }, "toc-autonumbering": false, "toc-showcode": false, "toc-showtags": false }, "nbformat": 4, "nbformat_minor": 5 }