{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Pre-Process AI4I 2020 Predictive Maintenance dataset\n",
"\n",
"#### In this notebook we download, explore and pre-process the dataset from UCI Data Repository, so that it can later used for training a ML model. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dataset Description:\n",
"\n",
"The dataset we use here for predictive maintenance comes from the UCI Data Repository and consists of a synthetic dataset containing machine failures due to features such as air temperature, process temperature, rotation speed,torque and tool wear. Additional details about this dataset can be found here: \n",
"\n",
"https://archive.ics.uci.edu/ml/datasets/AI4I+2020+Predictive+Maintenance+Dataset\n",
"\n",
"The machine failure consists of five independent failure modes\n",
"\n",
"Tool wear failure (TWF): the tool will be replaced of fail at a randomly selected tool wear time between 200 – 240 mins (120 times in our dataset). At this point in time, the tool is replaced 69 times, and fails 51 times (randomly assigned).
\n",
"Heat dissipation failure (HDF): heat dissipation causes a process failure, if the difference between air- and process temperature is below 8.6 K and the tool’s rotational speed is below 1380 rpm. This is the case for 115 data points..
\n",
"Power failure (PWF): the product of torque and rotational speed (in rad/s) equals the power required for the process. If this power is below 3500 W or above 9000 W, the process fails, which is the case 95 times in our dataset..
\n",
"Overstrain failure (OSF): if the product of tool wear and torque exceeds 11,000 minNm for the L product variant (12,000 M, 13,000 H), the process fails due to overstrain. This is true for 98 datapoints..
\n",
"Random failures (RNF): each process has a chance of 0,1 % to fail regardless of its process parameters. This is the case for only 5 datapoints, less than could be expected for 10,000 datapoints in our dataset..
\n",
"\n",
"If at least one of the above failure modes is true, the process fails and the 'machine failure' label is set to 1\n",
"\n",
"The 'machine failure' label that indicates, whether the machine has failed for a particular datapoint for any of the following failure modes are true."
]
},
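{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the failure-mode definitions above concrete, the following self-contained sketch expresses them as plain Python checks. The thresholds are taken from the dataset description; the function and parameter names are purely illustrative and are not part of the dataset or any library."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of the failure-mode rules described above.\n",
"# Thresholds come from the dataset description; the function and\n",
"# parameter names are our own and not part of the dataset.\n",
"import math\n",
"\n",
"def power_failure(torque_nm, rotational_speed_rpm):\n",
"    # PWF: power (W) = torque (Nm) * rotational speed converted from rpm to rad/s\n",
"    power_w = torque_nm * rotational_speed_rpm * 2 * math.pi / 60\n",
"    return power_w < 3500 or power_w > 9000\n",
"\n",
"def heat_dissipation_failure(air_temp_k, process_temp_k, rotational_speed_rpm):\n",
"    # HDF: air/process temperature difference below 8.6 K and speed below 1380 rpm\n",
"    return (process_temp_k - air_temp_k) < 8.6 and rotational_speed_rpm < 1380\n",
"\n",
"def overstrain_failure(tool_wear_min, torque_nm, product_type):\n",
"    # OSF: tool wear * torque threshold (minNm) depends on the product quality variant\n",
"    thresholds = {'L': 11000, 'M': 12000, 'H': 13000}\n",
"    return tool_wear_min * torque_nm > thresholds[product_type]\n",
"\n",
"# Example: a small temperature difference at low rotational speed triggers HDF\n",
"print(heat_dissipation_failure(air_temp_k=300.0, process_temp_k=308.0, rotational_speed_rpm=1350))"
]
},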
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import seaborn as sb\n",
"import matplotlib.pyplot as mplt\n",
"from sklearn.preprocessing import LabelEncoder\n",
"from sklearn.model_selection import train_test_split\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up Paths and Directories"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Preprocess the input dataset for Machine learning training\n",
"try:\n",
" os.makedirs('training_data')\n",
"except Exception as e:\n",
" print(\"'training_data' directory already exists\")\n",
" \n",
"try:\n",
" os.makedirs('test_data')\n",
"except Exception as e:\n",
" print(\"'test_data' directory already exists\")\n",
" \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download the dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! curl --insecure https://archive.ics.uci.edu/ml/machine-learning-databases/00601/ai4i2020.csv --output ai4i2020.csv"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cols = [\"UDI\", \"Product_ID\", \"type\", \"air_temperature\", \"process_temperature\", \"rotational_speed\", \"torque\", \"tool_wear\", \"machine_failure\", \"TWF\", \"HDF\", \"PWF\", \"OSF\", \"RNF\"]\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"origdf = pd.read_csv('ai4i2020.csv', sep=',', encoding = 'utf-8', skiprows=1, names = cols)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"origdf.head(10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the following: \n",
"\n",
"1. Our goal is to infer\\predict the label \"Machine failure\" based on features such as sensor readings (temperature, speed etc) and other contextual information (for example Type)\n",
"2. Machine failure is indicated by '1' and '0' indicates no failure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have a feature 'type' which depicts product type as L, M, or H for low (50% of all products), medium (30%) and high (20%) as product quality variants and a variant-specific serial number. \n",
"However since this is categorical we need to encode it for use in our model. We will use Sklearn's Label Encoder to achieve this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"type_encoder = LabelEncoder()\n",
"type_encoder.fit(origdf['type'])\n",
"type_values = type_encoder.transform(origdf['type'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Insert the encoded feature into the dataframe and drop the original 'type' feature. We also drop the UDI & Product_ID since they are just identifiers and do not provide much value from a feature perspective."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"origdf.insert(2, \"type_enc\", type_values, True)\n",
"origdf = origdf.drop(columns = ['UDI', 'type', 'Product_ID'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"origdf.head(10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Exploration"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Class Imbalance\n",
"\n",
"Lets start by taking a look at how each class for our label \"Machine Failure\" appears in the dataset. This is important to understand if there is any Class Imbalance in the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sb.countplot(origdf.machine_failure)\n",
"print(\"Percent of Failure samples = {:.1f}\".format(len(origdf[origdf['machine_failure']==1])/len(origdf)*100))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"CLASS IMBALANCE occurs when one class is much less prevalent than the others within the the label. Even though this is a synthetic dataset, within Predictive Maintaince use cases, this is a common occurence as machine failures are less common compared to non-failures.\n",
"\n",
"Usually this is a challenge for ML models as the model may not have enough samples from the less common class to fully understand the patterns and accurately generate predictions. \n",
"\n",
"Depending on a few considerations and the use case, there are a number of techniques available such as up-sampling, down-sampling, SMOTE etc to deal with Class Imbalance issue. The finer details and impementation of these are beyond the scope of this post, but in real world use cases Class Imbalance should be addressed for building useful models."
]
},
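{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal illustration of one such technique, the sketch below randomly up-samples the minority (failure) class with scikit-learn's `resample` utility. This is only a sketch, under the assumption that simple random up-sampling is acceptable for the use case; the resampled frame is not used in the rest of this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: random up-sampling of the minority (failure) class.\n",
"# Illustration only; the balanced frame is not used later in this notebook.\n",
"from sklearn.utils import resample\n",
"\n",
"majority = origdf[origdf['machine_failure'] == 0]\n",
"minority = origdf[origdf['machine_failure'] == 1]\n",
"\n",
"# Sample the minority class with replacement until it matches the majority class size\n",
"minority_upsampled = resample(minority, replace=True, n_samples=len(majority), random_state=1234)\n",
"\n",
"balanced_df = pd.concat([majority, minority_upsampled])\n",
"print(balanced_df['machine_failure'].value_counts())"
]
},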
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Feature Correlations\n",
"\n",
"Now let's explore correlations between our features and the label within the dataset. This helps in identifying features that are important for generating predictions and therefore should be retained in the training dataset.\n",
"\n",
"We start by plotting a heatmap."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"corr = origdf.corr()\n",
"\n",
"mplt.figure(figsize= (15, 15))\n",
"sb.heatmap(np.abs(corr))\n",
"mplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There does seem to be certain degree of correlation between the failure modes TWF, HDF, PWF, OSF and system readings such as temperature, torque, rotational speed etc. This is expected as the modes are derived based on the underlying readings.\n",
"\n",
"Next let's try to identify what features correlate strongly with the failure label."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"index_list = list(corr['machine_failure'].dropna().index)\n",
"val_list = np.argsort(np.abs(corr['machine_failure'].dropna().values))[::-1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We get a ordered list of most corelated fesatures to the failure label and remove the label itself."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"top_corrs = [index_list[x] for x in val_list[:15]]\n",
"reduceddat = origdf[top_corrs]\n",
"reduceddat.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The failure modes features TWF, HDF, PWF, OSF have stronger correlation to the failure label. Howerver for our use case, since the system readings will be streamed, we will assume features TWF, HDF, PWF, OSF are not available during inference. Additionally, we want the model to be able predict failures based on the foundational systematic features. Therefore we will be remove these from the dataset. We will also remove RNF for similiar reasons."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Dropping columns...\")\n",
"processdf = origdf.drop(columns = ['TWF', 'HDF', 'PWF', 'OSF', 'RNF'])\n",
"print(\"replace na with 0\")\n",
"processdf=processdf.replace('na',0)\n",
"print(\"Changing data types to numeric...\")\n",
"processdf = processdf.apply(pd.to_numeric) \n",
"print(\"Shape of the processed dataset ={}\".format(processdf.shape))\n",
" \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"processdf.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rearranging the columns so that the label is at position 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"finaldf = processdf[['machine_failure','air_temperature', 'process_temperature', 'rotational_speed', 'torque', 'tool_wear','type_enc']]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"finaldf.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Split & Export"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Splitting the dataset into train and test sets and exporting locally to training and test folders."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train, X_test = train_test_split(finaldf, test_size=0.2, random_state = 1234)\n",
"X_train.to_csv('training_data/train.csv', index = False, header = None)\n",
"X_test.to_csv('test_data/test.csv', index=False, header=None)\n",
"finaldf.to_csv('fulldataset.csv', index=False, header=None)\n",
"print(\"Shape of Training data = {}\".format(X_train.shape))\n",
"print(\"Shape of Test data = {}\".format(X_test.shape))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"instance_type": "ml.c5.large",
"kernelspec": {
"display_name": "conda_python3",
"language": "python",
"name": "conda_python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}