{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Sound anomaly detection\n", "*Context*\n", "\n", "## Introduction\n", "---\n", "Industrial companies have been collecting a massive amount of time series data about their operating processes, manufacturing production lines and industrial equipment. They sometime store years of data in historian systems or in their factory information system at large. Whereas they are looking to prevent equipment breakdown that would stop a production line, avoid catastrophic failures in a power generation facility or improving their end product quality by adjusting their process parameters, having the ability to process time series data is a challenge that modern cloud technologies are up to. However, everything is not about cloud itself: your factory edge capability must allow you to stream the appropriate data to the cloud (bandwidth, connectivity, protocol compatibility, putting data in context...).\n", "\n", "What if had a frugal way to qualify your equipment health with few data? This would definitely help leveraging robust and easier to maintain edge-to-cloud blueprints. In this post, we are going to focus on a tactical approach industrial companies can use to help them reduce the impact of machine breakdowns by reducing how unpredictable they are.\n", "\n", "Most times, machine failures are tackled by either reactive action (stop the line and repair...) or costly preventive maintenance where you have to build the proper replacement parts inventory and schedule regular maintenance activities. Skilled machine operators are the most valuable assets in such settings: years of experience allow them to develop a fine knowledge of how the machinery should operate, they become expert listeners and can to detect unusual behavior and sounds in rotating and moving machines. However, production lines are becoming more and more automated, and augmenting these machine operators with AI-generated insights is a way to maintain and develop the fine expertise needed to prevent reactive-only postures when dealing with machine breakdowns.\n", "\n", "In this post we are going to compare and contrast two different approaches to identify a malfunctioning machine, providing we have sound recordings from its operation: we will start by building a neural network based on an autoencoder architecture and we will then use an image-based approach where we will feed images of sound (namely spectrograms) to an image based automated ML classification feature." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solution overview\n", "---\n", "In this example, we are going to use sounds recorded in an industrial environment to perform anomaly detection on industrial equipment.\n", "\n", "To achieve this, we are going to explore and leverage the MIMII dataset for anomaly detection purpose: this is a sound dataset for **M**alfunctioning **I**ndustrial **M**achine **I**nvestigation and **I**nspection (MIMII). You can download it from **https://zenodo.org/record/3384388**: it contains sounds from several types of industrial machines (valves, pumps, fans and slide rails). In this example, we are going to focus on the **fans**. **[This paper](https://arxiv.org/abs/1909.09347)** describes the sound capture procedure.\n", "\n", "We walk you through the following steps using Jupyter notebooks provided with this blog post:\n", "\n", "1. 
"1. The first one will focus on *data exploration* to get familiar with sound data: sound is a particular kind of time series data and exploring it requires specific approaches.\n",
"2. We will then use Amazon SageMaker to *build an autoencoder* that will be used as a classifier able to discriminate between normal and abnormal sounds.\n",
"3. Last, we are going to take a more novel approach: we are going to *transform the sound files into spectrogram images* and feed them directly to an *image classifier*. We will use Amazon Rekognition Custom Labels to perform this classification task and leverage Amazon SageMaker for the data preprocessing and to drive the Custom Labels training and evaluation process.\n", "\n",
"Both approaches require an equal amount of effort to complete: although the models obtained in the end are not directly comparable, this will give you an idea of how much of a kick start you may get when using an applied AI service." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introducing the machine sound dataset\n", "---\n",
"You can follow this data exploration work with the first companion notebook from **[this repository](https://github.com/michaelhoarau/sound-anomaly-detection)**. Each recording contains 8 channels, one for each microphone that was used to record a given machine sound. In this experiment, we will only focus on the recordings from the first microphone. The first thing we can do is to plot the waveforms of a normal and an abnormal signal next to each other:\n", "\n",
"![Waveforms](pictures/waveforms.png)\n", "\n",
"Each signal is 10 seconds long and, apart from the larger amplitude of the abnormal signal and some patterns that are more irregular, it’s difficult to distinguish between these two signals. In the companion notebook, you will also be able to listen to some of the sounds: most of the time, the differences are small, especially if you put them in the context of a very noisy environment.\n", "\n",
"A first approach could be to leverage the **[Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform)**, which is a mathematical operator that decomposes a function of time (or a signal) into its underlying frequencies. The Fourier transform is a function of frequency and its amplitude represents how much of a given frequency is present in the original signal. However, a sound signal is highly non-stationary (i.e. its statistics change over time): the frequency decomposition over a given time period will be different from the one over another time period. As a consequence, it would be rather meaningless to compute a single Fourier transform over an entire signal (however short it is in our case). We will need to call the short-time Fourier transform (STFT) for help: the STFT is obtained by computing the Fourier transform for successive frames in a signal.\n", "\n",
"If we plot the amplitude of each frequency present in the first 64 ms of the first signal of both the normal and the abnormal datasets, we obtain the following plot:\n", "\n",
"![Short Fourier Transform](pictures/stft.png)\n", "\n",
"We now have a tool to discretize our time signals into the frequency domain, which brings us one step closer to being able to visualize them in this domain. For each signal, we will now:\n", "\n",
"1. Slice the signal into successive time frames\n",
"2. Compute an STFT for each time frame\n",
"3. Extract the amplitude of each frequency as a function of time\n",
"4. Most sounds we can hear as humans are concentrated in a very small range, **both** in frequency and in amplitude. The next step is then to take a log scale for both the frequency and the amplitude: for the amplitude, we obtain this by converting the color axis to decibels (which is the equivalent of applying a log scale to the sound amplitudes)\n",
"5. Plot the result as a spectrogram: a spectrogram has three dimensions: we keep time on the horizontal axis, put frequency on the vertical axis and map the amplitude to a color axis (in dB).\n", "\n",
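"The following sketch illustrates these steps with the librosa library (which we also use later in this post). The file path below is a placeholder, and the mel scale plays the role of the log frequency axis, consistent with the feature extraction function shown further down:\n", "\n",
"```python\n",
"import librosa\n",
"import librosa.display\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"# Load a 10-second recording (placeholder path):\n",
"signal, sr = librosa.load('sounds/fan/normal/00000000.wav', sr=None)\n",
"\n",
"# Steps 1 to 3: framing, STFT and amplitude extraction are handled\n",
"# by the (mel) spectrogram computation:\n",
"mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=1024, hop_length=512, n_mels=64)\n",
"\n",
"# Step 4: log scale for the amplitude (conversion to decibels):\n",
"db = librosa.power_to_db(mel, ref=np.max)\n",
"\n",
"# Step 5: plot the spectrogram (time, frequency and amplitude as a color axis):\n",
"librosa.display.specshow(db, sr=sr, hop_length=512, x_axis='time', y_axis='mel')\n",
"plt.colorbar(format='%+2.0f dB')\n",
"plt.show()\n",
"```\n", "\n",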
"The picture below shows the frequency representation of the signals plotted earlier:\n", "\n",
"![Spectrograms](pictures/spectrograms.png)\n", "\n",
"We can now see that these images have interesting features that we can easily uncover with the naked eye: this is exactly the kind of feature that a neural network can try to uncover and structure. We will now build two types of feature extractors based on this analysis and feed them to different types of architectures." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building a custom autoencoder architecture\n", "---\n",
"The **[autoencoder architecture](https://en.wikipedia.org/wiki/Autoencoder)** is a neural network with the same number of neurons in the input and the output layers. This kind of architecture learns to generate the “identity” transformation between inputs and outputs. The second notebook of our series goes through the following steps:\n", "\n",
"1. Build the dataset: to feed the spectrograms to an autoencoder, we will build a tabular dataset and upload it to Amazon S3.\n",
"2. Create a TensorFlow autoencoder model and train it in script mode by using the existing TensorFlow / Keras container.\n",
"3. Evaluate the model to obtain a confusion matrix highlighting the classification performance between normal and abnormal sounds.\n", "\n",
"### Build a dataset\n",
"We are using the **[librosa library](https://librosa.org/doc/latest/index.html)**, which is a Python package for audio analysis. A feature extraction function based on the spectrogram generation steps described earlier is central to the dataset generation process:\n", "\n",
"```python\n",
"def extract_signal_features(signal, sr, n_mels=64, frames=5, n_fft=1024, hop_length=512):\n",
"    # Compute a spectrogram (using Mel scale):\n",
"    mel_spectrogram = librosa.feature.melspectrogram(\n",
"        y=signal,\n",
"        sr=sr,\n",
"        n_fft=n_fft,\n",
"        hop_length=hop_length,\n",
"        n_mels=n_mels\n",
"    )\n",
"\n",
"    # Convert to decibel (log scale for amplitude):\n",
"    log_mel_spectrogram = librosa.power_to_db(mel_spectrogram, ref=np.max)\n",
"\n",
"    # Generate an array of vectors as features for the current signal:\n",
"    features_vector_size = log_mel_spectrogram.shape[1] - frames + 1\n",
"\n",
"    # Dimension of each feature vector (frames concatenated along the mel axis):\n",
"    dims = n_mels * frames\n",
"\n",
"    # Build N sliding windows (=frames) and concatenate\n",
"    # them to build a feature vector:\n",
"    features = np.zeros((features_vector_size, dims), np.float32)\n",
"    for t in range(frames):\n",
"        features[:, n_mels*t:n_mels*(t+1)] = log_mel_spectrogram[:, t:t+features_vector_size].T\n",
"\n",
"    return features\n",
"```\n", "\n",
"Note that we will train our autoencoder only on the normal signals: our model will learn how to reconstruct these signals (“learning the identity transformation”). The main idea is to leverage this for classification later: when we feed this trained model with abnormal sounds, the reconstruction error will be a lot higher than when trying to reconstruct normal sounds. Using an error threshold, we will then be able to discriminate between abnormal and normal sounds.\n", "\n",
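"To build the tabular training dataset, we apply this function to each normal training recording and stack the resulting feature matrices. The snippet below is only a sketch: the file paths are placeholders and the sound_tools helper module is the one used in the evaluation code further down:\n", "\n",
"```python\n",
"import numpy as np\n",
"import sound_tools  # helper module used by the companion notebooks\n",
"\n",
"# Placeholder: list of paths to the normal training recordings\n",
"normal_files = ['sounds/fan/normal/00000000.wav', 'sounds/fan/normal/00000001.wav']\n",
"\n",
"features = []\n",
"for filename in normal_files:\n",
"    signal, sr = sound_tools.load_sound_file(filename)\n",
"    features.append(sound_tools.extract_signal_features(\n",
"        signal, sr, n_mels=64, frames=5, n_fft=1024, hop_length=512\n",
"    ))\n",
"\n",
"# Each file yields a (number of windows, n_mels * frames) matrix;\n",
"# stack them into a single tabular dataset before uploading it to Amazon S3:\n",
"train_data = np.vstack(features)\n",
"print(train_data.shape)\n",
"```\n", "\n",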
"### Create the autoencoder\n",
"To build our autoencoder, we use Keras and assemble a simple autoencoder: a stack of fully connected layers with an 8-unit bottleneck in the middle:\n", "\n",
"```python\n",
"from tensorflow.keras import Input\n",
"from tensorflow.keras.models import Model\n",
"from tensorflow.keras.layers import Dense\n",
"\n",
"def autoencoder_model(input_dims):\n",
"    inputLayer = Input(shape=(input_dims,))\n",
"    h = Dense(64, activation=\"relu\")(inputLayer)\n",
"    h = Dense(64, activation=\"relu\")(h)\n",
"    h = Dense(8, activation=\"relu\")(h)\n",
"    h = Dense(64, activation=\"relu\")(h)\n",
"    h = Dense(64, activation=\"relu\")(h)\n",
"    h = Dense(input_dims, activation=None)(h)\n",
"\n",
"    return Model(inputs=inputLayer, outputs=h)\n",
"```\n", "\n",
"We put this in a training script (model.py, sketched at the end of this section) and use the SageMaker TensorFlow estimator to configure our training job and launch the training:\n", "\n",
"```python\n",
"tf_estimator = TensorFlow(\n",
"    base_job_name='sound-anomaly',\n",
"    entry_point='model.py',\n",
"    source_dir='./autoencoder/',\n",
"    role=role,\n",
"    instance_count=1,\n",
"    instance_type='ml.p3.2xlarge',\n",
"    framework_version='2.2',\n",
"    py_version='py37',\n",
"    hyperparameters={\n",
"        'epochs': 30,\n",
"        'batch-size': 512,\n",
"        'learning-rate': 1e-3,\n",
"        'n_mels': n_mels,\n",
"        'frame': frames\n",
"    },\n",
"    debugger_hook_config=False\n",
")\n",
"\n",
"tf_estimator.fit({'training': training_input_path})\n",
"```\n", "\n",
"Training over 30 epochs will take a few minutes on a p3.2xlarge instance: at this stage, this will cost you a few cents. If you plan to use a similar approach on the whole MIMII dataset or use hyperparameter tuning, you can further reduce this training cost by using Managed Spot Training (check out **[this sample](https://github.com/aws-samples/amazon-sagemaker-managed-spot-training)** on how you can leverage it and get a 70% discount in the process).\n", "\n",
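"For reference, here is a minimal sketch of what the script-mode entry point of model.py could look like. It is only an outline built on a few assumptions: the hyperparameter names mirror the estimator configuration above, the autoencoder_model() function shown earlier is defined in the same script, the training channel is read from the standard SM_CHANNEL_TRAINING environment variable, and the training data file name is a placeholder. The complete script lives in the companion repository:\n", "\n",
"```python\n",
"import argparse\n",
"import os\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"\n",
"if __name__ == '__main__':\n",
"    # Hyperparameters are passed by the SageMaker estimator as command line arguments:\n",
"    parser = argparse.ArgumentParser()\n",
"    parser.add_argument('--epochs', type=int, default=30)\n",
"    parser.add_argument('--batch-size', type=int, default=512)\n",
"    parser.add_argument('--learning-rate', type=float, default=1e-3)\n",
"    parser.add_argument('--n_mels', type=int, default=64)\n",
"    parser.add_argument('--frame', type=int, default=5)\n",
"    args, _ = parser.parse_known_args()\n",
"\n",
"    # Load the tabular dataset prepared earlier (placeholder file name):\n",
"    training_dir = os.environ['SM_CHANNEL_TRAINING']\n",
"    train_data = np.load(os.path.join(training_dir, 'train_data.npy'))\n",
"\n",
"    # Build and train the autoencoder on the normal signals only, using the\n",
"    # input as the reconstruction target (autoencoder_model() is defined above):\n",
"    model = autoencoder_model(args.n_mels * args.frame)\n",
"    model.compile(\n",
"        optimizer=tf.keras.optimizers.Adam(learning_rate=args.learning_rate),\n",
"        loss='mean_squared_error'\n",
"    )\n",
"    model.fit(train_data, train_data, epochs=args.epochs, batch_size=args.batch_size)\n",
"\n",
"    # Save the model where SageMaker expects to find it (TensorFlow Serving layout):\n",
"    model.save(os.path.join(os.environ['SM_MODEL_DIR'], '1'))\n",
"```\n", "\n",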
"### Evaluate the model\n",
"Let’s now deploy the autoencoder behind a SageMaker endpoint: this operation creates a SageMaker endpoint that will keep costing you money as long as you leave it running. Do not forget to shut it down at the end of this experiment!\n", "\n",
"```python\n",
"tf_endpoint_name = 'sound-anomaly-'+time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n",
"tf_predictor = tf_estimator.deploy(\n",
"    initial_instance_count=1,\n",
"    instance_type='ml.c5.large',\n",
"    endpoint_name=tf_endpoint_name\n",
")\n",
"print(f'Endpoint name: {tf_predictor.endpoint_name}')\n",
"```\n", "\n",
"Our test dataset has an equal share of normal and abnormal sounds. We will loop through this dataset and send each test file to this endpoint. As our model is an autoencoder, we will evaluate how good the model is at reconstructing the input. The higher the reconstruction error, the greater the chance that we have identified an anomaly:\n", "\n",
"```python\n",
"y_true = test_labels\n",
"reconstruction_errors = []\n",
"\n",
"for index, eval_filename in tqdm(enumerate(test_files), total=len(test_files)):\n",
"    # Load the signal:\n",
"    signal, sr = sound_tools.load_sound_file(eval_filename)\n",
"\n",
"    # Extract features from this signal:\n",
"    eval_features = sound_tools.extract_signal_features(\n",
"        signal,\n",
"        sr,\n",
"        n_mels=n_mels,\n",
"        frames=frames,\n",
"        n_fft=n_fft,\n",
"        hop_length=hop_length\n",
"    )\n",
"\n",
"    # Get predictions from our autoencoder:\n",
"    prediction = tf_predictor.predict(eval_features)['predictions']\n",
"\n",
"    # Estimate the reconstruction error:\n",
"    mse = np.mean(np.mean(np.square(eval_features - prediction), axis=1))\n",
"    reconstruction_errors.append(mse)\n",
"```\n", "\n",
"In the plot below, we can see that the distribution of the reconstruction error for normal and abnormal signals differs significantly. The overlap between these histograms means we will have to choose a threshold and accept a compromise between precision and recall:\n", "\n",
"![Reconstruction Error Histograms](pictures/reconstruction_error_histograms.png)\n", "\n",
"Let's explore the recall-precision tradeoff for a reconstruction error threshold varying between 5.0 and 10.0 (this encompasses most of the overlap we can see above). First, let's visualize how this threshold range separates our signals on a scatter plot of all the testing samples:\n", "\n",
"![threshold_range_exploration](pictures/threshold_range_exploration.png)\n", "\n",
"If we plot the number of samples flagged as false positives and false negatives, we can see that the best compromise is to set the reconstruction error threshold around 6.3 (assuming we are not looking to minimize either the false positive or the false negative occurrences specifically):\n", "\n",
"![reconstruction_error_threshold](pictures/reconstruction_error_threshold.png)\n", "\n",
"For this threshold (6.3), we obtain the confusion matrix below:\n", "\n",
"![confusion_matrix](pictures/confusion_matrix_autoencoder.png)\n", "\n",
"The metrics associated with this matrix are the following:\n", "\n",
"* Precision: 92.1%\n",
"* Recall: 92.1%\n",
"* Accuracy: 88.5%\n",
"* F1 Score: 92.1%\n", "\n",
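"For reference, these figures can be reproduced from the reconstruction errors computed above. The sketch below assumes that scikit-learn is available and that test_labels marks abnormal samples with 1 (check the companion notebook for the exact label encoding and for the plotting code):\n", "\n",
"```python\n",
"import numpy as np\n",
"from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score, f1_score\n",
"\n",
"# Classify each test sample by comparing its reconstruction error to the threshold:\n",
"threshold = 6.3\n",
"y_pred = (np.array(reconstruction_errors) > threshold).astype(int)  # 1 = abnormal\n",
"\n",
"print(confusion_matrix(y_true, y_pred))\n",
"print(f'Precision: {precision_score(y_true, y_pred):.1%}')\n",
"print(f'Recall: {recall_score(y_true, y_pred):.1%}')\n",
"print(f'Accuracy: {accuracy_score(y_true, y_pred):.1%}')\n",
"print(f'F1 score: {f1_score(y_true, y_pred):.1%}')\n",
"```\n", "\n",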
"### Cleanup\n",
"Let’s not forget to delete our endpoint with the **delete_endpoint()** API, to prevent costs from continuing to accrue.\n", "\n",
"### Autoencoder improvement and further exploration\n", "\n",
"The spectrogram approach requires defining the spectrogram dimensions (e.g. the number of Mel cells defined in the data exploration notebook), which is a heuristic. In contrast, deep learning networks with a CNN encoder can learn the best representation to perform the task at hand (anomaly detection). Further steps to investigate in order to improve on this first result could be:\n", "\n",
"* Experimenting with several more or less complex autoencoder architectures, training for a longer time, performing hyperparameter tuning with different optimizers, tuning the data preparation sequence (e.g. sound discretization parameters), etc.\n",
"* Leveraging high-resolution spectrograms and feeding them to a CNN encoder to uncover the most appropriate representation of the sound.\n",
"* Using end-to-end encoder-decoder model architectures that are known to give good results on waveform datasets.\n",
"* Using deep learning models with multi-context temporal and channel (8 microphones) attention weights.\n",
"* Experimenting with time-distributed 2D convolution layers to encode features across the 8 channels: these encoded features could then be fed as sequences across time steps to an LSTM or GRU layer. From there, multiplicative sequence attention weights could be learned on the output sequence from the RNN layer.\n",
"* Exploring the appropriate image representation for multivariate time series signals that are not waveforms: replacing spectrograms with Markov transition fields, recurrence plots or network graphs could help achieve the same goals for non-sound time-based signals." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using Amazon Rekognition Custom Labels\n", "---\n",
"### Build a dataset\n",
"Previously, we had to train our autoencoder on normal signals only. In this case, we will build a more traditional split of training and testing datasets. Based on the fan sounds database, this will yield:\n", "\n",
"* **4440 signals** for the training dataset, including:\n",
"    * 3260 normal signals\n",
"    * 1180 abnormal signals\n",
"\n",
"* **1110 signals** for the testing dataset, including:\n",
"    * 815 normal signals\n",
"    * 295 abnormal signals\n",
"\n",
"We will generate and store the spectrogram of each signal and upload them to either a train or a test location on Amazon S3, as sketched below.\n", "\n",
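"Here is one possible way to do this. The sketch below reuses the mel spectrogram parameters from the data exploration and assumes a hypothetical list of (sound file, label, split) tuples, as well as the BUCKET and PREFIX variables pointing to the S3 location used for this experiment:\n", "\n",
"```python\n",
"import boto3\n",
"import librosa\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"s3 = boto3.client('s3')\n",
"\n",
"# Hypothetical list of (sound file, label, split) tuples describing the dataset:\n",
"dataset = [('sounds/fan/normal/00000000.wav', 'normal', 'train')]\n",
"\n",
"for sound_file, label, split in dataset:\n",
"    # Compute the log-mel spectrogram of the recording:\n",
"    signal, sr = librosa.load(sound_file, sr=None)\n",
"    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=1024, hop_length=512, n_mels=64)\n",
"    db = librosa.power_to_db(mel, ref=np.max)\n",
"\n",
"    # Save it as a PNG image and upload it to the right S3 location:\n",
"    image_file = sound_file.split('/')[-1].replace('.wav', '.png')\n",
"    plt.imsave(image_file, db, origin='lower')\n",
"    s3.upload_file(image_file, BUCKET, f'{PREFIX}/{split}/{label}/{image_file}')\n",
"```\n", "\n",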
"### Create a Rekognition Custom Labels project\n", "\n",
"The first step is to create a Custom Labels project:\n", "\n",
"```python\n",
"import boto3\n",
"\n",
"# Initialization, get a Rekognition client:\n",
"PROJECT_NAME = 'sound-anomaly-detection'\n",
"reko = boto3.client(\"rekognition\")\n",
"\n",
"# Let's try to create a Rekognition project:\n",
"try:\n",
"    project_arn = reko.create_project(ProjectName=PROJECT_NAME)['ProjectArn']\n",
"\n",
"# If the project already exists, we get its ARN:\n",
"except reko.exceptions.ResourceInUseException:\n",
"    # List all the existing projects:\n",
"    print('Project already exists, collecting the ARN.')\n",
"    reko_project_list = reko.describe_projects()\n",
"\n",
"    # Loop through all the Rekognition projects:\n",
"    for project in reko_project_list['ProjectDescriptions']:\n",
"        # Get the project name (the string after the first delimiter in the ARN):\n",
"        project_name = project['ProjectArn'].split('/')[1]\n",
"\n",
"        # Once we find it, we store the ARN and break out of the loop:\n",
"        if project_name == PROJECT_NAME:\n",
"            project_arn = project['ProjectArn']\n",
"            break\n",
"\n",
"print(project_arn)\n",
"```\n", "\n",
"We need to tell Amazon Rekognition where to find the training data and the testing data, and where to output its results (the BUCKET and PREFIX variables below point to the S3 location where the spectrogram images and the training manifest were uploaded):\n", "\n",
"```python\n",
"TrainingData = {\n",
"    'Assets': [{\n",
"        'GroundTruthManifest': {\n",
"            'S3Object': {\n",
"                'Bucket': BUCKET,\n",
"                'Name': f'{PREFIX}/manifests/train.manifest'\n",
"            }\n",
"        }\n",
"    }]\n",
"}\n",
"\n",
"TestingData = {\n",
"    'AutoCreate': True\n",
"}\n",
"\n",
"OutputConfig = {\n",
"    'S3Bucket': BUCKET,\n",
"    'S3KeyPrefix': f'{PREFIX}/output'\n",
"}\n",
"```\n", "\n",
"Now we can create a project version: creating a project version will build and train a model within this Rekognition project based on the data previously configured. Project creation can fail if the bucket you selected cannot be accessed by Rekognition: make sure the right bucket policy is applied to your bucket (check the notebooks to see the recommended policy).\n", "\n",
"Let’s now create a project version: this will launch a new model training and you will then have to wait for the model to be trained (a sketch of a simple polling loop is shown after the code below). This should take around 1 hour (less than $1 from a cost perspective):\n", "\n",
"```python\n",
"version = 'experiment-1'\n",
"VERSION_NAME = f'{PROJECT_NAME}.{version}'\n",
"\n",
"# Let's try to create a new project version in the current project:\n",
"try:\n",
"    project_version_arn = reko.create_project_version(\n",
"        ProjectArn=project_arn,      # Project ARN\n",
"        VersionName=VERSION_NAME,    # Name of this version\n",
"        OutputConfig=OutputConfig,   # S3 location for the output artefacts\n",
"        TrainingData=TrainingData,   # S3 location of the manifest describing the training data\n",
"        TestingData=TestingData      # Let Rekognition automatically build the validation dataset\n",
"    )['ProjectVersionArn']\n",
"\n",
"# If a project version with this name already exists, we get its ARN:\n",
"except reko.exceptions.ResourceInUseException:\n",
"    # List all the project versions (=models) for this project:\n",
"    print('Project version already exists, collecting the ARN:', end=' ')\n",
"    reko_project_versions_list = reko.describe_project_versions(ProjectArn=project_arn)\n",
"\n",
"    # Loop through them:\n",
"    for project_version in reko_project_versions_list['ProjectVersionDescriptions']:\n",
"        # Get the project version name (the string after the third delimiter in the ARN):\n",
"        project_version_name = project_version['ProjectVersionArn'].split('/')[3]\n",
"\n",
"        # Once we find it, we store the ARN and break out of the loop:\n",
"        if project_version_name == VERSION_NAME:\n",
"            project_version_arn = project_version['ProjectVersionArn']\n",
"            break\n",
"\n",
"print(project_version_arn)\n",
"status = reko.describe_project_versions(\n",
"    ProjectArn=project_arn,\n",
"    VersionNames=[project_version_arn.split('/')[3]]\n",
")['ProjectVersionDescriptions'][0]['Status']\n",
"```\n", "\n",
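"While the model is being trained, the status returned above will be TRAINING_IN_PROGRESS. A minimal way to wait for completion is to poll this status periodically, as in the following sketch (the one-minute sleep interval is arbitrary):\n", "\n",
"```python\n",
"import time\n",
"\n",
"# Poll the project version until training finishes:\n",
"while True:\n",
"    description = reko.describe_project_versions(\n",
"        ProjectArn=project_arn,\n",
"        VersionNames=[VERSION_NAME]\n",
"    )['ProjectVersionDescriptions'][0]\n",
"\n",
"    status = description['Status']\n",
"    print('Current status:', status)\n",
"    if status in ['TRAINING_COMPLETED', 'TRAINING_FAILED']:\n",
"        break\n",
"    time.sleep(60)\n",
"```\n", "\n",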
"### Evaluate the model\n", "\n",
"First, we will deploy our model by using the ARN collected before: again, this will deploy an endpoint that will cost you around $4 per hour. Don’t forget to decommission it once you’re done!\n", "\n",
"```python\n",
"# Use the project version created above (one inference unit is enough for this test):\n",
"model_arn = project_version_arn\n",
"version_name = VERSION_NAME\n",
"min_inference_units = 1\n",
"\n",
"# Start the model:\n",
"print('Starting model: ' + model_arn)\n",
"response = reko.start_project_version(ProjectVersionArn=model_arn, MinInferenceUnits=min_inference_units)\n",
"\n",
"# Wait for the model to be in the running state:\n",
"project_version_running_waiter = reko.get_waiter('project_version_running')\n",
"project_version_running_waiter.wait(ProjectArn=project_arn, VersionNames=[version_name])\n",
"\n",
"# Get the running status:\n",
"describe_response = reko.describe_project_versions(ProjectArn=project_arn, VersionNames=[version_name])\n",
"for model in describe_response['ProjectVersionDescriptions']:\n",
"    print(\"Status: \" + model['Status'])\n",
"    print(\"Message: \" + model['StatusMessage'])\n",
"```\n", "\n",
"Once the model is running, you can start querying it for predictions: in the notebook, you will find a function *get_results()* that will query a given model with a list of pictures sitting in a given path. This will take a few minutes to run all the test samples and will cost less than $1 (for the 1,110 test samples):\n", "\n",
"```python\n",
"import boto3\n",
"import pandas as pd\n",
"import s3fs\n",
"\n",
"predictions_ok = rt.get_results(project_version_arn, BUCKET, s3_path=f'{BUCKET}/{PREFIX}/test/normal', label='normal', verbose=True)\n",
"predictions_ko = rt.get_results(project_version_arn, BUCKET, s3_path=f'{BUCKET}/{PREFIX}/test/abnormal', label='abnormal', verbose=True)\n",
"\n",
"def get_results(project_version_arn, bucket, s3_path, label=None, verbose=True):\n",
"    \"\"\"\n",
"    Sends a list of pictures located in an S3 path to\n",
"    the endpoint to get the associated predictions.\n",
"    \"\"\"\n",
"    fs = s3fs.S3FileSystem()\n",
"    data = {}\n",
"    predictions = pd.DataFrame(columns=['image', 'normal', 'abnormal', 'ground truth'])\n",
"\n",
"    for file in fs.ls(path=s3_path, detail=True, refresh=True):\n",
"        if file['Size'] > 0:\n",
"            image = '/'.join(file['Key'].split('/')[1:])\n",
"            if verbose:\n",
"                print('.', end='')\n",
"\n",
"            labels = show_custom_labels(project_version_arn, bucket, image, 0.0)\n",
"            for L in labels:\n",
"                data[L['Name']] = L['Confidence']\n",
"\n",
"            predictions = predictions.append(pd.Series({\n",
"                'image': file['Key'].split('/')[-1],\n",
"                'abnormal': data['abnormal'],\n",
"                'normal': data['normal'],\n",
"                'ground truth': label\n",
"            }), ignore_index=True)\n",
"\n",
"    return predictions\n",
"\n",
"def show_custom_labels(model, bucket, image, min_confidence):\n",
"    # Call DetectCustomLabels from the Rekognition API: this will give us the list\n",
"    # of labels detected for this picture and their associated confidence level:\n",
"    reko = boto3.client('rekognition')\n",
"    try:\n",
"        response = reko.detect_custom_labels(\n",
"            Image={'S3Object': {'Bucket': bucket, 'Name': image}},\n",
"            MinConfidence=min_confidence,\n",
"            ProjectVersionArn=model\n",
"        )\n",
"\n",
"    except Exception as e:\n",
"        print(f'Exception encountered when processing {image}')\n",
"        print(e)\n",
"        raise\n",
"\n",
"    # Return the list of custom labels for the image passed as an argument:\n",
"    return response['CustomLabels']\n",
"```\n", "\n",
"Let’s plot the confusion matrix associated with this test set:\n", "\n",
"![confusion_matrix_rekognition](pictures/confusion_matrix_rekognition.png)\n", "\n",
"The metrics associated with this matrix are the following:\n", "\n",
"* Precision: 100.0%\n",
"* Recall: 99.8%\n",
"* Accuracy: 99.8%\n",
"* F1 Score: 99.9%\n",
"\n",
"With very little effort (and no ML knowledge!), we get impressive results. With so few false positives and false negatives, we can leverage such a model in even the most challenging industrial contexts.\n", "\n",
"### Cleanup\n", "\n",
"We need to stop the running model as we will continue to incur costs while the endpoint is live:\n", "\n",
"```python\n",
"print('Stopping model: ' + model_arn)\n",
"\n",
"# Stop the model:\n",
"try:\n",
"    reko = boto3.client('rekognition')\n",
"    response = reko.stop_project_version(ProjectVersionArn=model_arn)\n",
"    status = response['Status']\n",
"    print('Status: ' + status)\n",
"\n",
"except Exception as e:\n",
"    print(e)\n",
"\n",
"print('Done.')\n",
"```" ] } ], "metadata": { "kernelspec": { "display_name": "conda_tensorflow2_p36", "language": "python", "name": "conda_tensorflow2_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 4 }