{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Modeling and forecasting of twitter volume timeseries\n", "After understanding our data in the previous section, [descriptive statistics](../part2/descriptive_stats.ipynb), we now want to quickly run a time-series forecast using [gluonts](https://github.com/awslabs/gluon-ts).\n", "In this example we use the same dataset as before and create first a baseline (seasonal naive estimator). Afterwards we create and train a [DeepAR](https://arxiv.org/abs/1704.04110) model and compare it to the baseline. \n", "\n", "Let's first check that GluonTS is installed and that we have the correct MXNet version:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mxnet\n", "import gluonts\n", "print(mxnet.__version__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If your MXNet version is not 1.4.1 or GluonTS is not installed, then please uncomment and execute the following lines." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#! pip install mxnet==1.4.1\n", "#! pip install gluonts" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from mxnet import gpu, cpu\n", "from mxnet.context import num_gpus\n", "from gluonts.dataset.util import to_pandas\n", "from gluonts.model.deepar import DeepAREstimator\n", "from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator\n", "from gluonts.dataset.common import ListDataset\n", "from gluonts.trainer import Trainer\n", "from gluonts.evaluation.backtest import make_evaluation_predictions, backtest_metrics\n", "import pathlib\n", "import json\n", "import boto3\n", "import csv" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setting up hyperparameters\n", "Here we just set the number of epochs and rely on default values for the rest of the parameters in order to make the example more understandable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "EPOCHS = 20" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Loading the data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "url = \"https://raw.githubusercontent.com/numenta/NAB/master/data/realTweets/Twitter_volume_AMZN.csv\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv(filepath_or_buffer=url, header=0, index_col=0)\n", "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[:100].plot(figsize=(10,5))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plotting forecast helper function\n", "Often it is interesting to tune or evaluate the model by looking at error metrics on a hold-out set. For other machine learning tasks such as classification, one typically does this by randomly separating examples into train/test sets. For forecasting it is important to do this train/test split in time rather than by series.\n", "\n", "The below function plots a forecast for a given data and a given predictor. Let's dive deeper into components of this funciton.\n", "`from gluonts.model` includes a number of implemented models. Each model has an estimator. An estimator accepts a series of models and hyperparameters. 
"\n", "The function below plots a forecast for a given dataset and predictor. Let's dive deeper into the components of this function.\n", "`gluonts.model` includes a number of implemented models. Each model has an estimator, which is configured with hyperparameters. Parameters include a trainer that accepts optimization parameters. The estimator also accepts parameters such as the context (CPU or GPU), the number of layers, the context length, the time-series frequency, and the prediction length, amongst others. The context length defines how many past time steps are taken into account to make a prediction; its default value is the prediction length.\n", "\n", "The estimator has a `train` method that is used to fit the model to the data. The `train` method returns a predictor that can be used to forecast based on input data.\n", "\n", "The `plot_forecast` function accepts a `predictor` object and an iterable dataset and plots the data together with the forecast.\n", "It calls `make_evaluation_predictions`, which takes the predictor and the test data and creates the forecasts." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pylab import rcParams\n", "rcParams['figure.figsize'] = 15, 8\n", "\n", "def plot_forecast(predictor, test_data):\n", "    prediction_intervals = (50.0, 90.0)\n", "    legend = [\"observations\", \"median prediction\"] + [f\"{k}% prediction interval\" for k in prediction_intervals][::-1]\n", "    forecast_it, ts_it = make_evaluation_predictions(\n", "        dataset=test_data,\n", "        predictor=predictor,\n", "        num_samples=100,\n", "    )\n", "    fig, ax = plt.subplots(1, 1, figsize=(10, 7))\n", "    list(ts_it)[0][-336:].plot(ax=ax)\n", "    list(forecast_it)[0].plot(prediction_intervals, color='g')\n", "    plt.grid(which=\"both\")\n", "    plt.legend(legend, loc=\"upper left\")\n", "    plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dataset\n", "`gluonts.dataset.common` has a class `ListDataset`. GluonTS does not require this specific format for a custom dataset: the only requirements are that the dataset is iterable and that each entry has a \"target\" and a \"start\" field. A common case is that the target is a `numpy.array` of observed values and the start of the time series is a `pandas.Timestamp`.\n", "\n", "In this example we are using a `gluonts.dataset.common.ListDataset`. A `ListDataset` consists of a list of dictionaries with the following format:\n", "```\n", "{'start': Timestamp('2019-07-26 00:00:00', freq='D'),\n", " 'cat': [5, 4, 42, 17, 0, 0, 0],\n", " 'target': array([0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 2., 0.], dtype=float32)},\n", " {'start': Timestamp('2019-07-26 00:00:00', freq='D'),\n", " 'cat': [8, 7, 32, 13, 0, 0, 0],\n", " 'target': array([4., 3., 5., 2., 5., 2., 3., 7., 4., 3., 3., 2.], dtype=float32)}\n", "```\n", "Each dictionary contains one time series: we pass *start* as a `pandas.Timestamp` (here the first entry of our dataframe index) and *target* as an iterable of the observed values from our pandas dataframe. We can also indicate categorical features in the field `cat`.\n", "\n", "In the following cell we build a training dataset ending at midnight on April 5th, 2015 and a test dataset that will be used to forecast the three hours leading up to midnight on April 15th, 2015. GluonTS requires the full time series to be in the test dataset, so both the test and the train data start at February 26th, 2015. GluonTS will then cut off the last `n` elements of the test dataset and predict those, where `n` equals the prediction length."
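, "\n", "Once the next cell has created `training_data`, an entry can be sanity-checked with the `to_pandas` helper that was imported at the top of the notebook (a minimal sketch; `entry` is an illustrative name):\n", "```python\n", "entry = next(iter(training_data))    # the single {'start', 'target'} dictionary\n", "print(entry[\"start\"], len(entry[\"target\"]))\n", "to_pandas(entry).head()              # back to a pandas Series for quick inspection\n", "```"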
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_data = ListDataset([{\"start\": df.index[0], \n", " \"target\": df.value[: \"2015-04-05 00:00:00\"]}], \n", " freq=\"5min\")\n", "\n", "test_data = ListDataset([{\"start\": df.index[0], \n", " \"target\": df.value[:\"2015-04-15 00:00:00\"]}], \n", " freq=\"5min\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a baseline: seasonal naive predictor\n", "\n", "Before training complex deep learning models, it is best to come up with simple base models. Such a baseline could for instance be: that the future volumes will be the same like in the last 5 minutes. This naive estimation works remarkably well for many economic and financial time series. Because a naïve forecast is optimal when data follows a random walk, these are also called random walk forecasts\n", "\n", "GluonTS provides a [seasonal naive predictor](https://gluon-ts.mxnet.io/api/gluonts/gluonts.model.seasonal_naive.html). The seasonal naive method sets each forecast to be equal to the last observed value from the same season. So the model assumes that the data has a fixed seasonality (in this case, 300 time steps correspond to nearly a day), and produces forecasts by copying past observations based on it.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from gluonts.model.seasonal_naive import *\n", "from gluonts.evaluation import Evaluator\n", "\n", "naive_predictor = SeasonalNaivePredictor(freq='5min', \n", " prediction_length=36,\n", " season_length=300)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below we are using [gluonts.evaluation.Evaluator](https://gluon-ts.mxnet.io/api/gluonts/gluonts.evaluation.html) to create an aggregated evaluation metrics of the model we have trained. It produces some commonly used error metrics such as MSE, MASE, symmetric MAPE, RMSE, and (weighted) quantile losses. \n", "\n", "The Evaluator returns both a dictionary and a pandas DataFrame. You can use the python dictionary, first output, or the pandas DataFrame, the second output, depending on what you would like to do. The dictionary item includes more values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "forecast_it_baseline, ts_it_baseline = make_evaluation_predictions(test_data, naive_predictor, num_samples=100)\n", "forecasts_baseline = list(forecast_it_baseline)\n", "tss_baseline = list(ts_it_baseline)\n", "evaluator = Evaluator()\n", "agg_metrics_baseline, item_metrics = evaluator(iter(tss_baseline), iter(forecasts_baseline), num_series=len(test_data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we plot the forecasts and we can see that the naive estimator just copies the values from last day. It will also just give single point forecast." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_forecast(naive_predictor, test_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Print the baseline metrics:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "agg_metrics_baseline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Which metric to choose depends on your business use case. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "Which metric to choose depends on your business use case. For instance, is underestimation more problematic than overestimation?\n", "\n", "In general, the lower the Root-Mean-Squared Error (RMSE) the better - a value of 0 would indicate a perfect fit to the data. But RMSE depends on the scale of the data being used. Dividing the RMSE by the range of the data gives an average error as a proportion of the data's scale. This is called the Normalized Root-Mean-Squared Error (NRMSE). Note, however, that both RMSE and NRMSE are very sensitive to outliers.\n", "\n", "Percentage errors such as MAPE and sMAPE are unit-free and are frequently used to compare forecast performance between data sets.\n", "\n", "## DeepAR\n", "\n", "DeepAR (also available as a built-in Amazon SageMaker algorithm) is a methodology for producing accurate probabilistic forecasts, based on training an auto-regressive recurrent network model on a large number of related time series. DeepAR can produce more accurate forecasts than classical methods, while requiring minimal manual work.\n", "\n", "* The DeepAR algorithm first tailors a `Long Short-Term Memory` ([LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory))-based recurrent neural network architecture to the data. DeepAR then produces probabilistic forecasts in the form of `Monte Carlo` samples.\n", "* `Monte Carlo` samples are empirically generated pseudo-observations that can be used to compute consistent quantile estimates for all sub-ranges in the prediction horizon.\n", "* DeepAR also uses item-similarity to handle the `Cold Start` problem, i.e. making predictions for items with little or no history at all.\n", "\n", "In this notebook you will learn how to use GluonTS to train a DeepAR model on your own dataset; a very detailed understanding of the implementation details of DeepAR won't be necessary. But if you would like to learn more about DeepAR, then check out [this](../part4/deepar_details.ipynb) notebook or the [paper](https://arxiv.org/abs/1704.04110).\n", "\n", "To train a DeepAR model in GluonTS, we first need to create an estimator object. An estimator object represents the network and contains a trainer, which in turn holds the batch size, initializer, context, learning rate and other training-specific hyperparameters. The estimator object also includes the timestamp frequency, the prediction length expressing how many steps we want to predict, and structural parameters such as the number of layers. Crucially, the estimator also provides a `train` method, which fits a model to a given dataset and returns a predictor object that can be used to predict/forecast values.\n", "\n", "The frequency parameter must be one of the frequency strings accepted by pandas. For more information on how pandas handles frequencies, please refer to the [documentation of pandas date_range](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html).\n", "\n", "Finally, the `prediction_length` is set to 36. We aim to predict tweet volume for the next 3 hours and, as the data has `freq=5min`, we use 36 steps, which is 36 x 5 min = 180 min, or three hours."
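, "\n", "To make the mapping from these concepts to code concrete, here is a sketch of a more fully specified estimator. The next cell keeps mostly the default values; the settings below (context length, layer sizes, learning rate) are purely illustrative and not tuned, and `sketch_estimator` is not used later:\n", "```python\n", "sketch_estimator = DeepAREstimator(\n", "    freq=\"5min\",\n", "    prediction_length=36,                  # forecast 3 hours ahead\n", "    context_length=72,                     # look back 6 hours of history\n", "    num_layers=2,                          # structural parameters of the RNN\n", "    num_cells=40,\n", "    trainer=Trainer(epochs=EPOCHS,\n", "                    learning_rate=1e-3,\n", "                    batch_size=32,\n", "                    ctx=gpu() if num_gpus() > 0 else cpu()))\n", "```"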
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from gluonts.distribution import DistributionOutput, StudentTOutput\n", "\n", "deepar_estimator = DeepAREstimator(freq=\"5min\", \n", " prediction_length=36,\n", " distr_output=StudentTOutput(),\n", " trainer=Trainer(epochs=EPOCHS))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "DeepAR has a lot of different hyperparameters and in [lab 4](../part4/twitter_volume_sagemaker.ipynb) we will tune some of them. In this notebook we will just use the default values.\n", "\n", "\n", "We simply call `train` method of the `deepar_estimator` we just created and pass our iterable training data to the train method. The output is a predictor object. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "deepar_predictor = deepar_estimator.train(training_data=training_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Test the model\n", "\n", "We use the `plot_forecast` function that was implemented earlier in this notebook and pass predictor object and test data. You will notice the green print in the forecast in different confidence intervals." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_forecast(predictor=deepar_predictor, test_data=test_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Save the model\n", "Both training and prediction networks can be saved using `estimator.serialize_prediction_net` and `estimator.serialize` respectively." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.makedirs('deepar', exist_ok=True)\n", "deepar_predictor.serialize_prediction_net(pathlib.Path('deepar'))\n", "deepar_predictor.serialize(pathlib.Path('deepar'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Evaluation\n", "Below we are using `gluonts.evaluation.Evaluator` to create an aggregated evaluation metrics of the model we have trained. The `Evaluator` accepts predictions and calculates multiple evaluation metrics such as \"MSE\" and \"Quantile Loss\". The `Evaluator` returns both a dictionary and a pandas DataFrame. You can use the python dictionary, first output, or the pandas DataFrame, the second output, depending on what you would like to do. The dictionary item includes more values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from gluonts.evaluation.backtest import make_evaluation_predictions\n", "from gluonts.evaluation import Evaluator\n", "\n", "forecast_it, ts_it = make_evaluation_predictions(dataset=test_data, \n", " predictor=deepar_predictor, \n", " num_samples=100)\n", "deepar_agg_metrics, item_metrics = Evaluator(quantiles=[0.1, 0.5, 0.9])(\n", " ts_it, \n", " forecast_it, \n", " num_series=len(training_data))\n", "deepar_agg_metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that DeepAR produces much better predictions than the naive estimator. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(deepar_agg_metrics[\"MSE\"],agg_metrics_baseline[\"MSE\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you still have time left you can proceed to the next section, where you will train a multi layer perceptron." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Additional: Comparison with Mulitlayer Perceptron\n", "We now use another estimator, `SimpleFeedForwardEstimator`, to make the same forecast. This model is using a simple MLP or a feed forward network to reach the same goal. At the end we shall compare the results of the models." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mlp_estimator = SimpleFeedForwardEstimator(freq=\"5min\", \n", " prediction_length=36, \n", " trainer=Trainer(epochs=EPOCHS))\n", "mlp_predictor = mlp_estimator.train(training_data=training_data)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_forecast(predictor=mlp_predictor, test_data=test_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "The code snippet below, is using the same mechanism we have used before for evaluation, except the function accepts data and a list of predictors as well as a textual name for the predictors to use as column name in the pandas DataFrame output. It then loops over predictors, performs evaluation, converts the evaluation dictionary into a pandas DataFrame, and appends the output of evaluation to a dataframe as as a new column." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from gluonts.evaluation.backtest import make_evaluation_predictions\n", "from gluonts.evaluation import Evaluator\n", "def evaluat_models_from_dict(data, predictors, predictor_names, num_samples=100):\n", " '''\n", " Comparing results of multiple models.\n", " Parameters:\n", " data: the dataset on which we are performing the evaluation.\n", " predictors: A list of predictor objects\n", " predictor_names: A list of textual names for the predictors that have an ordered one-to-one\n", " relationship with the predictors.\n", " num_samples (default=100): what sample size from the evaluation dataset.\n", " Output: pandas dataframe to an evaluation column per predictor.\n", " '''\n", " df = pd.DataFrame()\n", " for (predictor, predictor_name) in zip(predictors, predictor_names):\n", " forecast_it, ts_it = make_evaluation_predictions(data, \n", " predictor=predictor, \n", " num_samples=num_samples)\n", " deepar_agg_metrics, item_metrics = Evaluator(quantiles=[0.1, 0.5, 0.9])(\n", " ts_it, \n", " forecast_it, \n", " num_series=len(data))\n", " \n", " evaluation = pd.DataFrame.from_dict(deepar_agg_metrics, orient='index', columns=[predictor_name])\n", " if df.empty:\n", " df = evaluation.copy()\n", " else:\n", " df.insert(loc=len(df.columns), column=predictor_name, value=evaluation.values)\n", " return df\n", "evaluat_models_from_dict(data=test_data, \n", " predictors=[deepar_predictor, mlp_predictor, naive_predictor], \n", " predictor_names = ['deepar', 'mlp', 'naive predictor'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Additional: Accessing weights and model parameters\n", "You can get access to the network structure and parameters. `DeepARNetwork` is derived from `mxnet.gluon.block.HybridBlock`. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gluonts.model.deepar._network.DeepARTrainingNetwork.__bases__[0].__bases__[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now call `DeepARTrainingNetwork.collect_params()`, which returns a `mxnet.gluon.parameter.ParameterDict` object. 
"For more information on how to query `ParameterDict`, please refer to the [mxnet documentation](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.ParameterDict)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "deepar_predictor.prediction_net.collect_params()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 4 }