{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Stock Price Prediction Using Deutsche Börse Dataset on SageMaker DeepAR \n",
"\n",
"\n",
"## Overview\n",
"In this project we use Deutsche Börse Public Dataset consisting of trade data aggregated to one minute intervals from the Xetra trading systems\n",
"\n",
"We obtain the data from public S3 bucket, hosted by Deutsche Börse under AWS Open Data Registry, aggregate the data within a given period, and prepare training, test and validation sets.\n",
"We then upload the prepared dat set to our own S3 bucket. This data location is then used with SageMaker provided DeepAR algorithm, to predict future movement of stock prices.\n",
"\n",
"Following diagrams provides an high level overview of the project architecture:\n",
""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import io\n",
"import boto3\n",
"import sagemaker\n",
"import datetime\n",
"import pandas as pd\n",
"from pandas.plotting import autocorrelation_plot\n",
"from pandas.plotting import scatter_matrix\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline\n",
"\n",
"# Define IAM role and session\n",
"role = sagemaker.get_execution_role()\n",
"session = sagemaker.Session()\n",
"\n",
"s3_data_key = 'dbg-stockdata/source'\n",
"s3_bucket = session.default_bucket()\n",
"processed_filename = 'dbg_processed'\n",
"\n",
"source_bucket = \"s3://deutsche-boerse-xetra-pds\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Obtaining Data\n",
"\n",
"Since the DBG dataset is partitioned into separate folders by date, we generate a series of scripts to download the data files, in parallel.\n",
"\n",
"DBG dataset contains trading data beginning June-17, 2017. However we chose to download last 6 months of data, so that the processing time is manageable within the scope of this workshop session.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"start_date = datetime.datetime.strptime('2017-07-01','%Y-%m-%d')\n",
"\n",
"end_date = datetime.datetime.strptime('2018-10-31','%Y-%m-%d')\n",
"\n",
"\n",
"end_date_str = end_date.strftime('%Y-%m-%d')\n",
"start_date_str = start_date.strftime('%Y-%m-%d')\n",
"\n",
"print(\"Download date range: {} - {}\".format(start_date_str, end_date_str))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We store all files in a local folder, prior to pre-processing the data.
\n",
"After processing, we store the processed data set in another folder, which we'll then upload to an our SageMaker S3 bucket.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"download_folder = '../data/download'\n",
"processed_folder = '../data/processed'\n",
"! mkdir -p {download_folder}\n",
"! mkdir -p {processed_folder}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We then generate the download script in script folder and execute it to have the files downloaded.
\n",
"Running bash script makes the download more reliable than running loops from Pythin runtime."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"download_script = download_folder + '/download.sh'\n",
"dates = list(pd.date_range(start_date_str, end_date_str, freq='D').strftime('%Y-%m-%d'))\n",
"with open(download_script, 'w') as f:\n",
" f.write(\"#!/bin/bash\\n\")\n",
" f.write(\"\\nset -euo pipefail\\n\")\n",
" for date in dates:\n",
" success_file = os.path.join(download_folder, date, 'success')\n",
"\n",
" f.write(\"\"\"\n",
"if [ ! -f {success_file} ]; then\n",
" echo \"Getting DBG dataset for date {date}\" \n",
" mkdir -p {download_folder}/{date}\n",
" aws s3 sync {source_bucket}/{date} {download_folder}/{date} --no-sign-request\n",
" touch {success_file} \n",
"else\n",
" echo \"DBG dataset for date {date} already exists\"\n",
"fi\\n\"\"\".format(success_file=success_file, date=date, source_bucket=source_bucket, download_folder=download_folder))\n",
" \n",
"! chmod +x {download_script} "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After creating the script, we run the script from shell, using Jupyter magic command.\n",
"\n",
"Downloading data should take about half an hour to an hour to complete depending on the dat range chosen.\n",
"\n",
"In order to avoid this process, you can alternative use the last cell in this notebook to obtain the prepared data from an S3 bucket accompanying this workshop, and load directly to your own S3 bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"! {download_script}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Data Preparation\n",
"\n",
"To prepare the data, we collect day wise tading data from separate files, filter out any data falling out of trading window, and not pertaining to common stock trading.\n",
"\n",
"### 2.1. Load Data\n",
"As first step we load all csv files in to a single data frame.\n",
"\n",
"Data loading should take about 1-2 hours to complete depending on the instance type you have chosen for your notebook instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"def convert_datetime(df):\n",
" try: \n",
" df[\"CalcTime\"] = pd.to_datetime(\"1900-01-01 \" + df[\"Time\"])\n",
" df[\"CalcDateTime\"] = pd.to_datetime(df[\"Date\"] + \" \" + df[\"Time\"])\n",
" except:\n",
" pass\n",
" return df\n",
"\n",
"def load_csvs(data_folder):\n",
" df = None\n",
" total_loaded = 0\n",
" df_initialized = False\n",
" data_dir = data_folder + '/'\n",
" data_subdirs = list(map(lambda date: data_dir + date, dates))\n",
" for d in data_subdirs:\n",
" trading_files = sorted(list(filter(lambda x:os.path.getsize(x)>2000, [os.path.join(d, x) for x in os.listdir(d) if x.endswith('.csv')]))) \n",
" if len(trading_files) > 0:\n",
" print(\"Loading {} files from {}\".format(len(trading_files), d))\n",
" if df_initialized == False:\n",
" frame = [pd.read_csv(f, engine='python', error_bad_lines=False, warn_bad_lines=False) for f in trading_files]\n",
" df = convert_datetime(pd.concat(frame, ignore_index = True))\n",
" df_initialized = True\n",
" else:\n",
" dfa = convert_datetime(pd.concat([pd.read_csv(f, engine='python', error_bad_lines=False) for f in trading_files], ignore_index = True))\n",
" dft = [df, dfa]\n",
" df = pd.concat(dft, ignore_index = True)\n",
" total_loaded = total_loaded + len(trading_files)\n",
" print(df.shape)\n",
" print(\"Total of {} files loaded\".format(total_loaded))\n",
" return df\n",
"unprocessed_df = load_csvs(download_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"unprocessed_df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.2. Filter records\n",
"Next we filter the records pertaining to common stack with trading volumes greater than zero, and within the regular trading hours.\n",
"Within the scope of this exercise, we focus our attention to top 100 stocks, by trading volume.\n",
"\n",
"This filtering should take about 1-2 minute to complete depending on the instance type you have chosen for your notebook instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# Filter common stock between trading hours 08:00 and 20:00\n",
"# Exclude auctions (those are with TradeVolume == 0)\n",
"#Number of stocks to keep : Top 100, by trading volume\n",
"num_stocks=100\n",
"time_fmt = \"%H:%M\"\n",
"opening_hours_str = \"08:00\"\n",
"closing_hours_str = \"20:00\"\n",
"opening_hours = datetime.datetime.strptime(opening_hours_str, time_fmt)\n",
"closing_hours = datetime.datetime.strptime(closing_hours_str, time_fmt)\n",
"\n",
"common_stocks = unprocessed_df[(unprocessed_df.SecurityType == 'Common stock') & \\\n",
" (unprocessed_df.TradedVolume > 0) & \\\n",
" (unprocessed_df.CalcTime >= opening_hours) & \\\n",
" (unprocessed_df.CalcTime <= closing_hours)]\n",
"\n",
"# Sort the stocks in descending order by trading volume\n",
"sort_by_volume = common_stocks[['Mnemonic', 'TradedVolume']].groupby(['Mnemonic']).sum().sort_values(['TradedVolume'], ascending=[0]).head(num_stocks)\n",
"stock_symbols = list(sort_by_volume.index.values)\n",
"sorted_stocks = common_stocks[common_stocks.Mnemonic.isin(stock_symbols)]\n",
"sorted_stocks = sorted_stocks.set_index(['Mnemonic', 'CalcDateTime']).sort_index()\n",
"stock_symbols = list(sort_by_volume.index.values)\n",
"print(sorted_stocks.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check to see how many days' records we imported. If you import all records starting from July-2017, to October-2018, it should contain 326 days' woth of records."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(sorted(list(sorted_stocks['Date'].unique())))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.3. Select features by minute\n",
"\n",
"Next we build a clean data frame containing minute by minute transaction records with the following data points:\n",
"- Mnemonic (Stock Ticker Symbol)\n",
"- Minimum Price (During the interval)\n",
"- Maximum Prixe (During the interval)\n",
"- Start Price (At star of the interval)\n",
"- End Price (At end of the interval)\n",
"- Trading Volume (During the interval)\n",
"- Number of Trades (During the interval)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"#Build minute by minute index for trading hours\n",
"non_empty_days = sorted(list(sorted_stocks['Date'].unique()))\n",
"def build_index(non_empty_days, from_time, to_time):\n",
" date_ranges = []\n",
" for date in non_empty_days:\n",
" yyyy, mm, dd = date.split('-')\n",
" from_hour, from_min = from_time.split(':')\n",
" to_hour, to_min = to_time.split(':') \n",
" t1 = datetime.datetime(int(yyyy), int(mm), int(dd), int(from_hour),int(from_min),0)\n",
" t2 = datetime.datetime(int(yyyy), int(mm), int(dd), int(to_hour),int(to_min),0) \n",
" date_ranges.append(pd.DataFrame({\"OrganizedDateTime\": pd.date_range(t1, t2, freq='1Min').values}))\n",
" agg = pd.concat(date_ranges, axis=0) \n",
" agg.index = agg[\"OrganizedDateTime\"]\n",
" return agg\n",
"\n",
"# Prepared data would contain numeric features for all stocks,\n",
"# for all days in the interval, for which there were trades (that means excluding weekends and holidays)\n",
"# for all minutes from 08:00 until 20:00\n",
"# in minutes without trades the prices from the last available minute are carried forward\n",
"# trades are filled with zero for such minutes\n",
"\n",
"def basic_stock_features(input_df, mnemonic, time_index):\n",
" stock = input_df.loc[mnemonic].copy()\n",
" stock['HasTrade'] = 1.0 \n",
" stock = stock.reindex(time_index) \n",
" features = ['MinPrice', 'MaxPrice', 'EndPrice', 'StartPrice']\n",
" for f in features:\n",
" stock[f] = stock[f].fillna(method='ffill') \n",
" features = ['HasTrade', 'TradedVolume', 'NumberOfTrades']\n",
" for f in features:\n",
" stock[f] = stock[f].fillna(0.0) \n",
" stock['Mnemonic'] = mnemonic\n",
" selected_features = ['Mnemonic', 'MinPrice', 'MaxPrice', 'StartPrice', 'EndPrice', 'HasTrade', 'TradedVolume', 'NumberOfTrades']\n",
" return stock[selected_features]\n",
"\n",
"datetime_index = build_index(non_empty_days, opening_hours_str, closing_hours_str)[\"OrganizedDateTime\"].values\n",
"\n",
"stocks = []\n",
"for stock in stock_symbols:\n",
" stock = basic_stock_features(sorted_stocks, stock, datetime_index)\n",
" stocks.append(stock)\n",
"\n",
"prepared = pd.concat(stocks, axis=0)\n",
"prepared.insert(0, 'Id', range(0, 0 + len(prepared)))\n",
"prepared = prepared.reset_index().set_index('Id')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prepared.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.4. Save processed data\n",
"Finally we save the processed data and upload to our SageMaker S3 Bucket.
\n",
"Source data, as downloaded from DBG dataset includes minute by minute details of price movement and trading numbers and volumes for the stocks. However, in practice, we might be interested in time series at various aggregation level, in order to do more effective hourly, daily or weekly predictions.\n",
"\n",
"Therefore we resample the data at various interval levels. First we define a resampling function that is able to resample various metrices for the aggregation levels.
\n",
"For example, at a certain interval level, we are interested in:
\n",
"- Last record for the End Price\n",
"- First record for the Start Price\n",
"- Minimum of all records for Min Price\n",
"- Maximum of all records for Max Price\n",
"- Total of all records for Number of Trades and Traded Volume"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"intervals = ['M', 'W', 'D', 'H']\n",
"\n",
"def resample_data(df, interval = None, mnemonics=None, metrics=None):\n",
" if mnemonics is None:\n",
" mnemonics = list(df.Mnemonic.unique())\n",
" if metrics is None:\n",
" metrics = list(df.columns.values)\n",
" if 'Mnemonic' in metrics:\n",
" metrics.remove('Mnemonic')\n",
" if 'HasTrade' in metrics:\n",
" metrics.remove('HasTrade') \n",
" \n",
" columns = list(df.columns.values)\n",
" if 'CalcDateTime' not in columns: \n",
" df[\"CalcDateTime\"] = pd.to_datetime(df[\"CalcDateTime\"])\n",
" \n",
" if interval is None or not isinstance(interval, str) or interval not in intervals:\n",
" raise ValueError('Interval not supported, must be one of : {}'.format(intervals))\n",
" \n",
" resampeled_frames = [] \n",
" for mnemonic in mnemonics:\n",
" print(\"Resampling {} records for interval - {}\".format(mnemonic, interval))\n",
" selected = df[df.Mnemonic == mnemonic].copy()\n",
" selected.index = selected['CalcDateTime']\n",
" selected = selected.sort_index()\n",
" resampled = pd.DataFrame() \n",
" for metric in metrics: \n",
" if metric == 'EndPrice':\n",
" resampled[metric] = selected[metric].resample(interval).last()\n",
" elif metric == 'StartPrice':\n",
" resampled[metric] = selected[metric].resample(interval).first() \n",
" elif metric == 'MinPrice':\n",
" resampled[metric] = selected[metric].resample(interval).min() \n",
" elif metric == 'MaxPrice':\n",
" resampled[metric] = selected[metric].resample(interval).max() \n",
" elif metric == 'TradedVolume' or metric == \"NumberOfTrades\" :\n",
" resampled[metric] = selected[metric].resample(interval).sum() \n",
" else:\n",
" pass\n",
" resampled_columns = ['Mnemonic']\n",
" for col in list(resampled.columns.values):\n",
" resampled_columns.append(col)\n",
" resampled['Mnemonic'] = mnemonic\n",
" resampled = resampled[resampled_columns] \n",
" resampled.dropna(inplace = True)\n",
" resampeled_frames.append(resampled)\n",
" resampeleds = pd.concat(resampeled_frames)\n",
" return resampeleds"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we do a resampling for the following intervals and upload the file to an appropriate S3 location directly, saving local disk space:\n",
"- Month\n",
"- Week\n",
"- Day\n",
"- Hour\n",
"\n",
"Resampling of the time series data at four different interval levels should take about 10 to 20 minutes to complete, depending on the instance type you have chosen for your notebook instance.\n",
"\n",
"Note that this causes gaps in time series. However SageMaker DeepAR algorithm is designed to work with time series with gaps."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"for interval in intervals:\n",
" rescaled_series = resample_data(prepared, interval)\n",
" csv_buffer = io.StringIO()\n",
" rescaled_series.to_csv(csv_buffer)\n",
" s3_resource = boto3.resource('s3')\n",
" s3_resource.Object(s3_bucket, '{}/{}/resampled_stockdata.csv'.format(s3_data_key, interval, interval)).put(Body=csv_buffer.getvalue())\n",
" "
]
},
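{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted above, the resampled series contain gaps (non-trading hours and days), and DeepAR can handle missing values in its input. The cell below is a minimal, illustrative sketch -- not part of the preparation pipeline -- of how a single series with gaps could be encoded in DeepAR's JSON Lines input format, where missing target values are written as JSON `null`. The helper name and the commented-out usage are hypothetical."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import numpy as np\n",
"\n",
"def to_deepar_json_line(values, start_timestamp):\n",
"    # DeepAR accepts missing target values as JSON null (or the string \"NaN\"),\n",
"    # so NaN entries are mapped to None before serialization.\n",
"    target = [None if pd.isnull(v) else float(v) for v in values]\n",
"    return json.dumps({'start': str(start_timestamp), 'target': target})\n",
"\n",
"# Illustrative example: a tiny hourly series containing one gap\n",
"example = pd.Series([101.2, np.nan, 101.9], index=pd.date_range('2018-01-02 08:00', periods=3, freq='H'))\n",
"print(to_deepar_json_line(example.values, example.index[0]))\n",
"\n",
"# Hypothetical usage with one of the resampled stock series, e.g.:\n",
"# bmw = rescaled_series[rescaled_series.Mnemonic == 'BMW']['EndPrice']\n",
"# to_deepar_json_line(bmw.values, bmw.index[0])"
]
},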
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Having the prepared data saved to an S3 bucket not only allows us to save some space in lcoal machine, but also helps when starting training job using SageMaker.
\n",
"Since reading from S3 is optimized for high speed, when you are using an EC2 instance, which is what a SageMaker Notebook instance runs on, it makes sense to just load the data directly from S3, as and when needed.
\n",
"Also, since SageMaker notebook instances come with only 5GB of Elastic Block Storage (to remain cost efficient), it is advisable to keep the device storage space clean, when not used.
\n",
"Therefore, we delete the raw data files locally, to conserve storage space on notebook instance.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Remove the download folder\n",
"!rm -rf {download_folder}\n",
"#Reset the data frames loaded in memory\n",
"rescaled_series = None \n",
"prepared = None\n",
"unprocessed_df = None"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.5. Alternatively - Load resampled data for workshop\n",
"\n",
"Since obtaining raw data from source S3 buckets maintained by Deutsche Börse, filtering, resampling and uploading to SageMaker S3 bucket can take upward of two hours, within the time limit of this workshop you could obtain cleaned data as hosted directly from S3 buckets specifically maintained for this workshop.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import io\n",
"import boto3\n",
"import sagemaker\n",
"import datetime\n",
"import pandas as pd\n",
"from pandas.tools.plotting import autocorrelation_plot\n",
"from pandas.tools.plotting import scatter_matrix\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"%matplotlib inline\n",
"\n",
"# Define IAM role and session\n",
"role = sagemaker.get_execution_role()\n",
"session = sagemaker.Session()\n",
"\n",
"s3_data_key = 'dbg-stockdata/source'\n",
"s3_bucket = session.default_bucket()\n",
"processed_filename = 'dbg_processed'\n",
"\n",
"\n",
"workshop_bucket = \"fsv-309\"\n",
"intervals = ['M', 'W', 'D', 'H']\n",
"datafilename = \"resampled_stockdata.csv \"\n",
"\n",
"for interval in intervals:\n",
" !aws s3 cp s3://{workshop_bucket}/{s3_data_key}/{interval}/{datafilename} s3://{s3_bucket}/{s3_data_key}/{interval}/{datafilename}\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.6. Validate Data Load from S3\n",
"\n",
"Before moving on, we want to validate data by downloading from the location on S3 that we just saved, and run some quick introspection to develop an intuition about the nature of metrices and their possible influence on stock price movement.
\n",
"For our preliminary analysis, we choose to load and investigate data with daily interval."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def load_resampled_from_s3(interval, bucket, s3_data_key):\n",
" s3 = boto3.client('s3')\n",
" obj = s3.get_object(Bucket=bucket, Key=\"{}/{}/resampled_stockdata.csv\".format(s3_data_key, interval))\n",
" loaded = pd.read_csv(io.BytesIO(obj['Body'].read()), parse_dates=True)\n",
" mnemonics = list(loaded.Mnemonic.unique())\n",
" \n",
" return loaded, mnemonics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"interval = \"H\"\n",
"stockdata, stocksymbols = load_resampled_from_s3(interval, s3_bucket, s3_data_key)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stockdata.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data exploration\n",
"At the end, we do a quick check on the data, and plot some series to develop an intuition of what the data looks like."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"matplotlib.rcParams['figure.figsize'] = (25, 17) # use bigger graphs\n",
"def timeseries_plot(mnemonics, metrics, data=None, interval = None, bucket = None, s3_key = None):\n",
" if data is None and interval is not None and bucket is not None and s3_key is not None:\n",
" data, symbols = load_resampled_from_s3(interval, bucket, s3_key) \n",
" columns = list(data.columns)\n",
" ax = None\n",
" for mnemonic in mnemonics:\n",
" selected = data[data.Mnemonic == mnemonic].copy()\n",
" selected.index = selected['CalcDateTime']\n",
" selected = selected.sort_index()\n",
" for column in columns:\n",
" if column != 'CalcDateTime' and column != 'Mnemonic' and column not in metrics:\n",
" selected = selected.drop(column, axis=1)\n",
" selected_columns = list(selected.columns)\n",
" for i, column in enumerate(selected_columns):\n",
" selected_columns[i] = \"{}-{}\".format(mnemonic, column)\n",
" selected.columns = selected_columns\n",
" ax = selected.plot( ax = ax )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's plot ending price for 3 automobile companies - BMW, Daimler, Porsche, Volkwagen"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"timeseries_plot(['BMW', 'CON', 'DAI', 'PAH3', 'VOW3'], ['EndPrice'], stockdata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Observe here since the four stocks we chose to display are all automobile companies, their movements show very similar pattern.\n",
"\n",
"We can also choose to display these stocks at lower resolution, such as at weekly interval, and still be able to observe the same patter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"timeseries_plot(['BMW', 'CON', 'DAI', 'PAH3', 'VOW3'], ['EndPrice'], interval = 'D', bucket = s3_bucket, s3_key = s3_data_key)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"timeseries_plot(['BMW','CON', 'DAI', 'PAH3', 'VOW3'], ['EndPrice'], interval = 'W', bucket = s3_bucket, s3_key = s3_data_key)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, in order to develop an intuition on how different time series for a single stock might influence each other, we plot some metrices alongside each other, that might be somehow related, for a sample stock, in this case we just used \"BMW\".\n",
"\n",
"\n",
"Since the data set gives minute by minute observation of values, it might clutter the visual plot when it comes to understanding the patterns in the data, use data resampled at higher interval rate, such as daily or weekly.
\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"timeseries_plot(['BMW'], ['EndPrice', 'MinPrice', 'MaxPrice'], interval = 'W', bucket = s3_bucket, s3_key = s3_data_key)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Observation 1\n",
"\n",
"We can observed from the graph above:\n",
"\n",
"- When there is a downward trend, the `EndPrice` is closer to the `MinPrice`, than to the `MaxPrice`\n",
"- When there is an upward trend, the `EndPrice` is closer to `MaxPrice` than to the `MinPrice`"
]
},
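{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell is a quick, minimal numerical check of Observation 1. It assumes the weekly resampled data is available in our S3 bucket and uses `BMW` as the sample stock: it measures where `EndPrice` sits within the weekly `[MinPrice, MaxPrice]` range and averages that position separately for up and down weeks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Quick check of Observation 1 on the weekly series for one stock (here 'BMW')\n",
"weekly, _ = load_resampled_from_s3('W', s3_bucket, s3_data_key)\n",
"bmw = weekly[weekly.Mnemonic == 'BMW'].copy()\n",
"bmw['CalcDateTime'] = pd.to_datetime(bmw['CalcDateTime'])\n",
"bmw = bmw.set_index('CalcDateTime').sort_index()\n",
"\n",
"# Relative position of EndPrice within the weekly price range: 0 = at MinPrice, 1 = at MaxPrice\n",
"bmw['EndPosition'] = (bmw.EndPrice - bmw.MinPrice) / (bmw.MaxPrice - bmw.MinPrice)\n",
"# Week-over-week direction of the closing price: +1 for up weeks, -1 for down weeks\n",
"bmw['Trend'] = np.sign(bmw.EndPrice.diff())\n",
"\n",
"# If Observation 1 holds, the mean position should be lower for down weeks than for up weeks\n",
"print(bmw.groupby('Trend')['EndPosition'].mean())"
]
},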
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"timeseries_plot(['BMW'], ['StartPrice', 'MinPrice', 'MaxPrice'], interval = 'W', bucket = s3_bucket, s3_key = s3_data_key)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Observation 2\n",
"Similar (and correlated) behavior is true for the `StartPrice`:\n",
"\n",
"- When there is a downward trend, the `StartPrice` is closer to the `MaxPrice`, than to the `MinPrice`\n",
"- When there is an upward trend, the `StartPrice` is closer to `MinPrice` than to the `MaxPrice`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"timeseries_plot(['BMW'], ['StartPrice', 'EndPrice'], interval = 'W', bucket = s3_bucket, s3_key = s3_data_key)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Observation 3\n",
"\n",
"Another observation that can be made is that:\n",
"- when the trend is upwards, `EndPrice` is above `StartPrice`\n",
"- when the trend is downwars, `EndPrice` is below `StartPrice`\n",
" \n",
"Therefore: if the lines of `EndPrice` and `StartPrice` cross, one could expect trend reversal.\n",
"(Of course one needs to account for the variance, and false positives)"
]
},
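{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration of Observation 3, the sketch below (again assuming the weekly resampled data in our S3 bucket, with `BMW` as the sample stock) flags the weeks where the sign of `EndPrice - StartPrice` flips, i.e. candidate trend-reversal points. It deliberately ignores variance, so it will include false positives."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Flag candidate trend reversals per Observation 3 on the weekly 'BMW' series\n",
"weekly, _ = load_resampled_from_s3('W', s3_bucket, s3_data_key)\n",
"bmw = weekly[weekly.Mnemonic == 'BMW'].copy()\n",
"bmw['CalcDateTime'] = pd.to_datetime(bmw['CalcDateTime'])\n",
"bmw = bmw.set_index('CalcDateTime').sort_index()\n",
"\n",
"# +1 for an up week (EndPrice above StartPrice), -1 for a down week\n",
"direction = np.sign(bmw.EndPrice - bmw.StartPrice)\n",
"\n",
"# Weeks whose direction differs from the previous week are candidate reversals\n",
"# (the very first week is trivially flagged because it has no predecessor)\n",
"reversals = bmw[direction != direction.shift(1)]\n",
"print(len(reversals), 'candidate reversal weeks')\n",
"reversals[['StartPrice', 'EndPrice']].head(10)"
]
},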
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With these observations, we develop intuition that behaviour of the various metrices present in the data might have a correlation with future movement of the time series.
\n",
"\n",
"\n",
"In the next step we apply Deep Neural Networks based Machine Learning techniques to try and find the correlation and hopefully be able to predict future movement, with certain confidence level."
]
}
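,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a preview of that next step, the cell below is a minimal sketch, not executed here, of how a DeepAR training job could be configured against the data prepared in this notebook. It assumes version 2 of the SageMaker Python SDK (which provides `sagemaker.image_uris`); the instance type, the hyperparameter values and the commented-out `train_channel`/`test_channel` inputs are placeholders rather than values prescribed by this workshop."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sagemaker import image_uris\n",
"from sagemaker.estimator import Estimator\n",
"\n",
"# Resolve the DeepAR container image for the current region\n",
"deepar_image = image_uris.retrieve('forecasting-deepar', session.boto_region_name)\n",
"\n",
"estimator = Estimator(\n",
"    image_uri=deepar_image,\n",
"    role=role,\n",
"    instance_count=1,\n",
"    instance_type='ml.c5.2xlarge',  # placeholder instance type\n",
"    output_path='s3://{}/dbg-stockdata/output'.format(s3_bucket),\n",
"    sagemaker_session=session,\n",
")\n",
"\n",
"estimator.set_hyperparameters(\n",
"    time_freq='H',           # matches the hourly resampled series\n",
"    context_length=72,       # placeholder: hours of history per training sample\n",
"    prediction_length=24,    # placeholder: hours to forecast\n",
"    epochs=20,\n",
")\n",
"\n",
"# estimator.fit({'train': train_channel, 'test': test_channel})"
]
}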
],
"metadata": {
"kernelspec": {
"display_name": "conda_python3",
"language": "python",
"name": "conda_python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}