{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Amazon Forecast チュートリアル -電気使用量の予測-" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Table Of Contents\n", "1. [はじめに](#はじめに)\n", "1. [セットアップ](#セットアップ)\n", "1. [データの準備](#データの準備)\n", "1. [データセットグループとデータセットの作成](#データセットグループとデータセットの作成)\n", "1. [Predictorの作成](#Predictorの作成)\n", "1. [Forecastの作成](#Forecastの作成)\n", "1. [予測値と実測値の可視化](#予測値と実測値の可視化)\n", "1. [RollingForecastの実行](#RollingForecastの実行)\n", "1. [リソースの削除](#リソースの削除)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## はじめに\n", "\n", "本ノートブックの実行には約1.5hほどかかります。\n", "\n", "本ノートブックはSageMakerからAmazon Forecastの操作を行う[amazon-forecast-samples](https://github.com/aws-samples/amazon-forecast-samples)のTutorialに加え、Rolling Forecast(既存のPredictorを使用して、Predictorを再学習することなく、さらに先のデータポイントの予測を行う)の実行を含んでいます。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "本ノートブックの実行には、AmazonSageMakerのIAMロールにs3にアクセスできる権限付与(ノートブックインスタンス作成時に行います)と「IAMFullAccess」、「AmazonForecastFullAccess」のポリシーをアタッチする必要があります。 \n", "また、本ノートブック上で作成したs3バケットを削除したい場合は「AmazonS3FullAccess」もアタッチする必要があります(s3バケットの削除はs3のコンソールからも行えますので「AmazonS3FullAccess」は必須ではありません)。\n", "\n", "boto3を使用したForecastの操作については[ForecastService](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/forecast.html)にドキュメントがあります。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## セットアップ\n", "\n", "必要なライブラリのインポートと分析データをアップロードするs3バケットを指定します。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sys\n", "import os\n", "import json\n", "import time\n", "import logging\n", "\n", "import dateutil.parser\n", "import pandas as pd\n", "import boto3\n", "from botocore.exceptions import ClientError\n", "\n", "# importing forecast notebook utility from notebooks/common directory\n", "sys.path.insert( 0, os.path.abspath(\"./common\") )\n", "import util" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "分析に使用するデータを格納するs3のバケットとregionを表示されるフォーム内に入力してください。 \n", "すでに存在するバケット名でも新規に作成するバケット名でも問題ありません。 \n", "バケットを新規に作成する際はs3のバケットの命名規則に従う必要があります。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "text_widget_bucket = util.create_text_widget( \"bucket_name\", \"input your S3 bucket name\" )\n", "text_widget_region = util.create_text_widget( \"region\", \"input region name.\", default_value=\"us-west-2\" )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "フォームへ入力が完了したら以下のセルを実行してください。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bucket_name = text_widget_bucket.value\n", "assert bucket_name, \"bucket_name not set.\"\n", "\n", "region = text_widget_region.value\n", "assert region, \"region not set.\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session = boto3.Session(region_name=region) \n", "forecast = session.client(service_name='forecast') \n", "forecastquery = session.client(service_name='forecastquery')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## データの準備" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Forecastの[Getting Started](https://docs.aws.amazon.com/forecast/latest/dg/getting-started.html)からダウンロードできる電気使用量の[データ](https://docs.aws.amazon.com/forecast/latest/dg/samples/electricityusagedata.zip)を使用します。\n", "このデータには370世帯の2014/01/01から2014/12/31(2015/01/01の0時まで)までの1年分の電力使用量が毎時単位で格納されています。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!wget https://docs.aws.amazon.com/forecast/latest/dg/samples/electricityusagedata.zip\n", "!unzip -o electricityusagedata.zip" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "データを読み込み、3つのファイルを作成します。ファイルの作成はダウンロードしたelectricityusagedata.csvを使用し、各対象の期間を抽出します。\n", "- electricityusagedata_train.csvは、学習用データとしてpredictorを作成するために利用します。この Predictor で予測を行うと、2014/12/30 01:00:00から Forecast Horizon で指定した期間 (本ノートブックでは2014/12/31 00:00:00までの24時間分) の予測を得ることができます。\n", "- 本ノートブックでは、electricityusagedata_train.csvに24時間分のデータを追加した、electricityusagedata_add.csvを利用することでRolling Forecastを実行し、2014/12/31 00:00:00より先の予測を行います。\n", "- electricityusagedata_test.csvは実測値としてRolling Forecastの予測値との比較に使用します。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv(\"./electricityusagedata.csv\", dtype = object, names=['timestamp','value','item'])\n", "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_data = df[(df['timestamp'] >= '2014-01-01') & (df['timestamp'] <= '2014-12-30 00:00:00')]\n", "\n", "add_data = df[(df['timestamp'] >= '2014-01-01') & (df['timestamp'] <= '2014-12-31 00:00:00')]\n", "\n", "test_data = df[(df['timestamp'] >= '2014-01-01') & (df['timestamp'] <= '2015-01-01 00:00:00')]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_data.tail()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "add_data.tail()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_data.tail()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_data.to_csv(\"electricityusagedata_train.csv\", header=False, index=False)\n", "add_data.to_csv(\"electricityusagedata_add.csv\", header=False, index=False)\n", "test_data.to_csv(\"electricityusagedata_test.csv\", header=False, index=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "s3のバケットを作成し、データをアップロードします。すでに作成しているバケットを使用する際も以下のコードを実行して問題ありません。 \n", "バケットの新規作成が行われる場合はcreate_bucket()のreturnがTrueとなります。 \n", "バケットがすでに作成済みであればcreate_bucket()のreturnがFalseになり、作成は行われません。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def create_bucket(bucket_name, region=None):\n", " \"\"\"Create an S3 bucket in a specified region\n", "\n", " If a region is not specified, the bucket is created in the S3 default\n", " region (us-east-1).\n", "\n", " :param bucket_name: Bucket to create\n", " :param region: String region to create bucket in, e.g., 'us-west-2'\n", " :return: True if bucket created, else False\n", " \"\"\"\n", "\n", " # Create bucket\n", " try:\n", " if region is None:\n", " s3_client = boto3.client('s3')\n", " s3_client.create_bucket(Bucket=bucket_name)\n", " else:\n", " s3_client = boto3.client('s3', region_name=region)\n", " location = {'LocationConstraint': region}\n", " s3_client.create_bucket(Bucket=bucket_name,\n", " CreateBucketConfiguration=location)\n", " except ClientError as e:\n", " logging.error(e)\n", " return False\n", " return True" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_bucket(bucket_name=bucket_name, region=region)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "バケットにデータをアップロードします。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 学習用データ\n", "key=\"electricityusagedata_train.csv\"\n", "boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file(\"electricityusagedata_train.csv\")\n", "\n", "# Rolling Forecast用のデータ\n", "add_key=\"electricityusagedata_add.csv\"\n", "boto3.Session().resource('s3').Bucket(bucket_name).Object(add_key).upload_file(\"electricityusagedata_add.csv\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## データセットグループとデータセットの作成" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DATASET_FREQUENCY = \"H\" \n", "TIMESTAMP_FORMAT = \"yyyy-MM-dd hh:mm:ss\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 作成するデータセットグループやデータセットに使用する接頭辞です。任意の名前を入力してください。\n", "project = 'forecast_tutorial'\n", "\n", "# データセットグループ名\n", "datasetGroupName= project +'_dsg'\n", "# データセット名\n", "datasetName= project+'_ds'\n", "\n", "# 学習用データのS3パス\n", "s3DataPath = \"s3://\"+bucket_name+\"/\"+key" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "データセットグループを作成(定義)します。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_dataset_group_response = forecast.create_dataset_group(\n", " DatasetGroupName=datasetGroupName,\n", " Domain=\"CUSTOM\"\n", ")\n", "\n", "datasetGroupArn = create_dataset_group_response['DatasetGroupArn']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "forecast.describe_dataset_group(DatasetGroupArn=datasetGroupArn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "データセットのスキーマを作成(定義)します。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specify the schema of your dataset here. Make sure the order of columns matches the raw data files.\n", "schema ={\n", " \"Attributes\":[\n", " {\n", " \"AttributeName\":\"timestamp\",\n", " \"AttributeType\":\"timestamp\"\n", " },\n", " {\n", " \"AttributeName\":\"target_value\",\n", " \"AttributeType\":\"float\"\n", " },\n", " {\n", " \"AttributeName\":\"item_id\",\n", " \"AttributeType\":\"string\"\n", " }\n", " ]\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "データセットを作成(定義)します。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response=forecast.create_dataset(\n", " Domain=\"CUSTOM\",\n", " DatasetType='TARGET_TIME_SERIES',\n", " DatasetName=datasetName,\n", " DataFrequency=DATASET_FREQUENCY, \n", " Schema=schema\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "datasetArn = response['DatasetArn']\n", "forecast.describe_dataset(DatasetArn=datasetArn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "データセットグループにデータセットを追加します。 \n", "本分析では「分析対象のデータセットのみ」を使用しますが、データセットグループには分析対象のデータセットのほか、「メタデータ」、「関連時系列データ」を格納でき、分析対象のデータセットと併せることでモデルの予測精度の改善が見込めます。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "forecast.update_dataset_group(DatasetGroupArn=datasetGroupArn, DatasetArns=[datasetArn])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Forecast用のIAMロールを作成する\n", "以下のコードの実行にはSageMakerのロールにIAMFullAccessの権限を追加する必要があります。\n", "\n", "多くのAWSサービスと同様に、ForecastはS3リソースと安全にやり取りするためにIAMロールを引き受ける必要があります。\n", "本ノートブックでは、get_or_create_iam_role()ユーティリティ関数を使用してIAMロールを作成します。 実装については、\n", "[\"common/util/fcst_utils.py\"](./common/util/fcst_utils.py)を参照してください。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create the role to provide to Amazon Forecast.\n", "role_name = \"ForecastNotebookRole-Tutorial\"\n", "role_arn = util.get_or_create_iam_role(role_name=role_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data Import Jobの作成\n", "\n", "一連のデータセット関連の定義が完了したので、s3からAmazon Forecastにデータをインポートします。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "datasetImportJobName = 'my_dsimportjob'\n", "ds_import_job_response=forecast.create_dataset_import_job(DatasetImportJobName=datasetImportJobName,\n", " DatasetArn=datasetArn,\n", " DataSource= {\n", " \"S3Config\" : {\n", " \"Path\":s3DataPath,\n", " \"RoleArn\": role_arn\n", " } \n", " },\n", " TimestampFormat=TIMESTAMP_FORMAT\n", " )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ds_import_job_arn=ds_import_job_response['DatasetImportJobArn']\n", "print(ds_import_job_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "以下のセルに実行には約5-10分かかります。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "status_indicator = util.StatusIndicator()\n", "\n", "while True:\n", " status = forecast.describe_dataset_import_job(DatasetImportJobArn=ds_import_job_arn)['Status']\n", " status_indicator.update(status)\n", " if status in ('ACTIVE', 'CREATE_FAILED'): break\n", " time.sleep(10)\n", "\n", "status_indicator.end()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "forecast.describe_dataset_import_job(DatasetImportJobArn=ds_import_job_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Predictorの作成\n", "\n", "「forecastHorizon(予測期間)」は、将来予測される時点の数です。 週次データの場合、12と入力すると12週を意味します。 今回のデータは時間ごとのデータです。次の日の予測を試みるので、24に設定します。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# predictorの名前\n", "predictorName= project+'_ets_algo'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 予測期間\n", "forecastHorizon = 24" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# アルゴリズム(他のアルゴリズムに変更することも可能です)\n", "algorithmArn = 'arn:aws:forecast:::algorithm/ETS'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_predictor_response=forecast.create_predictor(PredictorName=predictorName, \n", " AlgorithmArn=algorithmArn,\n", " ForecastHorizon=forecastHorizon,\n", " PerformAutoML= False,\n", " PerformHPO=False,\n", " EvaluationParameters= {\"NumberOfBacktestWindows\": 1, \n", " \"BackTestWindowOffset\": 24}, \n", " InputDataConfig= {\"DatasetGroupArn\": datasetGroupArn},\n", " FeaturizationConfig= {\"ForecastFrequency\": \"H\", \n", " \"Featurizations\": \n", " [\n", " {\"AttributeName\": \"target_value\", \n", " \"FeaturizationPipeline\": \n", " [\n", " {\"FeaturizationMethodName\": \"filling\", \n", " \"FeaturizationMethodParameters\": \n", " {\"frontfill\": \"none\", \n", " \"middlefill\": \"zero\", \n", " \"backfill\": \"zero\"}\n", " }\n", " ]\n", " }\n", " ]\n", " }\n", " )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor_arn=create_predictor_response['PredictorArn']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "以下のセルに実行には約25分かかります(アルゴリズムやAutoM、HPOの有無によって実行時間は異なります)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "status_indicator = util.StatusIndicator()\n", "\n", "while True:\n", " status = forecast.describe_predictor(PredictorArn=predictor_arn)['Status']\n", " status_indicator.update(status)\n", " if status in ('ACTIVE', 'CREATE_FAILED'): break\n", " time.sleep(10)\n", "\n", "status_indicator.end()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 評価指標の取得\n", "forecast.get_accuracy_metrics(PredictorArn=predictor_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Forecastの作成\n", "\n", "次に、作成したpredictorを使用して予測を作成します" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Forecastの名前\n", "forecastName= project+'_ets_algo_forecast'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_forecast_response=forecast.create_forecast(\n", " ForecastName=forecastName,\n", " PredictorArn=predictor_arn)\n", "\n", "forecast_arn = create_forecast_response['ForecastArn']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "以下のセルに実行には約25分かかります(アルゴリズムによって実行時間は異なります)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "status_indicator = util.StatusIndicator()\n", "\n", "while True:\n", " status = forecast.describe_forecast(ForecastArn=forecast_arn)['Status']\n", " status_indicator.update(status)\n", " if status in ('ACTIVE', 'CREATE_FAILED'): break\n", " time.sleep(10)\n", "\n", "status_indicator.end()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 予測値の取得\n", "\n", "ここではサンプルとして「client_21」の予測値を取得します。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(forecast_arn)\n", "print()\n", "forecastResponse = forecastquery.query_forecast(\n", " ForecastArn=forecast_arn,\n", " Filters={\"item_id\":\"client_21\"}\n", ")\n", "print(forecastResponse)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 予測値と実測値の可視化\n", "\n", "予測値と実測値を可視化してみましょう。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actual_df = pd.read_csv(\"./electricityusagedata_test.csv\", names=['timestamp','value','item'])\n", "actual_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actual_df = actual_df[(actual_df['timestamp'] >= '2014-12-30 00:00:00') & (actual_df['timestamp'] <= '2014-12-31 00:00:00')]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actual_df = actual_df[(actual_df['item'] == 'client_21')]\n", "actual_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 実測値のplot\n", "actual_df.plot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 10パーセンタイル点の予測値を取得します\n", "prediction_df_p10 = pd.DataFrame.from_dict(forecastResponse['Forecast']['Predictions']['p10'])\n", "prediction_df_p10.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot\n", "prediction_df_p10.plot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 同様に50, 90パーセンタイル点の値を取得します\n", "prediction_df_p50 = pd.DataFrame.from_dict(forecastResponse['Forecast']['Predictions']['p50'])\n", "prediction_df_p90 = pd.DataFrame.from_dict(forecastResponse['Forecast']['Predictions']['p90'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We start by creating a dataframe to house our content, here source will be which dataframe it came from\n", "results_df = pd.DataFrame(columns=['timestamp', 'value', 'source'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for index, row in actual_df.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['value'], 'source': 'actual'} , ignore_index=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# To show the new dataframe\n", "results_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now add the P10, P50, and P90 Values\n", "for index, row in prediction_df_p10.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['Timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['Value'], 'source': 'p10'} , ignore_index=True)\n", "for index, row in prediction_df_p50.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['Timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['Value'], 'source': 'p50'} , ignore_index=True)\n", "for index, row in prediction_df_p90.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['Timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['Value'], 'source': 'p90'} , ignore_index=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pivot_df = results_df.pivot(columns='source', values='value', index=\"timestamp\")\n", "pivot_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 予測値と実測値のplot\n", "pivot_df.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## RollingForecastの実行\n", " それでは、先ほど作成したpredictorを使用してさらに24時間先の予測を行ってみましょう。\n", " \n", " まずはelectricityusagedata_addに対するデータセットインポートジョブを作成します(追加データでRolling Forecastする場合、追加データには学習に使った既存データが全て含まれている必要があります)。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# electricityusagedata_add.csvのパス\n", "s3DataPath = \"s3://\"+bucket_name+\"/\"+add_key" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# データセットの定義\n", "new_datasetArn = response['DatasetArn']\n", "forecast.describe_dataset(DatasetArn=new_datasetArn)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "datasetImportJobName = 'add_dataset_import'\n", "ds_import_job_response=forecast.create_dataset_import_job(DatasetImportJobName=datasetImportJobName,\n", " DatasetArn=new_datasetArn,\n", " DataSource= {\n", " \"S3Config\" : {\n", " \"Path\":s3DataPath,\n", " \"RoleArn\": role_arn\n", " } \n", " },\n", " TimestampFormat=TIMESTAMP_FORMAT\n", " )" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "new_ds_import_job_arn=ds_import_job_response['DatasetImportJobArn']\n", "print(new_ds_import_job_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "以下のセルに実行には約5分かかります。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "status_indicator = util.StatusIndicator()\n", "\n", "while True:\n", " status = forecast.describe_dataset_import_job(DatasetImportJobArn=new_ds_import_job_arn)['Status']\n", " status_indicator.update(status)\n", " if status in ('ACTIVE', 'CREATE_FAILED'): break\n", " time.sleep(10)\n", "\n", "status_indicator.end()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "forecast.describe_dataset_import_job(DatasetImportJobArn=new_ds_import_job_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 新しいForecast(Rolling Forecast)の作成\n", "\n", "作成済みのpredictorを使用して、さらに24時間先の予測値を取得します" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 新しいForecastの名前\n", "forecastName= project+'_ets_algo_forecast_new'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "create_forecast_response=forecast.create_forecast(\n", " ForecastName=forecastName,\n", " PredictorArn=predictor_arn)\n", "\n", "# 新しいForecastARN\n", "new_forecast_arn = create_forecast_response['ForecastArn']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "以下のセルに実行には約25分かかります。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "status_indicator = util.StatusIndicator()\n", "\n", "while True:\n", " status = forecast.describe_forecast(ForecastArn=new_forecast_arn)['Status']\n", " status_indicator.update(status)\n", " if status in ('ACTIVE', 'CREATE_FAILED'): break\n", " time.sleep(10)\n", "\n", "status_indicator.end()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Rolling Forecastの予測値の取得\n", "\n", "先ほどと同様にサンプルとして「client_21」の予測値を取得します。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(new_forecast_arn)\n", "print()\n", "forecastResponse = forecastquery.query_forecast(\n", " ForecastArn=new_forecast_arn,\n", " Filters={\"item_id\":\"client_21\"}\n", ")\n", "print(forecastResponse)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Rolling Forecastの予測値と実測値の可視化\n", "\n", "Rolling Forecastの予測値と実測値を可視化してみましょう。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actual_df = pd.read_csv(\"./electricityusagedata_test.csv\", names=['timestamp','value','item'])\n", "actual_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actual_df = actual_df[(actual_df['timestamp'] >= '2014-12-31 00:00:00') & (actual_df['timestamp'] <= '2015-01-01 00:00:00')]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actual_df = actual_df[(actual_df['item'] == 'client_21')]\n", "actual_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 10パーセンタイル点の予測値を取得します\n", "prediction_df_p10 = pd.DataFrame.from_dict(forecastResponse['Forecast']['Predictions']['p10'])\n", "prediction_df_p10.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot\n", "prediction_df_p10.plot()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 同様に50, 90パーセンタイル点の値を取得します\n", "prediction_df_p50 = pd.DataFrame.from_dict(forecastResponse['Forecast']['Predictions']['p50'])\n", "prediction_df_p90 = pd.DataFrame.from_dict(forecastResponse['Forecast']['Predictions']['p90'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We start by creating a dataframe to house our content, here source will be which dataframe it came from\n", "results_df = pd.DataFrame(columns=['timestamp', 'value', 'source'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for index, row in actual_df.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['value'], 'source': 'actual'} , ignore_index=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# To show the new dataframe\n", "results_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now add the P10, P50, and P90 Values\n", "for index, row in prediction_df_p10.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['Timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['Value'], 'source': 'p10'} , ignore_index=True)\n", "for index, row in prediction_df_p50.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['Timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['Value'], 'source': 'p50'} , ignore_index=True)\n", "for index, row in prediction_df_p90.iterrows():\n", " clean_timestamp = dateutil.parser.parse(row['Timestamp'])\n", " results_df = results_df.append({'timestamp' : clean_timestamp , 'value' : row['Value'], 'source': 'p90'} , ignore_index=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pivot_df = results_df.pivot(columns='source', values='value', index=\"timestamp\")\n", "pivot_df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pivot_df.plot()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "青線が実測値で黄線、緑線、赤線がそれぞれ10、50、90パーセンタイルの予測値になります。深夜帯の変化のない部分、夕方の上昇、夜間の減少といった変化を予測できていることを確認できました。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## リソースの削除\n", "\n", "Forecastで作成したリソースとs3にアップロードしたオブジェクトを削除します。 \n", "これらの操作はForecastおよびs3のコンソール上からも実行可能です。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete the Foreacst:\n", "util.wait_till_delete(lambda: forecast.delete_forecast(ForecastArn=forecast_arn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete the Rolling Foreacst:\n", "util.wait_till_delete(lambda: forecast.delete_forecast(ForecastArn=new_forecast_arn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete the Predictor:\n", "util.wait_till_delete(lambda: forecast.delete_predictor(PredictorArn=predictor_arn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete ds_import_job_arn\n", "util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=ds_import_job_arn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete new_ds_import_job_arn\n", "util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=new_ds_import_job_arn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete the datasetArn:\n", "util.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=datasetArn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete the new_datasetArn:\n", "util.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=new_datasetArn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete the DatasetGroup:\n", "util.wait_till_delete(lambda: forecast.delete_dataset_group(DatasetGroupArn=datasetGroupArn))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete train file in S3\n", "boto3.Session().resource('s3').Bucket(bucket_name).Object(key).delete()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete add file in S3\n", "boto3.Session().resource('s3').Bucket(bucket_name).Object(add_key).delete()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Delete your S3 bucket(以下のコードの実行はs3のFull Accessが必要です。権限がない場合はs3のコンソールから行ってください。)\n", "boto3.Session().resource('s3').Bucket(bucket_name).delete()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### IAM RoleとPolicyの削除" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "util.delete_iam_role(role_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 4 }