{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# What-if Analysis\n",
"\n",
"This notebook describes how to create forecasts and perform what-if analysis from different versions of related time series datasets. As a sample use-case, we use product demand forecasting with two different variations of future prices, and see how product demand changes if we increase the prices of products.\n",
"\n",
"Please note that We can use just one Predictor to generate two Forecasts. Following is the steps:\n",
"\n",
"1. [Import libraries and setup AWS resources](#import_and_setup)\n",
"2. [Prepare training dataset CSVs](#prepare_csvs)\n",
"3. [Create DatasetGroup and Datasets](#create_dsg_ds)\n",
"4. [Import the target time series data, and the first variation of the related time series](#import_1)\n",
"5. [Create Predictor](#create_predictor)\n",
"6. [Create Forecast from the first variation of the related time series](#forecast_1)\n",
"7. [Import 2nd variation of the related time series](#import_2)\n",
"8. [Create Forecast from the 2nd variation of the related time series](#forecast_2)\n",
"9. [Query forecasts, visualize and compare](#visualize_and_compare)\n",
"10. [Resource Cleanup](#cleanup)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import libraries and setup AWS resources "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"import os\n",
"import time\n",
"import datetime\n",
"\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import boto3\n",
"\n",
"# importing forecast notebook utility from notebooks/common directory\n",
"sys.path.insert( 0, os.path.abspath(\"../../common\") )\n",
"import util"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Configure the S3 bucket name and region name for this lesson.\n",
"\n",
"- If you don't have an S3 bucket, create it first on S3.\n",
"- Although we have set the region to us-west-2 as a default value below, you can choose any of the regions that the service is available in."
]
},
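{
"cell_type": "markdown",
"metadata": {},
"source": [
"The optional cell below lists the regions where an Amazon Forecast endpoint is available, using boto3's bundled endpoint data (no credentials or API calls needed). This is just a convenience sketch and is not required for the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import boto3\n",
"\n",
"# Optional: list regions with an Amazon Forecast endpoint, based on boto3's\n",
"# bundled endpoint data. No API call is made.\n",
"print( boto3.Session().get_available_regions( \"forecast\" ) )"
]
},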
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text_widget_bucket = util.create_text_widget( \"bucket_name\", \"input your S3 bucket name\" )\n",
"text_widget_region = util.create_text_widget( \"region\", \"input region name.\", default_value=\"us-west-2\" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bucket_name = text_widget_bucket.value\n",
"assert bucket_name, \"bucket_name not set.\"\n",
"\n",
"region = text_widget_region.value\n",
"assert region, \"region not set.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"session = boto3.Session(region_name=region)\n",
"s3 = session.client('s3')\n",
"forecast = session.client(service_name='forecast') \n",
"forecastquery = session.client(service_name='forecastquery')"
]
},
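{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the bucket you entered above doesn't exist yet, you can optionally create it with the sketch below. It assumes your credentials allow `s3:CreateBucket`; note that us-east-1 doesn't accept a `LocationConstraint`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import botocore\n",
"\n",
"# Optional sketch: create the S3 bucket if it doesn't exist yet.\n",
"try:\n",
"    s3.head_bucket( Bucket=bucket_name )\n",
"    print( \"Bucket %s already exists.\" % bucket_name )\n",
"except botocore.exceptions.ClientError:\n",
"    if region == \"us-east-1\":\n",
"        # us-east-1 does not accept a LocationConstraint\n",
"        s3.create_bucket( Bucket=bucket_name )\n",
"    else:\n",
"        s3.create_bucket(\n",
"            Bucket=bucket_name,\n",
"            CreateBucketConfiguration={ \"LocationConstraint\": region }\n",
"        )\n",
"    print( \"Created bucket %s in %s.\" % ( bucket_name, region ) )"
]
},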
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create the role to provide to Amazon Forecast.\n",
"role_name = \"ForecastNotebookRole-WhatIfAnalysis\"\n",
"role_arn = util.get_or_create_iam_role( role_name = role_name )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare training dataset CSVs \n",
"1. Load historical product demand data\n",
"2. Split into target time series (demand) and related time series (price)\n",
"3. Extend related time series to forecast horizon\n",
"4. Create another variation of related time series data with increased price in forecast horizon\n",
"5. Upload them onto S3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv( \"./data/product_demand.csv\" )\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df[ df[\"item_id\"]==\"item_001\" ].plot( x=\"timestamp\" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_tts = df[[\"item_id\", \"timestamp\", \"demand\" ]]\n",
"df_rts = df[[\"item_id\", \"timestamp\", \"price\" ]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the first variation of related time series, we simply extend the latest price of each product toward forecast horizon to see what if we don't change the price."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extend RTS to forecast horizon\n",
"training_data_period = ( df_tts[\"timestamp\"].min(), df_tts[\"timestamp\"].max() )\n",
"\n",
"df_rts_extend = df_rts[ df_rts[\"timestamp\"]==training_data_period[1] ].copy()\n",
"\n",
"date = pd.to_datetime(training_data_period[1])\n",
"for i in range(3):\n",
"\n",
" if date.month == 12:\n",
" date = datetime.date( date.year+1, 1, 1 )\n",
" else:\n",
" date = datetime.date( date.year, date.month+1, 1 )\n",
"\n",
" df_rts_extend[ \"timestamp\" ] = date.strftime(\"%Y-%m-%d\")\n",
" df_rts = df_rts.append(df_rts_extend)\n",
"\n",
"df_rts[ df_rts[\"item_id\"]==\"item_001\" ]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the 2nd variation of related time series, we apply 10% price increase for the firt 10 items (\"item_001\" ~ \"item_010\"). Using this related time series, we will to see what if we increase the price."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# create different version of RTS with increased future price\n",
"df_rts1 = df_rts\n",
"df_rts2 = df_rts.copy()\n",
"\n",
"df_rts2.loc[ ( df_rts2[\"timestamp\"]>training_data_period[1] ) & (df_rts2[\"item_id\"]<=\"item_010\"), \"price\" ] *= 1.1\n",
"\n",
"df_rts2[ df_rts2[\"item_id\"]==\"item_001\" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_tts.to_csv( \"./data/tts.csv\", index=False )\n",
"df_rts1.to_csv( \"./data/rts1.csv\", index=False )\n",
"df_rts2.to_csv( \"./data/rts2.csv\", index=False )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Upload to S3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"project = \"whatif_analysis\"\n",
"\n",
"key_tts = \"%s/tts.csv\" % project\n",
"key_rts1 = \"%s/rts1.csv\" % project\n",
"key_rts2 = \"%s/rts2.csv\" % project\n",
"\n",
"s3.upload_file( Filename=\"./data/tts.csv\", Bucket=bucket_name, Key=key_tts )\n",
"s3.upload_file( Filename=\"./data/rts1.csv\", Bucket=bucket_name, Key=key_rts1 )\n",
"s3.upload_file( Filename=\"./data/rts2.csv\", Bucket=bucket_name, Key=key_rts2 )\n",
"\n",
"s3_data_path_tts = \"s3://\" + bucket_name + \"/\" + key_tts\n",
"s3_data_path_rts1 = \"s3://\" + bucket_name + \"/\" + key_rts1\n",
"s3_data_path_rts2 = \"s3://\" + bucket_name + \"/\" + key_rts2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create DatasetGroup and Datasets \n",
" \n",
"Creating single set of DatasetGroup, Datasets. Please note that we don't have to create two RELATED_TIME_SERIES datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = forecast.create_dataset_group(\n",
" DatasetGroupName = project + \"_dsg\",\n",
" Domain=\"RETAIL\",\n",
" )\n",
"\n",
"dataset_group_arn = response['DatasetGroupArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"DATASET_FREQUENCY = \"M\"\n",
"TIMESTAMP_FORMAT = \"yyyy-MM-dd\"\n",
"\n",
"schema ={\n",
" \"Attributes\":[\n",
" {\n",
" \"AttributeName\":\"item_id\",\n",
" \"AttributeType\":\"string\"\n",
" },\n",
" {\n",
" \"AttributeName\":\"timestamp\",\n",
" \"AttributeType\":\"timestamp\"\n",
" },\n",
" {\n",
" \"AttributeName\":\"demand\",\n",
" \"AttributeType\":\"float\"\n",
" },\n",
" ]\n",
"}\n",
"\n",
"response = forecast.create_dataset(\n",
" Domain = \"RETAIL\",\n",
" DatasetType = 'TARGET_TIME_SERIES',\n",
" DatasetName = project + \"_tts\",\n",
" DataFrequency = DATASET_FREQUENCY, \n",
" Schema = schema\n",
")\n",
"\n",
"tts_dataset_arn = response['DatasetArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"schema ={\n",
" \"Attributes\":[\n",
" {\n",
" \"AttributeName\":\"item_id\",\n",
" \"AttributeType\":\"string\"\n",
" },\n",
" {\n",
" \"AttributeName\":\"timestamp\",\n",
" \"AttributeType\":\"timestamp\"\n",
" },\n",
" {\n",
" \"AttributeName\":\"price\",\n",
" \"AttributeType\":\"float\"\n",
" },\n",
" ]\n",
"}\n",
"\n",
"response = forecast.create_dataset(\n",
" Domain = \"RETAIL\",\n",
" DatasetType = 'RELATED_TIME_SERIES',\n",
" DatasetName = project + \"_rts\",\n",
" DataFrequency = DATASET_FREQUENCY, \n",
" Schema = schema\n",
")\n",
"\n",
"rts_dataset_arn = response['DatasetArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"forecast.update_dataset_group( \n",
" DatasetGroupArn = dataset_group_arn, \n",
" DatasetArns = [\n",
" tts_dataset_arn,\n",
" rts_dataset_arn,\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import the target time series data, and the first variation of the related time series \n",
" \n",
"The first variation of related time series contains simply extended prices in the forecast horizon."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = forecast.create_dataset_import_job(\n",
" DatasetImportJobName = project + \"_tts_import\",\n",
" DatasetArn = tts_dataset_arn,\n",
" DataSource = {\n",
" \"S3Config\" : {\n",
" \"Path\" : s3_data_path_tts,\n",
" \"RoleArn\" : role_arn\n",
" }\n",
" },\n",
" TimestampFormat = TIMESTAMP_FORMAT\n",
")\n",
"\n",
"tts_dataset_import_job_arn = response['DatasetImportJobArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = forecast.create_dataset_import_job(\n",
" DatasetImportJobName = project + \"_rts_import1\",\n",
" DatasetArn = rts_dataset_arn,\n",
" DataSource = {\n",
" \"S3Config\" : {\n",
" \"Path\" : s3_data_path_rts1,\n",
" \"RoleArn\" : role_arn\n",
" }\n",
" },\n",
" TimestampFormat = TIMESTAMP_FORMAT\n",
")\n",
"\n",
"rts_dataset_import_job1_arn = response['DatasetImportJobArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"status_indicator = util.StatusIndicator()\n",
"\n",
"while True:\n",
" status = forecast.describe_dataset_import_job( DatasetImportJobArn = tts_dataset_import_job_arn )['Status']\n",
" status_indicator.update(status)\n",
" if status in ('ACTIVE', 'CREATE_FAILED'): break\n",
" time.sleep(10)\n",
"\n",
"status_indicator.end()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"status_indicator = util.StatusIndicator()\n",
"\n",
"while True:\n",
" status = forecast.describe_dataset_import_job( DatasetImportJobArn = rts_dataset_import_job1_arn )['Status']\n",
" status_indicator.update(status)\n",
" if status in ('ACTIVE', 'CREATE_FAILED'): break\n",
" time.sleep(10)\n",
"\n",
"status_indicator.end()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Predictor \n",
"\n",
"Creating a Predictor based on the dataset contents we imported. We will re-use this Predictor to generate two variasions of forecasts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"forecast_horizon = 3\n",
"\n",
"response = forecast.create_predictor(\n",
" PredictorName = project + \"_predictor\",\n",
" AlgorithmArn = \"arn:aws:forecast:::algorithm/Deep_AR_Plus\",\n",
" ForecastHorizon = forecast_horizon,\n",
" PerformAutoML = False,\n",
" PerformHPO = False,\n",
" EvaluationParameters = {\n",
" \"NumberOfBacktestWindows\" : 1, \n",
" \"BackTestWindowOffset\" : forecast_horizon\n",
" }, \n",
" InputDataConfig = {\n",
" \"DatasetGroupArn\" : dataset_group_arn\n",
" },\n",
" FeaturizationConfig = {\n",
" \"ForecastFrequency\" : DATASET_FREQUENCY, \n",
" \"Featurizations\" : [\n",
" {\n",
" \"AttributeName\" : \"demand\", \n",
" \"FeaturizationPipeline\" : [\n",
" {\n",
" \"FeaturizationMethodName\": \"filling\", \n",
" \"FeaturizationMethodParameters\": {\n",
" \"frontfill\": \"none\", \n",
" \"middlefill\": \"zero\", \n",
" \"backfill\": \"zero\"\n",
" }\n",
" }\n",
" ]\n",
" }\n",
" ]\n",
" }\n",
")\n",
"\n",
"predictor_arn = response['PredictorArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"status_indicator = util.StatusIndicator()\n",
"\n",
"while True:\n",
" status = forecast.describe_predictor( PredictorArn = predictor_arn )['Status']\n",
" status_indicator.update(status)\n",
" if status in ('ACTIVE', 'CREATE_FAILED'): break\n",
" time.sleep(10)\n",
"\n",
"status_indicator.end()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"forecast.get_accuracy_metrics( PredictorArn = predictor_arn )"
]
},
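{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the backtest results easier to read, the sketch below pulls the summary RMSE and weighted quantile losses out of the get_accuracy_metrics response. It assumes the documented response shape (PredictorEvaluationResults -> TestWindows -> Metrics); adjust it if your response differs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: summarize backtest accuracy from the get_accuracy_metrics response,\n",
"# assuming the shape PredictorEvaluationResults -> TestWindows -> Metrics.\n",
"metrics_response = forecast.get_accuracy_metrics( PredictorArn = predictor_arn )\n",
"\n",
"for result in metrics_response[\"PredictorEvaluationResults\"]:\n",
"    for window in result[\"TestWindows\"]:\n",
"        if window[\"EvaluationType\"] != \"SUMMARY\":\n",
"            continue\n",
"        metrics = window[\"Metrics\"]\n",
"        print( \"RMSE:\", metrics[\"RMSE\"] )\n",
"        for wql in metrics[\"WeightedQuantileLosses\"]:\n",
"            print( \"wQL[%s]: %s\" % ( wql[\"Quantile\"], wql[\"LossValue\"] ) )"
]
},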
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Forecast from the first variation of the related time series \n",
"\n",
"Creating a Forecast based on the 1st variation of related time series."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = forecast.create_forecast(\n",
" ForecastName = project + \"_forecast1\",\n",
" PredictorArn = predictor_arn\n",
")\n",
"\n",
"forecast1_arn = response['ForecastArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"status_indicator = util.StatusIndicator()\n",
"\n",
"while True:\n",
" status = forecast.describe_forecast( ForecastArn = forecast1_arn )['Status']\n",
" status_indicator.update(status)\n",
" if status in ('ACTIVE', 'CREATE_FAILED'): break\n",
" time.sleep(10)\n",
"\n",
"status_indicator.end()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import 2nd variation of the related time series \n",
"\n",
"Now importing 2nd variation of related time series, with 10% increased price. Note that we are reusing the same Predictor we created, and just importing different version of related time series. We use same target time series data, so we don't have to import again here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = forecast.create_dataset_import_job(\n",
" DatasetImportJobName = project + \"_rts_import2\",\n",
" DatasetArn = rts_dataset_arn,\n",
" DataSource = {\n",
" \"S3Config\" : {\n",
" \"Path\" : s3_data_path_rts2,\n",
" \"RoleArn\" : role_arn\n",
" }\n",
" },\n",
" TimestampFormat = TIMESTAMP_FORMAT\n",
")\n",
"\n",
"rts_dataset_import_job2_arn = response['DatasetImportJobArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"status_indicator = util.StatusIndicator()\n",
"\n",
"while True:\n",
" status = forecast.describe_dataset_import_job( DatasetImportJobArn = rts_dataset_import_job2_arn )['Status']\n",
" status_indicator.update(status)\n",
" if status in ('ACTIVE', 'CREATE_FAILED'): break\n",
" time.sleep(10)\n",
"\n",
"status_indicator.end()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Forecast from the 2nd variation of the related time series \n",
"\n",
"Creating a Forecast based on the latest dataset contents (with the 2nd variation of related time series)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = forecast.create_forecast(\n",
" ForecastName = project + \"_forecast2\",\n",
" PredictorArn = predictor_arn\n",
")\n",
"\n",
"forecast2_arn = response['ForecastArn']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"status_indicator = util.StatusIndicator()\n",
"\n",
"while True:\n",
" status = forecast.describe_forecast( ForecastArn = forecast2_arn )['Status']\n",
" status_indicator.update(status)\n",
" if status in ('ACTIVE', 'CREATE_FAILED'): break\n",
" time.sleep(10)\n",
"\n",
"status_indicator.end()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query forecasts, visualize and compare \n",
"\n",
"So far we got two Forecasts for different future product prices. Let's get the forecasted product demands, visualize, and compare."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def plot_compare( item_id ):\n",
"\n",
" plt.figure(figsize=(12, 6))\n",
" plt.title(item_id)\n",
" \n",
" df_item_actual = df_tts[ df_tts[\"item_id\"]==item_id ]\n",
" plt.plot( pd.to_datetime(df_item_actual[\"timestamp\"]), df_item_actual[\"demand\"], label=\"actual\", color=(1,0,0) )\n",
"\n",
" def plot_forecast( single_item_forecast, label, color, hatch ):\n",
"\n",
" x = []\n",
" y_p10 = []\n",
" y_p50 = []\n",
" y_p90 = []\n",
"\n",
" # visually connect last actual value with forecasts\n",
" df_connect = df_item_actual[ df_item_actual[\"timestamp\"]==training_data_period[1] ].reset_index(drop=True)\n",
" x.append( datetime.datetime.strptime( df_connect.at[ 0, \"timestamp\" ], \"%Y-%m-%d\" ) )\n",
" y_p10.append( df_connect.at[0,\"demand\"] )\n",
" y_p50.append( df_connect.at[0,\"demand\"] )\n",
" y_p90.append( df_connect.at[0,\"demand\"] )\n",
"\n",
" for p10, p50, p90 in zip( single_item_forecast[\"p10\"], single_item_forecast[\"p50\"], single_item_forecast[\"p90\"] ):\n",
"\n",
" date = datetime.datetime.strptime(p50[\"Timestamp\"],\"%Y-%m-%dT00:00:00\").date()\n",
" x.append(date)\n",
"\n",
" y_p10.append(p10[\"Value\"])\n",
" y_p50.append(p50[\"Value\"])\n",
" y_p90.append(p90[\"Value\"])\n",
"\n",
" plt.plot( x, y_p50, label=\"%s p50\" % label, color=color )\n",
" plt.fill_between( x, y_p10, y_p90, label=\"%s p10-p90\" % label, color=color, alpha=0.2, hatch=hatch )\n",
"\n",
" def plot_price( single_item_price, label, color ):\n",
" \n",
" x = []\n",
" y = []\n",
" \n",
" for timestamp, price in zip( single_item_price[\"timestamp\"], single_item_price[\"price\"] ):\n",
" date = datetime.datetime.strptime(timestamp,\"%Y-%m-%d\").date()\n",
" x.append(date)\n",
" y.append(price)\n",
"\n",
" plt.plot( x, y, label=label, color=color, linestyle=\":\" )\n",
" \n",
" response = forecastquery.query_forecast(\n",
" ForecastArn = forecast1_arn,\n",
" Filters = { \"item_id\" : item_id }\n",
" )\n",
" plot_forecast( response[\"Forecast\"][\"Predictions\"], \"forecast1\", (0,0,1), \"+\" )\n",
"\n",
" response = forecastquery.query_forecast(\n",
" ForecastArn = forecast2_arn,\n",
" Filters = { \"item_id\" : item_id }\n",
" )\n",
" plot_forecast( response[\"Forecast\"][\"Predictions\"], \"forecast2\", (1,0,1), \"x\" )\n",
"\n",
" plot_price( df_rts2[df_rts2[\"item_id\"]==item_id], \"price2\", (1,0,1) )\n",
" plot_price( df_rts1[df_rts1[\"item_id\"]==item_id], \"price1\", (0,0,1) )\n",
" \n",
" bottom, top = plt.ylim()\n",
" plt.ylim((-top*0.03, top*1.03))\n",
"\n",
" plt.legend( loc='lower left' )\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"for item_id in [ \"item_001\", \"item_002\", \"item_003\" ]:\n",
" plot_compare(item_id)\n"
]
},
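{
"cell_type": "markdown",
"metadata": {},
"source": [
"Beyond the plots, a simple numeric comparison can quantify the what-if. The sketch below sums the p50 demand forecast over the horizon for one example item under each price scenario; \"item_001\" is just an illustrative choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: compare total p50 demand over the forecast horizon for one item\n",
"# under the two price scenarios.\n",
"item_id = \"item_001\"\n",
"totals = {}\n",
"\n",
"for name, arn in [ ( \"forecast1\", forecast1_arn ), ( \"forecast2\", forecast2_arn ) ]:\n",
"    predictions = forecastquery.query_forecast(\n",
"        ForecastArn = arn,\n",
"        Filters = { \"item_id\" : item_id }\n",
"    )[\"Forecast\"][\"Predictions\"]\n",
"    totals[name] = sum( p[\"Value\"] for p in predictions[\"p50\"] )\n",
"\n",
"print( totals )\n",
"print( \"p50 demand change with the price increase: %+.1f%%\" % ( ( totals[\"forecast2\"] / totals[\"forecast1\"] - 1 ) * 100 ) )"
]
},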
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Resource cleanup "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete forecasts\n",
"util.wait_till_delete(lambda: forecast.delete_forecast(ForecastArn = forecast1_arn))\n",
"util.wait_till_delete(lambda: forecast.delete_forecast(ForecastArn = forecast2_arn))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete predictor\n",
"util.wait_till_delete(lambda: forecast.delete_predictor(PredictorArn = predictor_arn))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete dataset import jobs\n",
"util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn = tts_dataset_import_job_arn))\n",
"util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn = rts_dataset_import_job1_arn))\n",
"util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn = rts_dataset_import_job2_arn))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete datasets\n",
"util.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn = tts_dataset_arn))\n",
"util.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn = rts_dataset_arn))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete dataset group\n",
"util.wait_till_delete(lambda: forecast.delete_dataset_group(DatasetGroupArn = dataset_group_arn))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete IAM role\n",
"util.delete_iam_role( role_name )"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_python3",
"language": "python",
"name": "conda_python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}