{
"cells": [
{
"cell_type": "markdown",
"id": "0534387a",
"metadata": {},
"source": [
"**Measuring Demand Forecasting benefits series**\n",
"\n",
"# Measuring forecast benefits\n",
"\n",
"> *This notebook should work with the **`Data Science 3.0`** kernel in SageMaker Studio, and the default `ml.t3.medium` instance type (2 vCPU + 4 GiB RAM)*\n",
"\n",
"In this notebook, we'll analyze the performance of the baseline (moving average) and ML-powered (Amazon SageMaker Canvas and/or Amazon Forecast) forecasts against actual historical data, and go beyond raw accuracy metrics to estimate actual business value.\n",
"\n",
"⚠️ If you haven't already prepared your baseline and ML-powered forecast, go back to run the [1.1. Moving Average Baseline.ipynb](1.1.%20Moving%20Average%20Baseline.ipynb) and either [1.2. Run SageMaker Canvas.ipynb](1.2.%20Run%SageMaker%20Canvas.ipynb) or [1.3. Run Amazon Forecast.ipynb](1.3.%20Run%20Amazon%20Forecast.ipynb) first."
]
},
{
"cell_type": "markdown",
"id": "34b1bdd8",
"metadata": {
"tags": []
},
"source": [
"## Contents\n",
"\n",
"1. [Dependencies and setup](#setup)\n",
"1. [Load input data](#data)\n",
" 1. [Actual sales/demand](#actuals)\n",
" 1. [Forecast predictions](#predictions)\n",
" 1. [Moving average baseline forecast](#movavg)\n",
" 1. [(Optional) SageMaker Canvas predictions](#canvas)\n",
" 1. [(Optional) Amazon Forecast predictions](#amzforecast)\n",
"1. [Estimating downstream costs of forecasting errors](#costs)\n",
" 1. [Cost of excess inventory](#inventory)\n",
" 1. [Cost of stock-out events](#stockouts)\n",
" 1. [Total costs and benefits](#totalcosts)\n",
"1. [Online evaluation and A/B testing](#abtesting)\n",
"1. [Conclusions](#conclusions)"
]
},
{
"cell_type": "markdown",
"id": "01ef6adb",
"metadata": {
"tags": []
},
"source": [
"## Dependencies and setup\n",
"\n",
"As before we'll first load libraries needed by the rest of the notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3229377",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2\n",
"\n",
"# Python Built-Ins:\n",
"from dataclasses import dataclass\n",
"from datetime import datetime\n",
"from typing import Dict, List, Optional\n",
"\n",
"# External Dependencies:\n",
"import pandas as pd # Tabular/dataframe processing tools\n",
"import sagemaker # SageMaker SDK used just to look up default S3 bucket\n",
"\n",
"# Local Dependencies:\n",
"import util"
]
},
{
"cell_type": "markdown",
"id": "f4a26a1e-eab3-4cc8-959f-fcd4f9b8af23",
"metadata": {
"tags": []
},
"source": [
"## Load actuals and forecasts\n",
"\n",
"To retrospectively evaluate demand forecast(s) in a given period, we'll need:\n",
"\n",
"- **The actual observed demand/sales** from the period\n",
"- **The predictions** of the forecast models - assuming multiple models with the goal to compare between them\n",
"- **Additional reference data** - to relate demand estimate errors to actual business costs\n",
"\n",
"The following sub-sections will load and normalize the forecasts to be reviewed and the actual observed sales for the period, and discuss our assumptions on their structure.\n",
"\n",
"First, we'll define the evaluation period and what information will be collected for each forecast:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "152d23dc-ee56-4125-b387-008212bb2e58",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"PERIOD_START = datetime(year=2019, month=12, day=1)\n",
"PERIOD_END = datetime(year=2020, month=1, day=1) # (Exclusive)\n",
"\n",
"\n",
"@dataclass\n",
"class NormalizedForecast:\n",
" name: str # Some kind of human-ready identifier\n",
" df: pd.DataFrame # Filtered to the eval period, and indexed the same as the actuals data\n",
" quantiles: List[str] # (Multiple) alternative forecasts/quantile columns from this model\n",
"\n",
"\n",
"FORECASTS: Dict[str, NormalizedForecast] = {}"
]
},
{
"cell_type": "markdown",
"id": "96ddedae-7828-4d13-a1fc-6b08e0c87f53",
"metadata": {},
"source": [
"### Load actual sales/demand\n",
"\n",
"Now that the original forecast period is over, you should have actual sales data to compare your forecast to."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a4563009-dc02-4dbb-be12-ab70af484cbf",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"actuals_timestamp_col = \"date\"\n",
"actuals_amt_col = \"sales\"\n",
"\n",
"actuals_df = pd.read_parquet(\n",
" \"s3://measuring-forecast-benefits-assets/dataset/v1/sales.parquet\",\n",
").rename(columns={\"sku\": \"item_id\"})\n",
"\n",
"actuals_dimensions = [\n",
" col for col in actuals_df if col not in (actuals_timestamp_col, actuals_amt_col)\n",
"]\n",
"print(f\"Data has breakdown dimensions: {actuals_dimensions}\")\n",
"\n",
"actuals_df"
]
},
{
"cell_type": "markdown",
"id": "20cf0cef-e5e6-4aa8-8543-6077683b1765",
"metadata": {},
"source": [
"To anchor the comparison between forecasts (which may have different gaps from different models), we'll want a standard list of what dimension combinations (item-locations) to consider.\n",
"\n",
"We'll take that from the sales actuals here... But remember that sales can be sparse due to low-volume items: An item might have had a forecast for the period, but never sold any units. So we'll extract the list **before filtering** by the analysis period start/end:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12052891-1485-494a-9cab-3e223cd20776",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Calculate all unique location-item_id combinations from actual data:\n",
"loc_item_combos = actuals_df[[\"location\", \"item_id\"]].drop_duplicates().reset_index(drop=True)\n",
"\n",
"# Split the country and product out and index by these columns (helpful for joins later):\n",
"loc_item_combos[\"country\"] = loc_item_combos[\"location\"].str.split(\"_\").str[0]\n",
"loc_item_combos[\"product\"] = loc_item_combos[\"item_id\"].str.split(\"_\").str[0]\n",
"\n",
"loc_item_combos"
]
},
{
"cell_type": "markdown",
"id": "b3c03cc0-d75c-41bb-a752-254a1bce410a",
"metadata": {},
"source": [
"In this example the source (sales) data is daily but the baseline (rolling average) forecast is monthly only - so we'll need to conduct our comparative analysis at the monthly level. This is realistic as many businesses make stocking decisions at a similar frequency.\n",
"\n",
"So after filtering to our period of interest, we'll also need to aggregate the actual demand up to a monthly basis:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "430af426-a5c2-42ef-a365-a7788927f793",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Filter to just the analysis period:\n",
"actuals_df = util.analytics.filter_to_period(\n",
" actuals_df,\n",
" period_start=PERIOD_START,\n",
" period_end=PERIOD_END,\n",
" timestamp_col_name=actuals_timestamp_col,\n",
")\n",
"\n",
"# Ensure the source timestamp data is properly parsed to datetimes:\n",
"actuals_df[actuals_timestamp_col] = pd.to_datetime(actuals_df[actuals_timestamp_col])\n",
"# Create the month column:\n",
"actuals_df[\"month\"] = actuals_df[actuals_timestamp_col].dt.strftime(\"%Y-%m\")\n",
"\n",
"# Aggregate the data (sum):\n",
"actuals_df = actuals_df.groupby(\n",
" [\"month\"] + actuals_dimensions\n",
").agg(\n",
" {actuals_amt_col: \"sum\"}\n",
")\n",
"\n",
"# Going forward, the timestamp column for this DF is updated:\n",
"actuals_timestamp_col = \"month\"\n",
"\n",
"# Preview resulting dataframe:\n",
"actuals_df"
]
},
{
"cell_type": "markdown",
"id": "5ccfbfdc-b8b0-4f9f-b4f0-4d41602264ab",
"metadata": {},
"source": [
"Any forecasts we load for comparison must be normalized to use this same multi-level index of month/period and other dimensions (here item_id, location)."
]
},
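{
"cell_type": "markdown",
"id": "b7e4c2a0-1f3d-4e8b-9a6c-2d5e8f0a1b2c",
"metadata": {},
"source": [
"As a quick guard - a minimal sketch, assuming the indexing conventions above (the helper name `check_index_matches` is our own, not from any library) - you could verify each forecast's index before registering it for comparison:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8f5d3b1-2a4e-4f9c-8b7d-3e6f9a0b1c2d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def check_index_matches(forecast_df: pd.DataFrame, actuals: pd.DataFrame) -> None:\n",
"    \"\"\"Raise an error if a forecast isn't indexed by the same levels as the actuals data\"\"\"\n",
"    if list(forecast_df.index.names) != list(actuals.index.names):\n",
"        raise ValueError(\n",
"            f\"Forecast is indexed by {forecast_df.index.names}, but actuals data uses \"\n",
"            f\"{actuals.index.names} - normalize the forecast before registering it!\"\n",
"        )"
]
},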
{
"cell_type": "markdown",
"id": "88951051-680d-43d0-a5d4-27ca7131f694",
"metadata": {},
"source": [
"### Forecast predictions\n",
"\n",
"You'll likely have two or more candidate forecasts to compare, since it doesn't really make sense to discuss the \"value\" of one forecast by itself without some kind of business baseline to measure against.\n",
"\n",
"In this example, we'll use the moving average baseline forecast, and compare it against the either the SageMaker Canvas or Amazon Forecast model (whichever you created)."
]
},
{
"cell_type": "markdown",
"id": "47a643cb-0deb-4f13-9851-3115044f4ff0",
"metadata": {},
"source": [
"#### Moving average baseline forecast\n",
"\n",
"The moving average baseline forecast was calculated in our first notebook and saved locally:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7cbee29-6be4-4cc2-b260-0327747a3356",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"mov_avg_df = util.analytics.filter_to_period(\n",
" pd.read_csv(\"dataset/moving_avg.csv\"),\n",
" period_start=PERIOD_START,\n",
" period_end=PERIOD_END,\n",
" timestamp_col_name=\"month\",\n",
")\n",
"\n",
"mov_avg_df"
]
},
{
"cell_type": "markdown",
"id": "6b5eed20-2ee1-4c55-aa51-2b659f73c8d8",
"metadata": {},
"source": [
"This forecast is already stored in a monthly format so there's no aggregation to do in this case.\n",
"\n",
"However, we'd like to:\n",
"- Map column names to the standard set as used in actuals data, and\n",
"- Ensure the records are *indexed* by the unique fields they should be, ready for data joins\n",
"\n",
"Below we'll perform that renaming and re-indexing, and check that no records are merged in the process:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d9e1f2d-47b3-43ea-8a16-9ee6713a933d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"mov_avg_len_prev = len(mov_avg_df)\n",
"\n",
"# Re-name columns:\n",
"mov_avg_df.rename(\n",
" columns={\n",
" \"sku\": \"item_id\", # Needs to match actuals data\n",
" \"mov_avg\": \"movavg\", # We'd like to avoid underscores in quantile names later\n",
" },\n",
" inplace=True,\n",
")\n",
"\n",
"# Index by month and item/dimensions:\n",
"mov_avg_df = mov_avg_df.groupby([\"month\"] + actuals_dimensions).agg({\"movavg\": \"sum\"})\n",
"\n",
"# Check record count was not changed by the re-indexing / \"aggregation\":\n",
"assert len(mov_avg_df) == mov_avg_len_prev, (\n",
" \"Moving average forecast data changed length during re-indexing! Did you have duplicated \"\n",
" \"records or incorrect dimension settings? (From %s to %s records)\"\n",
" % (mov_avg_len_prev, len(mov_avg_df))\n",
")\n",
"\n",
"mov_avg_df"
]
},
{
"cell_type": "markdown",
"id": "017f6fc3-aa6f-4819-8267-b3b072b72da9",
"metadata": {},
"source": [
"This normalized data is now ready for comparison, so we'll add it to our list:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d78e76c-e4ef-4158-b1b8-3adeeea5db50",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"FORECASTS[\"Moving Average\"] = NormalizedForecast(\n",
" name=\"Moving Average\",\n",
" df=mov_avg_df,\n",
" quantiles=[\"movavg\"],\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8b1621b2-0085-45d5-9d17-2de7060c3ba7",
"metadata": {},
"source": [
"#### (Optional) SageMaker Canvas predictions\n",
"\n",
"IF you ran through the [SageMaker Canvas notebook](1.2.%20Run%20SageMaker%20Canvas.ipynb), you should have a **donwloaded CSV file** of predictions from the model.\n",
"\n",
"▶️ **Open** the `dataset` folder here in SageMaker Studio, using the folder menu in the left sidebar.\n",
"\n",
"▶️ **Drag and drop** your Canvas result file from your computer to the folder area, to upload it to your SageMaker workspace. (⏰ This might take a few minutes to complete - see the upload progress bar at the bottom of the screen for current status)\n",
"\n",
"▶️ **Check** the file location in the code cell below and edit it to match your uploaded file."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "84ccd6e2-3b6e-4d84-ac0f-75aaf868a688",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"canvas_result_df = util.analytics.filter_to_period(\n",
" pd.read_csv(\"dataset/canvas_result.csv\"), # TODO: EDIT YOUR FILENAME/PATH AS NEEDED\n",
" period_start=PERIOD_START,\n",
" period_end=PERIOD_END,\n",
" timestamp_col_name=\"date\",\n",
")\n",
"\n",
"# Check missing values:\n",
"missing_by_field = canvas_result_df.isna().sum()\n",
"print(f\"\\nTotal missing values:\\n{missing_by_field}\")\n",
"if missing_by_field.sum() > 0:\n",
" raise ValueError(\n",
" \"There are missing values in your SageMaker Canvas prediction result, which is most likely \"\n",
" \"caused by an error in download/upload. Please retry uploading your forecasts to Studio - \"\n",
" \"and possibly re-downloading them from Canvas if the error persists.\"\n",
" )\n",
"\n",
"# Preview the data:\n",
"canvas_result_df"
]
},
{
"cell_type": "markdown",
"id": "115b0293-b548-4fa0-aab6-52abfb746ddb",
"metadata": {},
"source": [
"There are a few transformations we need to apply to this dataset ready for comparison with the moving average baseline:\n",
"\n",
"1. Since the model model has daily granularity data in this case, we'll need to aggregate the results to monthly for comparable metrics.\n",
"1. Canvas has lower-cased our `sku` and `location` values which would interfere with joins later. We can use the original values from `loc_item_combos` collected earlier to fix this.\n",
"1. As business logic, we'll also enforce that any predictions that turn out negative for the month as a whole are set to zero.\n",
"1. We'll need to ensure the dimension column names match and the data is indexed by them, ready for joining to actuals.\n",
"\n",
"The cell below combines these steps:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f10cd6a8-9c98-44b3-83f9-082c27d0b946",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Fix locations back to original casing:\n",
"# Define the list of locations with lowercase equivalents\n",
"tmp_locs_lower = pd.DataFrame(\n",
" {\"loc\": loc_item_combos[\"location\"]}\n",
").drop_duplicates().reset_index(drop=True)\n",
"tmp_locs_lower[\"loc_lower\"] = tmp_locs_lower[\"loc\"].str.lower()\n",
"# Join on to the dataframe and remove the old/temporary columns:\n",
"canvas_result_df = canvas_result_df.merge(\n",
" tmp_locs_lower,\n",
" left_on=\"location\",\n",
" right_on=\"loc_lower\",\n",
" how=\"left\",\n",
").drop(columns=[\"location\", \"loc_lower\"]).rename(columns={\"loc\": \"location\"})\n",
"del tmp_locs_lower\n",
"\n",
"# Fix product SKUs back to original casing:\n",
"# Define the list of SKUs with lowercase equivalents\n",
"tmp_skus_lower = pd.DataFrame(\n",
" {\"item_id\": loc_item_combos[\"item_id\"]}\n",
").drop_duplicates().reset_index(drop=True)\n",
"tmp_skus_lower[\"item_id_lower\"] = tmp_skus_lower[\"item_id\"].str.lower()\n",
"# Join on to the dataframe and remove the old/temporary columns:\n",
"canvas_result_df = canvas_result_df.merge(\n",
" tmp_skus_lower,\n",
" left_on=\"sku\",\n",
" right_on=\"item_id_lower\",\n",
" how=\"left\",\n",
").drop(columns=[\"sku\", \"item_id_lower\"])\n",
"del tmp_skus_lower\n",
"\n",
"# Normalize and parse the timestamp column:\n",
"canvas_result_df.rename(columns={\"date\": \"timestamp\"}, inplace=True)\n",
"canvas_result_df[\"timestamp\"] = pd.to_datetime(canvas_result_df[\"timestamp\"])\n",
"\n",
"# Create the month column:\n",
"canvas_result_df[\"month\"] = canvas_result_df[\"timestamp\"].dt.strftime(\"%Y-%m\")\n",
"\n",
"# Aggregate the data (sum):\n",
"canvas_result_df = canvas_result_df.groupby(\n",
" [\"month\"] + actuals_dimensions\n",
").agg(\n",
" {\n",
" \"p10\": \"sum\",\n",
" \"p50\": \"sum\",\n",
" \"p90\": \"sum\",\n",
" }\n",
")\n",
"\n",
"# Force any negative (month-aggregated) predictions up to 0 sales:\n",
"canvas_result_df[canvas_result_df < 0] = 0\n",
"\n",
"# Preview resulting dataframe:\n",
"canvas_result_df"
]
},
{
"cell_type": "markdown",
"id": "eae3ebd1-7f43-4bd1-bde9-1fd0309c3625",
"metadata": {},
"source": [
"This forecast now matches our standard format, so we can add it to the evaluation set:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1823584-b911-4d1e-8286-d77628a1d047",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"FORECASTS[\"SageMaker Canvas\"] = NormalizedForecast(\n",
" name=\"SageMaker Canvas\",\n",
" df=canvas_result_df,\n",
" quantiles=[\"p10\", \"p50\", \"p90\"],\n",
")"
]
},
{
"cell_type": "markdown",
"id": "62303566-bc35-4b4e-bf96-5b18ab5fa0a2",
"metadata": {},
"source": [
"#### Amazon Forecast predictions\n",
"\n",
"IF you ran through the [Amazon Forecast notebook](1.3.%20Run%20Amazon%20Forecast.ipynb), you should now have an **exported predictor backtest** and *optionally* also a forward-looking forecast.\n",
"\n",
"Because of the way the date cut-offs in this example have been set up, it's the backtest export you'll need to use for comparison.\n",
"\n",
"▶️ **Find** your exported `backtest_s3_uri` from the Amazon Forecast notebook or the Amazon Forecast Console, and fill it in below.\n",
"\n",
"Backtest exports contain **two** datasets: The actual forecasted values, and the calculated accuracy metrics. For the `forecast_export_s3uri` below, we'll need to append `/forecasted-values` to your main backtest export URI, to select only the actual forecast folder:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "46b8fdf7-2fae-4775-9fa9-ce8382d08cce",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"forecast_export_s3uri = (\n",
" \"s3://TODO - YOUR backtest export S3 URI from previous notebook\"\n",
" + \"/forecasted-values\"\n",
")\n",
"\n",
"amz_forecast_df = util.analytics.filter_to_period(\n",
" pd.read_parquet(forecast_export_s3uri),\n",
" period_start=PERIOD_START,\n",
" period_end=PERIOD_END,\n",
" # timestamp_col_name=\"date\", # backtest uses 'timestamp' already, forecast would use 'date'\n",
")\n",
"amz_forecast_df"
]
},
{
"cell_type": "markdown",
"id": "905283f4-3fc6-48b0-acdf-189d72cbe290",
"metadata": {},
"source": [
"Since we trained a model with daily granularity data in this case, we'll need to aggregate the results to monthly for comparable metrics.\n",
"\n",
"As business logic, we'll also enforce that any predictions that turn out negative for the month as a whole are set to zero."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2c1ec66-8b4c-436f-80c8-09d638c6199b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Ensure the source timestamp data is properly parsed to datetimes:\n",
"amz_forecast_df[\"timestamp\"] = pd.to_datetime(amz_forecast_df[\"timestamp\"])\n",
"# Create the month column:\n",
"amz_forecast_df[\"month\"] = amz_forecast_df[\"timestamp\"].dt.strftime(\"%Y-%m\")\n",
"\n",
"# Aggregate the data (sum):\n",
"amz_forecast_df = amz_forecast_df.groupby(\n",
" [\"month\"] + actuals_dimensions\n",
").agg(\n",
" {\n",
" \"mean\": \"sum\",\n",
" \"p10\": \"sum\",\n",
" \"p50\": \"sum\",\n",
" \"p90\": \"sum\",\n",
" }\n",
")\n",
"\n",
"# Force any negative (month-aggregated) predictions up to 0 sales:\n",
"amz_forecast_df[amz_forecast_df < 0] = 0\n",
"\n",
"# Preview resulting dataframe:\n",
"amz_forecast_df"
]
},
{
"cell_type": "markdown",
"id": "ad571a27-ca4d-4699-9b4c-ed440eb9ce4c",
"metadata": {},
"source": [
"This forecast now matches our standard format, so we can add it to the evaluation set:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3dfee450-a596-45ce-9f91-2eea88407b5a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"FORECASTS[\"Amazon Forecast\"] = NormalizedForecast(\n",
" name=\"Amazon Forecast\",\n",
" df=amz_forecast_df,\n",
" quantiles=[\"mean\", \"p10\", \"p50\", \"p90\"],\n",
")"
]
},
{
"cell_type": "markdown",
"id": "700c563b-0ba5-46d2-8719-aaf5108909c8",
"metadata": {
"tags": []
},
"source": [
"## Estimating downstream costs of forecasting errors\n",
"\n",
"Now we have a retrospective period selected, and our actual sales data as well as multiple candidate forecasts for the period - loaded and normalized.\n",
"\n",
"From here we could of course calculate and compare basic accuracy-oriented metrics like RMSE, MAPE, or MASE to quantify which forecast was \"closest\" to actual recorded sales. Standard metrics like these are great for giving us a comparable view of the accuracy of different forecasts, but do little to help us answer the bigger question: **What's the value to our business** of considering a switch from forecast A to forecast B?\n",
"\n",
"We come to an important but challenging insight:\n",
"\n",
"> *The forecast itself has no value at all: Only the **business decisions it drives***\n",
"\n",
"As forecasting analysts, this presents a challenge: Often these decisions are made by **humans** on **separate teams** - for example store- or category-managers, or sales reps. How can we quantify what we don't directly control?\n",
"\n",
"Luckily, a second and equally important idea comes to our rescue:\n",
"\n",
"> *It's okay for a business case to **start rough**, so long as it's **unbiased***\n",
"\n",
"We're trying to estimate return on project investments here, not engineer components for the next space shuttle. Uncertainty and approximations are expected in business, and it should be acceptable to start simple and iteratively refine our model.\n",
"\n",
"What *is* important is that we stay mindful of how approximate the estimate is, and try to avoid choosing assumptions that bias it excessively one way or the other (being ultra-conservative or overly-optimistic).\n",
"\n",
"In the following sections we'll present some **basic, early-stage models** for estimating different business costs incurred due to forecasting errors. We aim to keep assumptions pretty high-level, so these metrics are applicable to many businesses.\n",
"\n",
"Ultimately, **it's up to you** to refine and improve these estimates based on your specific business context and the data available: We'll talk more about these opportunities in each section."
]
},
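{
"cell_type": "markdown",
"id": "d9a6e4c2-3b5f-4a0d-9c8e-4f7a0b1c2d3e",
"metadata": {},
"source": [
"For reference before moving on, the cell below is a minimal sketch of that accuracy-only view (assuming the `FORECASTS` dict and monthly `actuals_df` built above), computing RMSE and MAPE for each forecast and quantile:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e0b7f5d3-4c6a-4b1e-8d9f-5a8b1c2d3e4f",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Accuracy-only comparison, for contrast with the value-based metrics below:\n",
"for forecast in FORECASTS.values():\n",
"    joined = forecast.df.join(actuals_df, how=\"inner\")\n",
"    for quantile in forecast.quantiles:\n",
"        err = joined[quantile] - joined[actuals_amt_col]\n",
"        rmse = (err ** 2).mean() ** 0.5\n",
"        # MAPE is undefined where actual sales are zero, so exclude those rows:\n",
"        nonzero = joined[actuals_amt_col] > 0\n",
"        mape = (err[nonzero].abs() / joined.loc[nonzero, actuals_amt_col]).mean()\n",
"        print(f\"{forecast.name} ({quantile}): RMSE={rmse:,.2f}, MAPE={mape:.1%}\")"
]
},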
{
"cell_type": "markdown",
"id": "f50d6704-74cd-42a4-b255-bcbd918b8e9b",
"metadata": {
"tags": []
},
"source": [
"### Cost of excess inventory\n",
"\n",
"Regardless of whether stock ordering or production planning processes are manual or automated, over-forecasting demand generally leads to over-ordering or over-producing stock.\n",
"\n",
"While over-ordering has an immediate impact on free cash flow, the bottom-line cost can of course be complex to estimate:\n",
"\n",
"- Can the excess stock be stored and sold in future periods? Or does it have limited shelf life that might make it a write-off?\n",
"- Will demand in future periods be sufficient to sell off the excess in reasonable time? (For example: seasonal goods or one-off crazes)\n",
"- What costs does storage incur? (For example: warehouse space, extra transportation, refrigeration...)\n",
"- What rate does stored inventory get lost to shrinkage? (For example: theft, accidental damage)\n",
"- If goods do expire, are sales truly First-In-First-Out? (For example: customers choosing milk cartons with longer expiry dates in-store)\n",
"\n",
"We also need to make some assumptions about the **ordering/production decision process** to compare multiple forecasts: For example if human store managers ultimately make stocking decisions based on the demand forecast, it's not straightforward to say \"what could have been\" if we'd presented a different demand forecast at the time."
]
},
{
"cell_type": "markdown",
"id": "11d5bc00-0d1e-4f8f-99e2-717e0fb1e7d7",
"metadata": {
"tags": []
},
"source": [
"#### Starting simple\n",
"\n",
"For a rough initial estimate, we'll compare our forecasts using the following assumptions:\n",
"\n",
"- The business orders/produces exactly as many of each item as demand is forecast\n",
"- In each period (month) where the forecast exceeded actual demand, we multiply the excess by some per-item procurement cost\n",
"- The per-item cost can vary by period\n",
"\n",
"You can think of the per-item cost as the full production/procurement cost of an item, if you're modelling the impact of over-ordering on free cash flow each period... Or some discounted per-period cost for storage and shrinkage, if your products have a longer shelf-life and you're interested in actual bottom-line losses.\n",
"\n",
"First, we'll need the per-item costs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b75300d9-ed6e-40b4-ae0e-6daf55500662",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"costs_df = util.analytics.filter_to_period(\n",
" pd.read_parquet(\"s3://measuring-forecast-benefits-assets/dataset/v1/unit_costs.parquet\"),\n",
" period_start=PERIOD_START,\n",
" period_end=PERIOD_END,\n",
" timestamp_col_name=\"date\",\n",
")\n",
"\n",
"costs_df.rename(columns={\"date\": \"month\"}, inplace=True)\n",
"costs_df"
]
},
{
"cell_type": "markdown",
"id": "8a12185b-f167-4fbd-bab0-e5ad62549533",
"metadata": {},
"source": [
"These costs are already aggregated at the month level, but the dimensions are coarser than our actual forecasts: By country instead of store location, and product type instead of individual SKU.\n",
"\n",
"luckily it's fairly straightforward for us to map from location to country and SKU to product in the sample dataset, because the location and SKU IDs are just combinations of multiple fields."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "790620b0-25b1-4f37-9d67-6fb9691cf697",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"costs_df = costs_df.merge(\n",
" loc_item_combos,\n",
" on=[\"country\", \"product\"],\n",
" how=\"left\",\n",
").set_index([\"month\", \"location\", \"item_id\"])\n",
"\n",
"costs_df"
]
},
{
"cell_type": "markdown",
"id": "72c5273a-5d48-4245-be65-cfbf7cf82f1b",
"metadata": {},
"source": [
"With per-item costs normalized, we're ready to join together our actuals and forecasts and estimate the costs of over-stocking."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "854ddc65-1613-4483-abd4-d49262c3c499",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def join_forecasts(\n",
" actuals: pd.DataFrame,\n",
" forecasts: Dict[str, NormalizedForecast],\n",
" actuals_zero_fill_cols: Optional[List[str]] = None,\n",
"):\n",
" \"\"\"Join actuals and (multiple) forecasts with multi-level column names\"\"\"\n",
" result = actuals.copy()\n",
" result.columns = pd.MultiIndex.from_arrays(\n",
" [[\"Actual\"] * len(actuals.columns), [c for c in actuals.columns]],\n",
" names=[\"source\", \"column\"],\n",
" )\n",
" for forecast in forecasts.values():\n",
" if forecast.name == \"Actual\":\n",
" raise ValueError(\n",
" \"'Actual' is reserved: You can't use this for a NormalizedForecast.name!\"\n",
" )\n",
" forecast_norm = forecast.df.copy()\n",
" forecast_norm.columns = pd.MultiIndex.from_arrays(\n",
" [[forecast.name] * len(forecast_norm.columns), [c for c in forecast_norm.columns]],\n",
" names=[\"source\", \"column\"],\n",
" )\n",
" result = result.join(forecast_norm, how=\"outer\")\n",
"\n",
" if actuals_zero_fill_cols:\n",
" for colname in actuals_zero_fill_cols:\n",
" result[(\"Actual\", colname)] = result[(\"Actual\", colname)].fillna(0)\n",
"\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8502bf87-42cb-4416-b493-e0fe6da6124b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Join the actuals and forecasts together:\n",
"# (Any missing actual sales records implies 0 sales for that item)\n",
"overstock_df = join_forecasts(actuals_df, FORECASTS, actuals_zero_fill_cols=[\"sales\"])\n",
"\n",
"# Add in the item costs data:\n",
"costs_tmp = costs_df.copy()\n",
"costs_tmp.columns = pd.MultiIndex.from_arrays(\n",
" [[\"Unit Costs\"] * len(costs_tmp.columns), [c for c in costs_tmp.columns]],\n",
" names=[\"source\", \"column\"],\n",
")\n",
"overstock_df = overstock_df.join(costs_tmp, how=\"left\")\n",
"del costs_tmp\n",
"\n",
"# Any gaps in sales data should be interpreted as zero sales for that particular product/period/etc:\n",
"overstock_df.loc[:, (\"Actual\", \"sales\")].fillna(0, inplace=True)\n",
"\n",
"# For each forecast, for each quantile, estimate the over-stock losses:\n",
"for forecast in FORECASTS.values():\n",
" for quantile in forecast.quantiles:\n",
" print(f\"{forecast.name} - {quantile}\")\n",
" # Calculate how many units over real sales were forecast:\n",
" overstock_df.loc[:, (forecast.name, f\"{quantile}_overstock\")] = (\n",
" overstock_df[forecast.name][quantile] - overstock_df[\"Actual\"][\"sales\"]\n",
" ).clip(lower=0)\n",
" # Multiply by item cost for the total loss:\n",
" overstock_df.loc[:, (forecast.name, f\"{quantile}_overstock_cost\")] = (\n",
" overstock_df[forecast.name][f\"{quantile}_overstock\"]\n",
" * overstock_df[\"Unit Costs\"][\"unit_cost\"]\n",
" )\n",
"\n",
"print(\"\\n-- Missing values after join: --\")\n",
"print(overstock_df.isna().sum(), \"\\n\")\n",
"overstock_df"
]
},
{
"cell_type": "markdown",
"id": "c6dfbd81-7d8e-4753-a61f-d511176540ba",
"metadata": {},
"source": [
"You might observe (hopefully very few) missing forecast values in the above table, but shouldn't have any gaps in actuals or item unit costs. This is because different models might have different criteria for *when* they can forecast: For example, the moving average may require some warm-up months, and Amazon Forecast may exclude items from the backtest export if their sales data starts very late (such as after the backtest window begins).\n",
"\n",
"We can summarize and slice these detailed calculations as needed, to analyze the performance of each model.\n",
"\n",
"For these summaries, we'll `dropna()` to ignore any records where not all models were able to forecast. This gives fairer comparisons (otherwise models with broader support would be penalized by having potentially-non-zero `overstock` costs where other models missing data), but doesn't quantify the value of one model having broader item support than another."
]
},
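{
"cell_type": "markdown",
"id": "f1c8a6e4-5d7b-4c2f-9e0a-6b9c2d3e4f5a",
"metadata": {},
"source": [
"Before dropping those records, it can be worth a quick look at coverage. The cell below is an assumed diagnostic (not part of the cost model itself), counting how many item-month records each model failed to forecast:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2d9b7f5-6e8c-4d3a-8f1b-7c0d3e4f5a6b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Count records each model failed to forecast, before we dropna() below:\n",
"for forecast in FORECASTS.values():\n",
"    n_missing = overstock_df[(forecast.name, forecast.quantiles[0])].isna().sum()\n",
"    print(f\"{forecast.name}: {n_missing:,} of {len(overstock_df):,} records missing\")"
]
},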
{
"cell_type": "code",
"execution_count": null,
"id": "70923e1d-952c-4a8d-9632-8d08a701b2ad",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Filter to just the final costs columns:\n",
"overstock_costs = overstock_df[\n",
" [\n",
" (forecast.name, f\"{quantile}_overstock_cost\")\n",
" for forecast in FORECASTS.values()\n",
" for quantile in forecast.quantiles\n",
" ]\n",
"].dropna()\n",
"\n",
"# Top-level summary for each forecast and quantile:\n",
"overstock_costs.sum().map(\"${:,.2f}\".format)"
]
},
{
"cell_type": "markdown",
"id": "0ca3316f-3379-45db-b14c-0cd018b924cd",
"metadata": {},
"source": [
"As a quick validation check here, it should make sense that the top-level `p90_overstock_cost` is much greater than the `p10_overstock_cost`: As discussed further [here](https://aws.amazon.com/blogs/machine-learning/amazon-forecast-now-supports-the-generation-of-forecasts-at-a-quantile-of-your-choice/), Amazon Forecast can generate quantile forecasts from p1 to p99 to characterize likely lower- and upper-bounds of actual demand, and reflect the uncertainty of demand as it changes over time.\n",
"\n",
"If we take a very **low** quantile forecast, and only order/produce that many products, then of course we'll see minimal costs incurred due to over-stocking: The trade-off would be that we **miss out on a lot of potential sales** due to not having sufficient stock on hand. Those lost revenues are what we'll discuss in the next section.\n",
"\n",
"Between the low-level and top-level views, you can of course also slice and summarize intermediate views for individual managers: For example by store as shown below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4d0fd72-4672-45a1-9c19-275cd1666015",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"overstock_costs.groupby([\"location\"]).sum().applymap(\"${:,.2f}\".format)"
]
},
{
"cell_type": "markdown",
"id": "4bf00f05-0dd8-4452-8a40-9fee72626d02",
"metadata": {
"tags": []
},
"source": [
"### Cost of stock-out events\n",
"\n",
"On the opposite side to over-stocking, what happens when we *under-forecast* and don't order or manufacture enough product to meet customer demand?\n",
"\n",
"Running out of stock of products is sometimes called a \"stock-out\" event, and this too is bad for business:\n",
"\n",
"1. As an immediate result, the business will **lose revenue** for any customers who tried to buy the product but couldn't.\n",
"2. In the longer term, limited selection or patchy availability will **erode customer loyalty** and may harm the business' market share.\n",
"\n",
"Of these factors, direct lost revenue (1) is perhaps the easier to start estimating. However, there are still potential complexities to be aware of:\n",
"\n",
"- We need some kind of estimate of how many potential sales were missed on days/periods where stock ran out. Of course the original demand forecast itself is something we can use here, but:\n",
" - In the typical situation where we're *comparing multiple forecasts*, which one should we take as our \"best guess\"?\n",
" - In retrospect we do actually have more information available to us than when the forecast was originally made. For example if sales tracked above forecast for the first 20 days of the month before stock ran out, doesn't that mean our \"best guess\" of demand in the final 10 days would likely also be higher than the original forecast? Is it worth training a new model specifically to answer this? Or (if working with probabilistic models) choosing a higher quantile for the \"best guess\"?\n",
"- Sometimes actual sales can be restricted *even in low-stock periods* before inventory systems record stock finally dropping to zero:\n",
" - In high-demand periods in physical retail, there may be periods where no stock is available on shelves despite inventory being available in back-room/storage\n",
" - Unless stock counts and reconciliations are performed regularly, shrinkage factors like theft and loss can cause stock tracking systems to show limited availability when there is none in practice."
]
},
{
"cell_type": "markdown",
"id": "81f276d6-4ba9-414b-911c-ae16b4d1d327",
"metadata": {
"tags": []
},
"source": [
"#### Starting simple\n",
"\n",
"For a rough initial estimate, we'll compare our forecasts using the following assumptions:\n",
"\n",
"1. The business orders/produces exactly as many of each item as demand is forecast\n",
"2. In each period (month) where the actual demand exceeded the forecast, we multiply the shortfall by the current item sale price\n",
"3. Item sale prices can vary over periods\n",
"\n",
"Of course in practice, if (1) was true then (2) could never happen: Observed sales would be strictly less than or equal to the forecast in each month. For a self-consistent, online evaluation of one forecast model already in production, you could directly use inventory data to identify stock-out days and use the forecasted demand for those days as your basis. However, what you're really evaluating there is the *end-to-end loop* of what inventory decisions that forecast drove - so that doesn't really generalize to comparing multiple models. This is discussed further in the \"[A/B Testing](#abtesting)\" section below.\n",
"\n",
"With this method you could choose to use either actual item sale prices (to model lost revenue) or just item margins (to model lost profit). We'll refer to \"prices\" for consistency, and start by loading that dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7bace90-c2eb-4f3a-ad3d-12774bd2abde",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"prices_df = util.analytics.filter_to_period(\n",
" pd.read_parquet(\"s3://measuring-forecast-benefits-assets/dataset/v1/prices_promos.parquet\"),\n",
" period_start=PERIOD_START,\n",
" period_end=PERIOD_END,\n",
" timestamp_col_name=\"date\",\n",
")\n",
"prices_df"
]
},
{
"cell_type": "markdown",
"id": "2292815e-4397-436e-885b-250fec40fa39",
"metadata": {
"tags": []
},
"source": [
"In this example price data and actual sales are both available at the daily level which might allow for a more detailed model of when in the month stock would run out and what the real average price of missed sales might be. However, since our baseline forecast is only monthly with no breakdown by day, such detailed comparison would require some assumptions anyway.\n",
"\n",
"For an basic view, we'll just take an average unit price for the month over all days (without weighting by actual or forecast sales on those days):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f500278-a59b-49bd-a199-134ea7cc8db0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"prices_df[\"month\"] = prices_df[\"date\"].dt.strftime(\"%Y-%m\")\n",
"\n",
"prices_df = prices_df.groupby(\n",
" [\"month\", \"country\", \"product\"]\n",
").agg(\n",
" {\"unit_price\": \"mean\"}\n",
").reset_index().merge(\n",
" loc_item_combos,\n",
" on=[\"country\", \"product\"],\n",
" how=\"left\",\n",
").set_index([\"month\", \"location\", \"item_id\"])\n",
"\n",
"prices_df"
]
},
{
"cell_type": "markdown",
"id": "7d07d801-da2d-45f4-bc52-a7190198fd88",
"metadata": {},
"source": [
"As for over-stock cost estimation earlier, we're now ready to join our actual data with forecasts and item prices, to estimate the lost revenues due to stock-out events:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64fbd792-45e7-4505-91cd-b20ff4eba2e9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Join the actuals and forecasts together:\n",
"# (Any missing actual sales records implies 0 sales for that item)\n",
"stockout_df = join_forecasts(actuals_df, FORECASTS, actuals_zero_fill_cols=[\"sales\"])\n",
"\n",
"# Add in the item costs data:\n",
"prices_tmp = prices_df.copy()\n",
"prices_tmp.columns = pd.MultiIndex.from_arrays(\n",
" [[\"Unit Prices\"] * len(prices_tmp.columns), [c for c in prices_tmp.columns]],\n",
" names=[\"source\", \"column\"],\n",
")\n",
"stockout_df = stockout_df.join(prices_tmp, how=\"left\")\n",
"del prices_tmp\n",
"\n",
"# Any gaps in sales data should be interpreted as zero sales for that particular product/period/etc:\n",
"stockout_df.loc[:, (\"Actual\", \"sales\")].fillna(0, inplace=True)\n",
"\n",
"# For each forecast, for each quantile, estimate the over-stock losses:\n",
"for forecast in FORECASTS.values():\n",
" for quantile in forecast.quantiles:\n",
" print(f\"{forecast.name} - {quantile}\")\n",
" # Calculate how many units over real sales were forecast:\n",
" stockout_df.loc[:, (forecast.name, f\"{quantile}_missedsales\")] = (\n",
" stockout_df[\"Actual\"][\"sales\"] - stockout_df[forecast.name][quantile]\n",
" ).clip(lower=0)\n",
" # Multiply by item cost for the total loss:\n",
" stockout_df.loc[:, (forecast.name, f\"{quantile}_missedsales_rev\")] = (\n",
" stockout_df[forecast.name][f\"{quantile}_missedsales\"]\n",
" * stockout_df[\"Unit Prices\"][\"unit_price\"]\n",
" )\n",
"\n",
"print(\"\\n-- Missing values after join: --\")\n",
"print(stockout_df.isna().sum(), \"\\n\")\n",
"stockout_df"
]
},
{
"cell_type": "markdown",
"id": "5fa19d0a-ac4d-47f8-9c96-9f256b7ede88",
"metadata": {},
"source": [
"As before, there may be a small number of missing forecast values due to differing criteria for each model to be able to forecast.\n",
"\n",
"Again, we can drop any records with missing data and aggregate this detail into a global summary for each forecast and quantile:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "17606c5f-15fe-4afd-9c10-c72f7ed2152b",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Filter to just the final costs columns:\n",
"stockout_costs = stockout_df[\n",
" [\n",
" (forecast.name, f\"{quantile}_missedsales_rev\")\n",
" for forecast in FORECASTS.values()\n",
" for quantile in forecast.quantiles\n",
" ]\n",
"].dropna()\n",
"\n",
"# Top-level summary for each forecast and quantile:\n",
"stockout_costs.sum().map(\"${:,.2f}\".format)"
]
},
{
"cell_type": "markdown",
"id": "616d3672-5d73-4c7c-ab7d-52ef0dfda2a5",
"metadata": {},
"source": [
"The results here are somewhat opposite to the over-stocking costs calculated earlier: You'll see losses are much greater for *low-quantile* forecasts (like `p10` from Amazon Forecast or SageMaker Canvas) than high-quantile forecasts like `p90`. If we take a lower-bound forecast and only order sufficient stock to cover that, then of course we would expect bigger losses in potential revenue due to running out of inventory.\n",
"\n",
"As in the previous section, we could also summarize this to different levels for example by individual store:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d6d042c-0d0a-42d6-8f69-fd662d747fab",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"stockout_costs.groupby([\"location\"]).sum().applymap(\"${:,.2f}\".format)"
]
},
{
"cell_type": "markdown",
"id": "c151c99a-23a0-49c5-b9bb-0cca5d5706f0",
"metadata": {
"tags": []
},
"source": [
"### Total costs and benefits\n",
"\n",
"So far we've identified and estimated multiple business inefficiencies caused by forecasting errors:\n",
"\n",
"- Reduced free cash flow or bottom-line written-off cost of excess inventory due to over-ordering (depending whether you used full item procurement cost, or just write-off cost proportion)\n",
"- Lost revenue or bottom-line profit from sales missed due to running out of stock (depending whether you used full item sale price, or just margin)\n",
"\n",
"Many businesses will apply **different weight to these different metrics**, and so it may not be appropriate to simply sum up the dollar values: Different trade-offs might be important to a business between revenue maximization and cost reduction.\n",
"\n",
"One way to combine all the factors would be to express each in terms of bottom-line impact. Another could be to take a weighted combination of revenue growth and cost reduction. However you tackle it, you might see **trade-offs** like we showed in the extreme case of choosing biased upper-bound or lower-bound forecasts!\n",
"\n",
"The code below shows a way you might bring the top-level summary costs together for the different forecast models. You should be able to see that Amazon Forecast / SageMaker Canvas quantiles like `mean` or `p50` significantly out-perform the Moving Average baseline forecast overall, but with different trade-offs between the business metrics:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c201f523-1b22-4a30-ab9c-ea33da58ab20",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Create MultiIndex series by [forecast][metric]\n",
"tmp = pd.concat([stockout_costs.sum(), overstock_costs.sum()]).rename(\"value\").reset_index(level=1)\n",
"\n",
"# Split the raw metric names (e.g. mean_missedsales_rev) to their quantile and metric:\n",
"column_parted = tmp[\"column\"].str.partition(\"_\") # Assume no underscores in quantile names!\n",
"tmp[\"quantile\"] = column_parted[0]\n",
"tmp[\"metric\"] = column_parted[2]\n",
"tmp.drop(columns=[\"column\"], inplace=True)\n",
"\n",
"# Index and pivot the data for a nice view:\n",
"summary = tmp.set_index([\"quantile\"], append=True).pivot(columns=[\"metric\"])\n",
"\n",
"summary.applymap(\"${:,.2f}\".format)"
]
},
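{
"cell_type": "markdown",
"id": "b3e0c8a6-7f9d-4e4b-9a2c-8d1e4f5a6b7c",
"metadata": {},
"source": [
"To illustrate the weighted-combination idea mentioned above, the sketch below applies example weights to the summary. These weights are purely illustrative assumptions, not recommendations - they should reflect your own business's trade-offs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4f1d9b7-8a0e-4f5c-8b3d-9e2f5a6b7c8d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Illustrative weighted combination (assumed weights - tune to your business):\n",
"W_MISSED_REV = 1.0  # Full weight on lost revenue\n",
"W_OVERSTOCK = 0.6  # e.g. if some excess stock is eventually sold through\n",
"\n",
"weighted_total = (\n",
"    summary[(\"value\", \"missedsales_rev\")] * W_MISSED_REV\n",
"    + summary[(\"value\", \"overstock_cost\")] * W_OVERSTOCK\n",
").rename(\"weighted_inefficiency\")\n",
"\n",
"weighted_total.sort_values().map(\"${:,.2f}\".format)"
]
},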
{
"cell_type": "code",
"execution_count": null,
"id": "ef714dab-63c6-498b-bf33-233341d63dd9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"ax = (summary / 1000).plot.barh(\n",
" figsize=(10, 4), # (width, height)\n",
" stacked=True,\n",
" title=\"Business Inefficiencies by Forecast (Lower is Better)\",\n",
")\n",
"ax.grid(axis=\"x\")\n",
"ax.set_xlabel(\"Thousand Dollars\")\n",
"ax.set_ylabel(\"Forecast Model, Quantile\")\n",
"ax.get_figure().savefig(\"dataset/result-summary.png\", bbox_inches=\"tight\")"
]
},
{
"cell_type": "markdown",
"id": "58d3c311-dd82-4964-9607-f3174380b2ac",
"metadata": {},
"source": [
"In our tests as shown below (your exact numbers may vary), re-stocking based on Amazon Forecast `mean` or `p50` (median) quantiles delivered the best performance with combined business inefficiencies around $1M. Even taking an extreme quantile from Forecast like `p10` or `p90` delivered better results than ordering based on the recent moving average of sales (around $7M inefficiencies). SageMaker Canvas performed similarly to Forecast, which is no great surprise as it uses Amazon Forecast under the hood for forecasting models. As discussed in the Canvas notebook, our data preparation was a little different between the two AI/ML services and likely biased the comparison somewhat against Canvas.\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"id": "eb36aa36-df4a-415a-b85e-1e2e170334e7",
"metadata": {},
"source": [
"## Online evaluation and A/B testing\n",
"\n",
"As the examples above have hopefully started to illustrate, when we talk about the \"business value of better demand forecasting\", what we *really* mean is the value of the **decisions you make based on the forecast**.\n",
"\n",
"Our example metrics have both been about the stock ordering / production planning process, making some **assumptions** about how your ordering or production choices might have changed under different forecasts, so we can model whether you would've had too much or too little stock on hand to meet demand.\n",
"\n",
"For many businesses these decisions aren't (yet?) fully automated, so you might well question those assumptions: How do I really know what stock the store manager would have ordered if I showed them a forecast for X instead of Y last month?\n",
"\n",
"Here are two ways you could refine your estimates further in situations like these:\n",
"\n",
"1. **Extend your retrospective model** to try and simulate the complexities of what would happen to ordering decisions and therefore stock levels under different forecasts.\n",
" - As we said earlier, a business case doesn't have to be exact: If you think there are important dependencies or feedbacks not being captured, you can iteratively refine your model starting from something simple.\n",
"2. **A/B test in the real-world** to try and directly measure your impact.\n",
"\n",
"With manual processes, simulating \"what could have been\" may be hard but estimating the cost of your **actual** waste (stock left in inventory, or forecasted sales during stock-out periods) should still be practical.\n",
"\n",
"You could consider running limited live pilots where both the old and new proposed forecasting models are used *in parallel*, to try and quantify real-world rewards.\n",
"\n",
"Applying this at a fine grain (for example randomly selecting products to use forecast model A or B, instead of big-bang deployment to an entire store or product category); and keeping the model selection hidden from decision-makers; are two practices you could apply to separate real signals from noise and avoid bias driven by big, widely-communicated changes."
]
},
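{
"cell_type": "markdown",
"id": "d5a2e0c8-9b1f-4a6d-9c4e-0f3a6b7c8d9e",
"metadata": {},
"source": [
"As a concrete sketch of that fine-grained assignment (the function name, salt, and group labels below are illustrative assumptions), you could hash each item-location to a stable test group:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6b3f1d9-0c2a-4b7e-8d5f-1a4b7c8d9e0f",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import hashlib\n",
"\n",
"\n",
"def ab_group(location: str, item_id: str, salt: str = \"forecast-ab-v1\") -> str:\n",
"    \"\"\"Deterministically assign an item-location to forecast model 'A' or 'B'\"\"\"\n",
"    digest = hashlib.sha256(f\"{salt}|{location}|{item_id}\".encode()).digest()\n",
"    return \"model_B\" if digest[0] % 2 else \"model_A\"\n",
"\n",
"\n",
"ab_assignments = loc_item_combos.apply(\n",
"    lambda row: ab_group(row[\"location\"], row[\"item_id\"]), axis=1\n",
")\n",
"ab_assignments.value_counts()"
]
},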
{
"cell_type": "markdown",
"id": "c2bc3587-4d4f-439b-b51a-756f7a5942c4",
"metadata": {},
"source": [
"## Conclusions\n",
"\n",
"When comparing and refining forecasting models, it's important to **elevate your analysis** from science-oriented accuracy metrics to business-oriented value metrics where possible - to help you understand the real-world impact of model improvements, when to dive deeper, and when to shift focus to other more urgent projects.\n",
"\n",
"It's common for the downstream impacts of forecasting improvement to be complex, and for human decision-makers to intervene between the initial forecast output and the final outcomes that drive business costs or revenue.\n",
"\n",
"This doesn't mean forecast owning teams should give up trying to understand the impact of their investments though: You can start out with simple heuristics, and iteratively refine as you explore the business context. If counter-factual \"what could have been\" analysis is too complex in your case for estimates to be useful, you could explore running live A/B tests to track the difference in real-world value between your candidate models.\n",
"\n",
"Inventory management is one practical place to start, and in this example we showed some simplistic models for estimating how demand forecasting errors might contribute to increased costs, impaired cashflow, and missed revenue through the stock ordering/production decisions they drive.\n",
"\n",
"By progressively building maturity in modelling the end-to-end impacts of forecasting, you can start to unlock other forecasting-related use-cases like:\n",
"\n",
"- Pricing optimization, by analyzing the price elasticity of the demand forecast\n",
"- Increased decision automation, by building confidence in automated ordering rules over time\n",
"- End-to-end supply chain optimization, starting to consider lead times and other factors\n",
"\n",
"For more information about how AWS can help you build a data-driven supply chain and incorporate ML into your business, check out:\n",
"\n",
"- [AWS Supply Chain](https://aws.amazon.com/aws-supply-chain/), a fully-managed service to unify supply chain data and provide actionable, ML-powered insights\n",
"- [AWS Supply Chain Competency Partners](https://aws.amazon.com/industrial/supply-chain-management/partners/), for AWS partners with validated experience in supply chain solutions\n",
"- ...And if you'd like to dive deeper, the [Operations Research](https://www.amazon.science/research-areas/operations-research-and-optimization) and [Machine Learning](https://www.amazon.science/research-areas/machine-learning) sections of the [Amazon Science blog](https://www.amazon.science/) share recent research from Amazon on related topics."
]
}
],
"metadata": {
"availableInstances": [
{
"_defaultOrder": 0,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 4,
"name": "ml.t3.medium",
"vcpuNum": 2
},
{
"_defaultOrder": 1,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 8,
"name": "ml.t3.large",
"vcpuNum": 2
},
{
"_defaultOrder": 2,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 16,
"name": "ml.t3.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 3,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 32,
"name": "ml.t3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 4,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 8,
"name": "ml.m5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 5,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 16,
"name": "ml.m5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 6,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 32,
"name": "ml.m5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 7,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 64,
"name": "ml.m5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 8,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 128,
"name": "ml.m5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 9,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 192,
"name": "ml.m5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 10,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 256,
"name": "ml.m5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 11,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 384,
"name": "ml.m5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 12,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 8,
"name": "ml.m5d.large",
"vcpuNum": 2
},
{
"_defaultOrder": 13,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 16,
"name": "ml.m5d.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 14,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 32,
"name": "ml.m5d.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 15,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 64,
"name": "ml.m5d.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 16,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 128,
"name": "ml.m5d.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 17,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 192,
"name": "ml.m5d.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 18,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 256,
"name": "ml.m5d.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 19,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"memoryGiB": 384,
"name": "ml.m5d.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 20,
"_isFastLaunch": true,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 4,
"name": "ml.c5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 21,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 8,
"name": "ml.c5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 22,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 16,
"name": "ml.c5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 23,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 32,
"name": "ml.c5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 24,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 72,
"name": "ml.c5.9xlarge",
"vcpuNum": 36
},
{
"_defaultOrder": 25,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 96,
"name": "ml.c5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 26,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 144,
"name": "ml.c5.18xlarge",
"vcpuNum": 72
},
{
"_defaultOrder": 27,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"memoryGiB": 192,
"name": "ml.c5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 28,
"_isFastLaunch": true,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 16,
"name": "ml.g4dn.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 29,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 32,
"name": "ml.g4dn.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 30,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 64,
"name": "ml.g4dn.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 31,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 128,
"name": "ml.g4dn.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 32,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"memoryGiB": 192,
"name": "ml.g4dn.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 33,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 256,
"name": "ml.g4dn.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 34,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 61,
"name": "ml.p3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 35,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"memoryGiB": 244,
"name": "ml.p3.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 36,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"memoryGiB": 488,
"name": "ml.p3.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 37,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"memoryGiB": 768,
"name": "ml.p3dn.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 38,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 16,
"name": "ml.r5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 39,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 32,
"name": "ml.r5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 40,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 64,
"name": "ml.r5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 41,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 128,
"name": "ml.r5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 42,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 256,
"name": "ml.r5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 43,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 384,
"name": "ml.r5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 44,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 512,
"name": "ml.r5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 45,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"memoryGiB": 768,
"name": "ml.r5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 46,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 16,
"name": "ml.g5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 47,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 32,
"name": "ml.g5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 48,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 64,
"name": "ml.g5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 49,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 128,
"name": "ml.g5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 50,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"memoryGiB": 256,
"name": "ml.g5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 51,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"memoryGiB": 192,
"name": "ml.g5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 52,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"memoryGiB": 384,
"name": "ml.g5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 53,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"memoryGiB": 768,
"name": "ml.g5.48xlarge",
"vcpuNum": 192
}
],
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (Data Science 3.0)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}