{ "cells": [ { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "**Measuring Demand Forecasting benefits series**\n", "\n", "# Generating forecasts with Amazon Forecast\n", "\n", "> *This notebook should work with the **`Data Science 3.0`** kernel in SageMaker Studio (older versions may see errors), and the default `ml.t3.medium` instance type (2 vCPU + 4 GiB RAM)*\n", "\n", "In this notebook we'll walk through the process of importing data, training models, extracting metrics, and (optionally) producing forward-looking forecasts in [Amazon Forecast](https://aws.amazon.com/forecast/) - using the synthetic sample dataset and Python code.\n", "\n", "You could instead work with Amazon Forecast manually through [the AWS Console UI](https://docs.aws.amazon.com/forecast/latest/dg/gs-console.html), other APIs and SDKs, or with more advanced pipeline automations like [this one using AWS CDK](https://github.com/aws-samples/amazon-forecast-mlops-pipeline-cdk) or the [Improving Forecast Accuracy with Machine Learning Solution](https://aws.amazon.com/solutions/implementations/improving-forecast-accuracy-with-machine-learning/) from AWS Solutions.\n", "\n", "This notebook is provided to give users a relatively automated way to build Amazon Forecast models, in preparation for evaluating and comparing forecast business benefits. For more in-depth and step-by-step introductions to the mechanics of Forecast itself, check out the official [aws-samples/amazon-sagemaker-examples](https://github.com/aws-samples/amazon-forecast-samples) repository." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "\n", "1. [Dependencies and setup](#Dependencies-and-setup)\n", "1. [Prepare data](#Prepare-data)\n", "1. [Define and import datasets in Amazon Forecast](#Define-and-import-datasets-in-Amazon-Forecast)\n", "1. [Train predictor model](#Train-predictor-model)\n", "1. [Export predictor backtest results](#Export-predictor-backtest-results)\n", "1. [(Optional) Create and export a forecast](#forecast)\n", "1. 
[Next steps](#Next-steps)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Dependencies and setup\n", "\n", "Before getting started, we'll first import the libraries this notebook needs (all of which should be pre-installed on the supported SageMaker notebook kernel listed above), and configure where in Amazon S3 the input and output datasets should be stored:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2\n", "\n", "# Python Built-Ins:\n", "import json\n", "import logging\n", "import os\n", "from time import sleep # For polling waits\n", "\n", "# External Dependencies:\n", "import boto3 # General-purpose AWS SDK for Python\n", "import numpy as np # Numerical/math processing tools\n", "import pandas as pd # Tabular/dataframe processing tools\n", "import sagemaker # SageMaker SDK used just to look up default S3 bucket\n", "\n", "# Local Dependencies:\n", "import util\n", "\n", "# Configuration:\n", "BUCKET_NAME = sagemaker.Session().default_bucket()\n", "BUCKET_PREFIX = \"measuring-forecast-benefits/\"\n", "\n", "os.makedirs(\"dataset\", exist_ok=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### IAM access permissions\n", "\n", "- To access data on your behalf, Amazon Forecast needs an [AWS IAM Execution Role](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-iam-roles.html) with appropriate S3 permissions.\n", "- To create and manage Amazon Forecast resources, **this notebook** also needs an Execution Role with Amazon Forecast permissions. If you're running this notebook in Amazon SageMaker, it will have an [associated role already](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). If you're running this notebook locally instead, you'll need to [set up your CLI credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).\n", "\n", "These are normally **two separate roles**, but could be combined into one if it's appropriate for your security strategy.\n", "\n", "First, we'll check below that **this notebook** has basic Amazon Forecast access:\n", "\n", "> ⚠️ **If this check fails:** Find your notebook's identity/role in the [IAM Console](https://console.aws.amazon.com/iamv2/home?#/roles) and consider attaching the `AmazonForecastFullAccess` permission." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "forecast = boto3.client(\"forecast\")\n", "\n", "try:\n", " forecast.list_dataset_groups()\n", " print(\"SUCCESS: Notebook can call (at least basic) Amazon Forecast APIs\")\n", "except Exception as err:\n", " try: # Try to look up the NB role to help users find it for fixing permissions:\n", " nb_role_arn = sagemaker.get_execution_role()\n", " except:\n", " nb_role_arn = None\n", " print(\n", " \"ERROR: Notebook does not have access to Amazon Forecast APIs. 
Try attaching the \"\n", " \"'AmazonForecastFullAccess' permission to your execution role.\\n\\nDetected Role: %s\"\n", " % nb_role_arn\n", " )\n", " raise err" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For your **Amazon Forecast role**, you'll need to either [set this up by hand](https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-iam-roles.html) or grant your notebook additional permissions to create it for you.\n", "\n", "> ℹ️ **Tip:** If you have [Amazon SageMaker Canvas](https://aws.amazon.com/sagemaker/canvas/) set up [with forecasting enabled](https://docs.aws.amazon.com/sagemaker/latest/dg/canvas-set-up-forecast.html), you may already be able to use your SageMaker Execution Role as a Forecast role. Try setting `forecast_role_arn = nb_role_arn` below.\n", "\n", "Edit the cell below to insert your own Amazon Forecast Role ARN - or you can *try* to run it as-is to set up the role via the notebook:\n", "\n", "> ⚠️ **Note:** If you *do* temporarily attach administrative permissions like `IAMFullAccess` to your notebook execution role to allow it to create the Amazon Forecast role on your behalf, remember to remove these permissions when no longer needed - following the [principle of least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# TODO: Replace below with your own ARN if you create one manually:\n", "forecast_role_arn = util.iam.ensure_default_forecast_role()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There's **one final requirement** that we can't really test for here: Your notebook role needs permission to \"pass\" (use) your Amazon Forecast role.\n", "\n", "> ⚠️ [**Check**](https://console.aws.amazon.com/iamv2/home?#/roles) your notebook's execution Role/identity has an attached policy granting the `iam:PassRole` permission on your Amazon Forecast role.\n", ">\n", "> If you need, you can **Create an inline policy on your notebook role** in the AWS Console to grant this access. The JSON for this policy could be similar to:\n", ">\n", "> ```json\n", "> {\n", "> \"Version\": \"2012-10-17\",\n", "> \"Statement\": [\n", "> {\n", "> \"Sid\": \"PassRoleForForecast\",\n", "> \"Effect\": \"Allow\",\n", "> \"Action\": \"iam:PassRole\",\n", "> \"Resource\": \"\"\n", "> }\n", "> ]\n", "> }\n", "> ```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prepare data\n", "\n", "With service permissions set up, we're ready to prepare our datasets and start using them in Amazon Forecast.\n", "\n", "We'll choose the [RETAIL domain](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-domains-ds-types.html) for this project, which will influence the [minimum required schema](https://docs.aws.amazon.com/forecast/latest/dg/retail-domain.html) for the prepared datasets. You can find more information about preparing and importing data for Amazon Forecast in the [Importing Datasets](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html), [Dataset Guidelines](https://docs.aws.amazon.com/forecast/latest/dg/dataset-import-guidelines-troubleshooting.html), and [Guidelines and Quotas](https://docs.aws.amazon.com/forecast/latest/dg/limits.html) pages of the Amazon Forecast Developer Guide." 
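, "\n", "\n", "For orientation, the minimum Target Time-Series schema for the RETAIL domain looks roughly like the illustrative sketch below (`item_id`, `timestamp` and `demand` are the required fields; we'll also carry `location` as an extra dimension, and auto-detect the schema we actually use from the prepared dataframe later in this notebook):\n", "\n", "```python\n", "# Illustrative only - the schema we actually use is auto-detected further below:\n", "minimal_retail_tts_schema = {\n", "    \"Attributes\": [\n", "        {\"AttributeName\": \"item_id\", \"AttributeType\": \"string\"},\n", "        {\"AttributeName\": \"timestamp\", \"AttributeType\": \"timestamp\"},\n", "        {\"AttributeName\": \"demand\", \"AttributeType\": \"float\"},\n", "    ]\n", "}\n", "```\n"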
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Target Time-Series (TTS)\n", "\n", "The mandatory TTS dataset records the historical values of the quantity you actually want to predict: In this case, sales of products.\n", "\n", "For our sample, we'll first load the synthetic sales dataset before adjusting it for Amazon Forecast:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "sales_raw_df = pd.read_parquet(\"s3://measuring-forecast-benefits-assets/dataset/v1/sales.parquet\")\n", "sales_raw_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This dataset is already close to our target format: We'll use `sku` as the `item_id` field and treat `location` as a [dimension](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-predictor.html#creating-predictors).\n", "\n", "The only preparation needed is to:\n", "\n", "- Rename some columns to match the [required field names](https://docs.aws.amazon.com/forecast/latest/dg/retail-domain.html#target-time-series-type-retail-domain)\n", "- Explicitly store timestamps in a [supported timestamp format](https://docs.aws.amazon.com/forecast/latest/dg/dataset-import-guidelines-troubleshooting.html) - we'll use daily `yyyy-MM-dd` as this data has no sub-daily variations" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "tts_df = sales_raw_df.rename(\n", " columns={\"date\": \"timestamp\", \"sku\": \"item_id\", \"sales\": \"demand\"},\n", ")\n", "tts_df[\"timestamp\"] = tts_df[\"timestamp\"].dt.strftime(\"%Y-%m-%d\")\n", "tts_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once the data is prepared, we're ready to upload it to Amazon S3 to use with the Forecast service:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "tts_s3_uri = f\"s3://{BUCKET_NAME}/{BUCKET_PREFIX}training-data/tts/tts.parquet\"\n", "tts_df.to_parquet(tts_s3_uri, index=False)\n", "print(f\"Uploaded TTS to: {tts_s3_uri}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also need to compile the [Amazon Forecast Schema](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html#howitworks-dataset) for each dataset to be imported, so may as well detect that automatically from the dataframe columns here:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "FORECAST_DIMENSIONS = [col for col in tts_df if col not in (\"timestamp\", \"demand\", \"item_id\")]\n", "print(\"Forecast Dimensions:\\n \", FORECAST_DIMENSIONS)\n", "\n", "N_DIMENSION_COMBOS = len(tts_df[[\"item_id\"] + FORECAST_DIMENSIONS].drop_duplicates())\n", "print(f\"{N_DIMENSION_COMBOS} unique item/dimension combinations\", \"\\n\")\n", "\n", "tts_schema = util.amzforecast.autodiscover_dataframe_schema(\n", " tts_df,\n", " overrides={\"demand\": \"float\"},\n", ")\n", "print(\"TTS Dataset Schema:\\n\" + json.dumps(tts_schema, indent=2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One other thing we need to configure **before we prepare RTS data** is the **forecast horizon**.\n", "\n", "When preparing other time-varying inputs later we'll need a solid understanding of what future period the forecast itself covers, for aligning our RTS inputs to cover that period. 
For more information on what frequencies and time granularities Amazon Forecast supports, see [this page](https://docs.aws.amazon.com/forecast/latest/dg/data-aggregation.html) in the Developer Guide." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Configure forecast horizon and frequency:\n", "FORECAST_HORIZON = pd.offsets.Day() * 31 # or e.g. Hour(), Week(), MonthEnd()\n", "print(f\"Configured forecast horizon: {FORECAST_HORIZON}\")\n", "FORECAST_FREQ = FORECAST_HORIZON.base.freqstr # This should be Amazon Forecast compatible\n", "print(f\"({FORECAST_HORIZON.n} units of '{FORECAST_FREQ}')\\n\")\n", "\n", "# Check TTS history end date (forecast start minus one) is as you expect:\n", "TTS_END_DATE = pd.to_datetime(max(tts_df[\"timestamp\"]))\n", "print(f\"Historical data end date: {TTS_END_DATE}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once the data is prepared and uploaded to Amazon S3, and we have the schema extracted, we can delete unnecessary variables to save notebook memory. The only extra information we'll need to keep for later is which combinations of location and item_id are present in the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "item_location_combos = tts_df[[\"location\", \"item_id\"]].drop_duplicates()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "del sales_raw_df\n", "del tts_df" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Static Item Metadata\n", "\n", "The optional [Item Metadata dataset](https://docs.aws.amazon.com/forecast/latest/dg/item-metadata-datasets.html) records metadata about forecast items (i.e. products/SKUs) that **does not change over time**: I.e. a table of attributes keyed by unique `item_id`.\n", "\n", "Note that any other **dimensions** in the TTS dataset are not included in this lookup: Item Metadata has one key attribute only, so in cases like this sample data you may want to choose between representing certain fields (like store/location) as either **dimensions**, or **incorporating them into the item ID** so that `skuXYZ-storeABC` becomes one \"item ID\".\n", "\n", "In this example we'll keep `location` as a dimension, so the Item Metadata dataset cannot include it or any product attributes that are location-specific." 
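, "\n", "\n", "If you *did* want to take that item-ID-merge approach instead, the preparation might look something like the hypothetical sketch below (not used in this notebook; shown against the raw sales dataframe loaded earlier), at the cost of a much larger item catalogue:\n", "\n", "```python\n", "# Hypothetical alternative (NOT used here): fold location into the item ID, so that\n", "# location-specific attributes could be carried in Item Metadata instead:\n", "tts_alt_df = sales_raw_df.rename(columns={\"date\": \"timestamp\", \"sales\": \"demand\"})\n", "tts_alt_df[\"item_id\"] = tts_alt_df[\"sku\"] + \"-\" + tts_alt_df[\"location\"]\n", "tts_alt_df = tts_alt_df.drop(columns=[\"sku\", \"location\"])\n", "```\n"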
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "metadata_raw_df = pd.read_csv(\"s3://measuring-forecast-benefits-assets/dataset/v1/metadata.csv\")\n", "metadata_raw_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again there's very little preparation required for this dataset: We'll just rename the `sku` field to required name `item_id`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "metadata_df = metadata_raw_df.rename(columns={\"sku\": \"item_id\"})\n", "metadata_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We're then ready to upload the item metadata to Amazon S3:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "metadata_s3_uri = f\"s3://{BUCKET_NAME}/{BUCKET_PREFIX}training-data/metadata/metadata.csv\"\n", "metadata_df.to_csv(metadata_s3_uri, index=False)\n", "print(f\"Uploaded Item Metadata to: {metadata_s3_uri}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "...And extract the schema:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "metadata_schema = util.amzforecast.autodiscover_dataframe_schema(metadata_df)\n", "print(json.dumps(metadata_schema, indent=2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Related Time-Series (RTS)\n", "\n", "The [optional Related Time-Series dataset](https://docs.aws.amazon.com/forecast/latest/dg/related-time-series-datasets.html) provides other input variables to your forecast that **vary over time**.\n", "\n", "Popular time-varying features to help predict future demand include pricing and promotions, public holidays and events, and even weather information. We need to prepare one consolidated dataset of all the RTS features we wish to include, in this sample building from two base datasets: Public holidays by country, and product prices/promotions - as loaded below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Weekends and holidays\n", "\n", "The source weekend and holiday data has already been prepared in a flat file format. 
However, it extends for a full year beyond our TTS end date - so we need to trim it for only the forecasting period of interest:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "holiday_raw_df = pd.read_csv(\n", " \"s3://measuring-forecast-benefits-assets/dataset/v1/weekend_holiday_flag.csv\",\n", ")\n", "holiday_raw_df[\"date\"] = pd.to_datetime(holiday_raw_df[\"date\"]) # (As CSV)\n", "\n", "# Filter out any data beyond the end of the forecasting horizon:\n", "holiday_raw_df = holiday_raw_df[\n", " holiday_raw_df[\"date\"] <= pd.to_datetime(TTS_END_DATE) + FORECAST_HORIZON\n", "]\n", "\n", "holiday_raw_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Prices and promotions\n", "\n", "The price and promotion data is likewise available in a flat file format already:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "prices_raw_df = pd.read_parquet(\n", " \"s3://measuring-forecast-benefits-assets/dataset/v1/prices_promos.parquet\",\n", ")\n", "# (No need to parse datetimes from parquet)\n", "prices_raw_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "...But unlike the holidays reference data, it **doesn't extend beyond the TTS end date** at all.\n", "\n", "This is a problem because, as discussed [in the Amazon Forecast Developer Guide](https://docs.aws.amazon.com/forecast/latest/dg/related-time-series-datasets.html#related-time-series-historical-futurelooking), \"forward-looking\" inputs (where we know or hypothesize the values during the forecast period) are much more valuable to model accuracy than \"historical-only\" data (where the model doesn't know what to expect during the forecast period).\n", "\n", "Ideally, you would already have some plan for pricing actions in the near future. 
You could, if needed, build models with multiple different pricing scenarios and explore how forecasted demand changes (the *price elasticity of demand*).\n", "\n", "In this example, we'll just project forward the current price throughout the forecast horizon.\n", "\n", "First, create a dataframe of empty `NaN` placeholders for all the future dates and items:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "prices_dimensions = [\n", " c for c in prices_raw_df.columns if c not in (\"date\", \"promo\", \"unit_price\")\n", "]\n", "\n", "prices_future = pd.merge(\n", " # Range of dates in the forecast period:\n", " pd.date_range(\n", " TTS_END_DATE,\n", " TTS_END_DATE + FORECAST_HORIZON,\n", " freq=FORECAST_FREQ,\n", " inclusive=\"right\",\n", " name=\"date\",\n", " ).to_series(),\n", " # Unique combinations of country+product:\n", " prices_raw_df[prices_dimensions].drop_duplicates(),\n", " # Cross join (all combinations):\n", " how=\"cross\",\n", ")\n", "prices_future[\"promo\"] = float(\"nan\")\n", "prices_future[\"unit_price\"] = float(\"nan\")\n", "prices_future" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, join the future placeholders with the historical prices and index and sort the data by the breakdown dimensions *before* date:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "tmp = pd.concat([prices_raw_df, prices_future]).set_index(prices_dimensions + [\"date\"]).sort_index()\n", "tmp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We're now ready to forward-fill, and reset the index back to regular columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "prices_projected_df = tmp.groupby(level=prices_dimensions).ffill().reset_index()\n", "\n", "# Delete temp variables to save space and make sure we don't accidentally use the wrong ones:\n", "del prices_future\n", "del prices_raw_df\n", "del tmp\n", "\n", "prices_projected_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you like, you can inspect this DataFrame to validate the continuity (i.e. `Brazil` `Gloves` will keep using the same `promo` and `unit_price` for records after `TTS_END_DATE` - and likewise for each other combination of country and product type)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Pulling the RTS together\n", "\n", "With our end dates aligned and set up to fully cover the forecast period, we're ready to combine the two datasets and normalize the dimensions to match the Target Time-Series (i.e. map `country` to `location`s and `product` to `item_id`s).\n", "\n", "First, we'll join them together:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "rts_df = pd.merge(holiday_raw_df, prices_projected_df, on=[\"date\", \"country\"], how=\"outer\")\n", "rts_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This dataset is almost ready, but we need to expand from `country` to cover all separate `location` IDs and from `product` to cover all separate `item_id`s in the sales dataset. 
We can refer to the unique location/item_id list saved from earlier to do this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Construct the reference table:\n", "item_location_combos[\"country\"] = item_location_combos[\"location\"].str.split(\"_\").str[0]\n", "item_location_combos[\"product\"] = item_location_combos[\"item_id\"].str.split(\"_\").str[0]\n", "item_location_combos" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Join to map country/product to locations/item_ids:\n", "rts_df = (\n", " pd.merge(\n", " item_location_combos,\n", " rts_df,\n", " on=[\"country\", \"product\"],\n", " how=\"outer\",\n", " )\n", " .drop(columns=[\"country\", \"product\"])\n", " .rename(columns={\"date\": \"timestamp\"})\n", ")\n", "\n", "# Standardize timestamp representation, as with TTS:\n", "rts_df[\"timestamp\"] = rts_df[\"timestamp\"].dt.strftime(\"%Y-%m-%d\")\n", "\n", "rts_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As previously, once dataset preparation is complete we'll upload the data to Amazon S3:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "rts_s3_uri = f\"s3://{BUCKET_NAME}/{BUCKET_PREFIX}training-data/rts/rts.parquet\"\n", "rts_df.to_parquet(rts_s3_uri, index=False)\n", "print(f\"Uploaded Related Time-Series to: {rts_s3_uri}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "...And extract the schema:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "rts_schema = util.amzforecast.autodiscover_dataframe_schema(rts_df)\n", "print(json.dumps(rts_schema, indent=2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also clear the tables to save memory in the notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "del holiday_raw_df\n", "del prices_projected_df\n", "del rts_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define and import datasets in Amazon Forecast\n", "\n", "Once the datasets are available on Amazon S3, with known schemas conforming to Amazon Forecast's requirements, we're ready to define the Dataset Group in Forecast and import the datasets themselves.\n", "\n", "### Define the Dataset Group\n", "\n", "First, run the below cells to configure and set up the **schema and structure** of your datasets:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Configurations:\n", "DATASET_GROUP_NAME = \"benefits_demo\"\n", "DOMAIN = \"RETAIL\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> ℹ️ **Tip:** The `util.amzforecast.create_or_reuse_...` functions we use throughout this notebook are just thin wrappers over the corresponding [Forecast boto3 client create_... methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/forecast.html), to transparently re-use resources (instead of raising errors) if they already exist.\n", ">\n", "> This helps make it quick to re-run the notebook if the kernel restarts, but of course may not always be the behaviour you want (for example if changing settings). You can check out the implementation in [util/amzforecast.py](util/amzforecast.py), and swap out calls for e.g. `forecast.create_dataset_group(...)` if you like." 
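, "\n", ">\n", "> For reference, the core create-or-reuse pattern is roughly the sketch below (a simplified illustration - for example, pagination of the list call is ignored here; see [util/amzforecast.py](util/amzforecast.py) for the actual implementation):\n", ">\n", "> ```python\n", "> def create_or_reuse_dataset_group_sketch(**kwargs):\n", ">     try:\n", ">         return forecast.create_dataset_group(**kwargs)[\"DatasetGroupArn\"]\n", ">     except forecast.exceptions.ResourceAlreadyExistsException:\n", ">         # Already exists: look up the existing resource by name instead of failing\n", ">         return next(\n", ">             dsg[\"DatasetGroupArn\"]\n", ">             for dsg in forecast.list_dataset_groups()[\"DatasetGroups\"]\n", ">             if dsg[\"DatasetGroupName\"] == kwargs[\"DatasetGroupName\"]\n", ">         )\n", "> ```\n"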
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dsg_arn = util.amzforecast.create_or_reuse_dataset_group(\n", " Domain=DOMAIN,\n", " DatasetGroupName=DATASET_GROUP_NAME,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "TTS_DATASET_NAME = f\"{DATASET_GROUP_NAME}_tts\"\n", "tts_arn = util.amzforecast.create_or_reuse_dataset(\n", " DatasetName=TTS_DATASET_NAME,\n", " Domain=DOMAIN,\n", " DatasetType=\"TARGET_TIME_SERIES\",\n", " DataFrequency=FORECAST_FREQ,\n", " Schema={\"Attributes\": tts_schema},\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "METADATA_DATASET_NAME = f\"{DATASET_GROUP_NAME}_meta\"\n", "metadata_arn = util.amzforecast.create_or_reuse_dataset(\n", " DatasetName=METADATA_DATASET_NAME,\n", " Domain=DOMAIN,\n", " DatasetType=\"ITEM_METADATA\",\n", " Schema={\"Attributes\": metadata_schema},\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "RTS_DATASET_NAME = f\"{DATASET_GROUP_NAME}_rts\"\n", "rts_arn = util.amzforecast.create_or_reuse_dataset(\n", " DatasetName=RTS_DATASET_NAME,\n", " Domain=DOMAIN,\n", " DatasetType=\"RELATED_TIME_SERIES\",\n", " DataFrequency=FORECAST_FREQ,\n", " Schema={\"Attributes\": rts_schema},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This final cell links your Datasets to the Dataset Group (note that you can also change which datasets are included in a DSG later, but only datasets linked to a DSG will appear in the AWS Console for Amazon Forecast):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "forecast.update_dataset_group(\n", " DatasetGroupArn=dsg_arn,\n", " DatasetArns=[tts_arn, rts_arn, metadata_arn],\n", ")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### Import data\n", "\n", "With the schemas defined, we can **import the actual data** from Amazon S3 into the Amazon Forecast service, by creating a batch import job for each dataset.\n", "\n", "The import process involves validating your data, so is asynchronous and can take some time to complete. 
In the cells below, we kick off all 3 jobs and then wait for them to complete in parallel:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tts_import_arn = util.amzforecast.create_dataset_import_job_by_hash(\n", " DatasetArn=tts_arn,\n", " DataSource={\n", " \"S3Config\": {\n", " \"Path\": tts_s3_uri,\n", " \"RoleArn\": forecast_role_arn,\n", " },\n", " },\n", " Format=\"PARQUET\",\n", " TimestampFormat=\"yyyy-MM-dd\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "metadata_import_arn = util.amzforecast.create_dataset_import_job_by_hash(\n", " DatasetArn=metadata_arn,\n", " DataSource={\n", " \"S3Config\": {\n", " \"Path\": metadata_s3_uri,\n", " \"RoleArn\": forecast_role_arn,\n", " },\n", " },\n", " Format=\"CSV\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rts_import_arn = util.amzforecast.create_dataset_import_job_by_hash(\n", " DatasetArn=rts_arn,\n", " DataSource={\n", " \"S3Config\": {\n", " \"Path\": rts_s3_uri,\n", " \"RoleArn\": forecast_role_arn,\n", " },\n", " },\n", " Format=\"PARQUET\",\n", " TimestampFormat=\"yyyy-MM-dd\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "pending_jobs = [tts_import_arn, metadata_import_arn, rts_import_arn]\n", "\n", "\n", "def are_imports_finished(job_descs):\n", " global pending_jobs\n", " for desc in job_descs:\n", " status = desc[\"Status\"]\n", " if status == \"ACTIVE\":\n", " pending_jobs = [\n", " job_arn for job_arn in pending_jobs if job_arn != desc[\"DatasetImportJobArn\"]\n", " ]\n", " if len(pending_jobs) == 0:\n", " return True\n", " elif \"FAILED\" in status:\n", " raise ValueError(f\"Data import failed!\\n{desc}\")\n", "\n", "\n", "def max_all_etas(job_descs):\n", " eta_mins_by_job = list(\n", " filter(\n", " lambda t: t is not None,\n", " (d.get(\"EstimatedTimeRemainingInMinutes\") for d in job_descs),\n", " )\n", " )\n", " return f\"{max(eta_mins_by_job)} mins\" if len(eta_mins_by_job) > 0 else None\n", "\n", "\n", "util.progress.polling_spinner(\n", " # Call DescribeDatasetImportJob on all jobs:\n", " fn_poll_result=lambda: [\n", " forecast.describe_dataset_import_job(DatasetImportJobArn=job_arn)\n", " for job_arn in pending_jobs\n", " ],\n", " # Check if *all* jobs finished and cut finished jobs from list:\n", " fn_is_finished=are_imports_finished,\n", " # Stringify status as number of jobs remaining:\n", " fn_stringify_result=lambda descs: f\"{len(descs)} jobs pending\",\n", " # Get max of ETA from all outstanding jobs:\n", " fn_eta=max_all_etas,\n", " poll_secs=30,\n", " timeout_secs=60 * 60, # Max 1 hour\n", ")\n", "print(\"Data imported\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> ⏰ These dataset imports can take several minutes to complete: We saw a wait of around 10 minutes with the sample dataset.\n", ">\n", "> This period includes behind-the-scenes overhead for the service to spin up managed infrastructure to analyze your data, so don't worry: It scales much better than linearly as data size increases." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Train predictor model\n", "\n", "In Amazon Forecast, a trained model is called a \"Predictor\". 
After setting up the base datasets and importing data, you're ready to train (one or more) forecast models.\n", "\n", "In this section we'll kick off training of a new AutoPredictor, and then wait for that process to complete (which may take multiple hours). You can find more information about the parameters and process in the [Training a predictor section](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-predictor.html) of the Amazon Forecast Developer Guide." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "TRAIN_FORECAST_TYPES = [\"mean\", \"0.10\", \"0.50\", \"0.90\"]\n", "METRIC = \"RMSE\"\n", "PREDICTOR_NAME = f\"{DATASET_GROUP_NAME}_auto_1\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor_arn = util.amzforecast.create_or_reuse_auto_predictor(\n", " PredictorName=PREDICTOR_NAME,\n", " DataConfig={\n", " \"DatasetGroupArn\": dsg_arn,\n", " \"AttributeConfigs\": [\n", " # Multi-record aggregation and missing value filling logic:\n", " {\n", " \"AttributeName\": \"demand\",\n", " # (Note only TTS accept aggregation parameter)\n", " \"Transformations\": {\"aggregation\": \"sum\", \"middlefill\": \"zero\", \"backfill\": \"zero\"},\n", " },\n", " {\n", " \"AttributeName\": \"weekend_hol_flag\",\n", " \"Transformations\": {\n", " \"middlefill\": \"zero\",\n", " \"backfill\": \"zero\",\n", " \"futurefill\": \"zero\",\n", " },\n", " },\n", " {\n", " \"AttributeName\": \"promo\",\n", " \"Transformations\": {\n", " \"middlefill\": \"value\",\n", " \"middlefill_value\": \"1\",\n", " \"backfill\": \"value\",\n", " \"backfill_value\": \"1\",\n", " \"futurefill\": \"value\",\n", " \"futurefill_value\": \"1\",\n", " },\n", " },\n", " {\n", " \"AttributeName\": \"unit_price\",\n", " \"Transformations\": {\n", " \"middlefill\": \"mean\",\n", " \"backfill\": \"mean\",\n", " \"futurefill\": \"mean\",\n", " },\n", " },\n", " ],\n", " },\n", " ExplainPredictor=False, # Enable for explainability report, but increased training time\n", " ForecastDimensions=FORECAST_DIMENSIONS,\n", " ForecastFrequency=FORECAST_FREQ,\n", " ForecastHorizon=FORECAST_HORIZON.n,\n", " ForecastTypes=TRAIN_FORECAST_TYPES,\n", " OptimizationMetric=\"RMSE\", # Target metric for optimization between model candidates\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "util.progress.polling_spinner(\n", " fn_poll_result=lambda: forecast.describe_auto_predictor(PredictorArn=predictor_arn),\n", " fn_is_finished=util.amzforecast.is_forecast_resource_ready,\n", " fn_stringify_result=lambda desc: desc[\"Status\"],\n", " fn_eta=lambda desc: (\n", " f\"{desc['EstimatedTimeRemainingInMinutes']} mins\"\n", " if \"EstimatedTimeRemainingInMinutes\" in desc else None\n", " ),\n", " poll_secs=60,\n", " timeout_secs=5 * 60 * 60, # Max 5 hours\n", ")\n", "print(\"Predictor model trained\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Export predictor backtest results\n", "\n", "Because [Amazon Forecast's pricing](https://aws.amazon.com/forecast/pricing/) charges by **generated forecasts**, here's an important cost optimization tip for PoCs and experimentation: **[Use backtest exports](https://docs.aws.amazon.com/forecast/latest/dg/metrics.html)** where appropriate, rather than generating forward-looking forecasts, to evaluate your candidate models.\n", "\n", "When training your predictors, Amazon Forecast produces validation metrics by 
holding out the final portion of the data to calculate expected performance. By [creating a predictor backtest export job](https://docs.aws.amazon.com/forecast/latest/dg/API_CreatePredictorBacktestExportJob.html), you can export not only detailed-level accuracy metrics per item over this final period, but also the raw forecasts themselves, mapped to actuals, to support any custom analyses or metrics you might wish to calculate.\n", "\n", "So if you'd like to compare your Forecast models to historical actuals (\"offline\"), you can save costs by including the full period in your Target Time-Series and running a backtest export job - versus excluding this final period and performing the reconciliation and analysis yourself.\n", "\n", "That's exactly what we'll do for the purpose of this sample:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "backtest_s3_uri = \"s3://{}/{}forecast-backtests/{}\".format(\n", " BUCKET_NAME,\n", " BUCKET_PREFIX,\n", " PREDICTOR_NAME,\n", ")\n", "print(f\"Exporting backtest results to: {backtest_s3_uri}\")\n", "\n", "backtest_export_arn = util.amzforecast.create_or_reuse_predictor_backtest_export_job(\n", " PredictorBacktestExportJobName=PREDICTOR_NAME,\n", " PredictorArn=predictor_arn,\n", " Destination={\n", " \"S3Config\": {\n", " \"Path\": backtest_s3_uri,\n", " \"RoleArn\": forecast_role_arn,\n", " },\n", " },\n", " Format=\"PARQUET\",\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "util.progress.polling_spinner(\n", " fn_poll_result=lambda: forecast.describe_predictor_backtest_export_job(\n", " PredictorBacktestExportJobArn=backtest_export_arn,\n", " ),\n", " fn_is_finished=util.amzforecast.is_forecast_resource_ready,\n", " fn_stringify_result=lambda desc: desc[\"Status\"],\n", " fn_eta=lambda desc: (\n", " f\"{desc['EstimatedTimeRemainingInMinutes']} mins\"\n", " if \"EstimatedTimeRemainingInMinutes\" in desc else None\n", " ),\n", " poll_secs=30,\n", " timeout_secs=30 * 60, # Max 30 mins\n", ")\n", "print(\"Backtest export done\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## (Optional) Create and export a forecast\n", "\n", "For **online** evaluation (i.e. actually testing Forecast in production), you'll need to create an actual forward-looking forecast from your model.\n", "\n", "This is a separate process from model training because it is technically possible to repeatedly update your datasets and create new forecasts from the same predictor model. However, in practice many customers find the accuracy benefits of re-training each time outweigh the extra resource costs, since training is often much cheaper than forecasting/inference: So it's usually best to optimize for accuracy first (re-train a predictor every month/cycle) and explore the impacts of relaxing this later.\n", "\n", "> ⚠️ **COST WARNING**\n", ">\n", "> This section is included for completeness but forecast generation is *not necessary* for the final (forecast comparison) notebook, which only uses the backtest result. 
The sample dataset includes a high number of SKUs/locations, and generating a full forecast (31 data points for each SKU/location, with a single quantile) **could cost approx $200-250 or more** at standard pricing, ignoring any free tier allowances.\n", ">\n", "> Refer to the [Amazon Forecast pricing page](https://aws.amazon.com/forecast/pricing/) and check you understand the profile of your dataset and horizon before generating forward-looking forecasts. We include logic below to estimate the number of forecast time-series and data points for your configuration.\n", "\n", "If you're sure you want to generate forecasts, set `generate_forecast = True` below to enable this section.\n", "\n", "If you want to run a limited-scope test, you could also [limit the forecast to particular time-series](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-forecast.html#forecast-time-series) to control costs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "INF_FORECAST_TYPES = [\"mean\"] # (Could also add e.g. \"0.10\", \"0.50\", \"0.90\")\n", "generate_forecast = False # Set to True if you're sure you want to generate (and pay for) forward-looking forecasts\n", "\n", "print(f\"Configured for approx {N_DIMENSION_COMBOS * len(INF_FORECAST_TYPES):,d} time-series\")\n", "print(f\"({N_DIMENSION_COMBOS * len(INF_FORECAST_TYPES) * FORECAST_HORIZON.n:,d} forecast data points)\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "if generate_forecast:\n", " forecast_arn = util.amzforecast.create_or_reuse_forecast(\n", " ForecastName=PREDICTOR_NAME,\n", " PredictorArn=predictor_arn,\n", " ForecastTypes=INF_FORECAST_TYPES,\n", " )\n", "else:\n", " print(\"Forecasting skipped\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "if generate_forecast:\n", " util.progress.polling_spinner(\n", " fn_poll_result=lambda: forecast.describe_forecast(ForecastArn=forecast_arn),\n", " fn_is_finished=util.amzforecast.is_forecast_resource_ready,\n", " fn_stringify_result=lambda desc: desc[\"Status\"],\n", " fn_eta=lambda desc: (\n", " f\"{desc['EstimatedTimeRemainingInMinutes']} mins\"\n", " if \"EstimatedTimeRemainingInMinutes\" in desc else None\n", " ),\n", " poll_secs=60,\n", " timeout_secs=2 * 60 * 60, # Max 2 hours\n", " )\n", " print(\"Forecast ready\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Created forecasts are [queryable](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-forecast.html#query-forecast) via a real-time [QueryForecast API](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-forecast.html#query-forecast) (and so there are also [quotas](https://docs.aws.amazon.com/forecast/latest/dg/limits.html) on the number of forecasts you can keep active concurrently).\n", "\n", "For our analysis use case though (and for many real-world use-cases where businesses want to use some dashboarding tool to slice and explore the results), we're more interested in exporting the forecasts as a bulk dataset for all items.\n", "\n", "The cells below will initiate this export to Amazon S3, and wait for it to complete:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "if generate_forecast:\n", " export_s3_uri = \"s3://{}/{}forecast-exports/{}\".format(\n", " BUCKET_NAME,\n", " BUCKET_PREFIX,\n", " PREDICTOR_NAME,\n", " )\n", " print(f\"Exporting forecast to: {export_s3_uri}\")\n", "\n", " create_export_resp = forecast.create_forecast_export_job(\n", " 
ForecastExportJobName=PREDICTOR_NAME,\n", " ForecastArn=forecast_arn,\n", " Destination={\n", " \"S3Config\": {\n", " \"Path\": export_s3_uri,\n", " \"RoleArn\": forecast_role_arn,\n", " }\n", " },\n", " Format=\"PARQUET\",\n", " )\n", "\n", " export_arn = create_export_resp[\"ForecastExportJobArn\"]\n", "else:\n", " print(\"Forecasting skipped\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "if generate_forecast:\n", " util.progress.polling_spinner(\n", " fn_poll_result=lambda: forecast.describe_forecast_export_job(\n", " ForecastExportJobArn=export_arn,\n", " ),\n", " fn_is_finished=util.amzforecast.is_forecast_resource_ready,\n", " fn_stringify_result=lambda desc: desc[\"Status\"],\n", " fn_eta=lambda desc: (\n", " f\"{desc['EstimatedTimeRemainingInMinutes']} mins\"\n", " if \"EstimatedTimeRemainingInMinutes\" in desc else None\n", " ),\n", " poll_secs=30,\n", " timeout_secs=30 * 60, # Max 30 minutes\n", " )\n", " print(\"\\nForecast export ready:\")\n", " print(export_s3_uri)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next steps\n", "\n", "Congratulations! If you ran through this notebook successfully, you managed to import data to Amazon Forecast, build a model, and export backtest results (and optionally, batch forecast results too) to Amazon S3. You should also be able to view your Dataset Group, Predictor, and (if you created one) Forecast through the [AWS Console for Amazon Forecast](https://console.aws.amazon.com/forecast/home?#datasetGroups).\n", "\n", "Now we're ready to dive into analyzing the accuracy of the forecast as compared to the moving average baseline, and how that translates to actual business results. Head on over to [2. Measuring Forecast Benefits.ipynb](2.%20Measuring%20Forecast%20Benefits.ipynb) to follow along!" 
] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 21, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute 
optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 28, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 29, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 37, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": 
"Accelerated computing", "gpuNum": 1, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science 3.0)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "vscode": { "interpreter": { "hash": "ddbda436de216a15b983650f9bacf1dadb709a40e27f1fee3bde2bf4145658fa" } } }, "nbformat": 4, "nbformat_minor": 4 }