{ "cells": [ { "cell_type": "markdown", "id": "138d730d-ed9c-4af9-a3b4-adde92bdd1fb", "metadata": {}, "source": [ "## Module 1: Prepare Data Using SageMaker Data Wrangler\n", "\n", "In this one-hour module, you will learn how to use SageMaker Data Wrangler to explore, clean, transform, and visualize your data, including techniques such as data aggregation, normalization, and feature engineering. Optionally, you will also learn how to train an auto-ml model directly in Data wrangler and deploy the model to online endpoints or run batch predictions offline.\n", "\n", "By the end of this module, you will be able to:\n", "\n", "* Understand the basic concepts of data preparation for machine learning\n", "* Use SageMaker Data Wrangler to explore, clean, transform, and visualize data\n", "* Create automate the data preparation workflow jobs to process your data at scale.\n", "* [**Optionally**] create data ML models with no code." ] }, { "cell_type": "markdown", "id": "1f818f9e-6093-4b72-9022-2aa041814667", "metadata": {}, "source": [ "## Navigate to Data Wrangler\n", "\n", "Make sure you have completed all the steps in the *0_setup.ipynb notebook*. Now you can navigate to *SageMaker Data Wrangler: Home → Data → Data Wrangler → Import Data*.\\\n", "\n", " a. This also creates a .flow file in your current directory.\\\n", " b. [**optional**] Rename to flow file to *5gcell_workshop.flow*.\n", "\n", "![Data Wrangler Flow](statics/module_01_dw02.png)\n", "![Data Wrangler Flow](statics/module_01_dw03.png)" ] }, { "cell_type": "markdown", "id": "37573e70-faa3-4170-8828-56a4e2cade28", "metadata": { "jp-MarkdownHeadingCollapsed": true, "tags": [] }, "source": [ "## Import Dataset from S3 bucket\n", "\n", "1. We start by importing the dataset we previously uploaded to our S3 bucket. Inside Data Wrangler, select **Amazon S3** from Data sources. The Import a dataset from S3 page will be displayed. \n", " \n", "![Import from S3](statics/dw-data-source.png)\n", " \n", "2. Navigate to the bucket and folder that contains the *5gcell.csv* file.\n", "3. Select the *5gcell.csv* file. You'll see a preview of the data.\n", "4. **Sampling Options**: You have the option to import your entire dataset into Data Wrangler or to sample a portion of it.\n", "\n", "> **Note**\\\n", "The larger the dataset, the more accurate your analyses and visualizations will be and the longer they may take to render. By importing only a sample, rendering time may improve, but at the possible expense of losing influential data points. Random and stratified sampling strategies may help mitigate issues like these, but this depends on the distribution of the data and your unique use case.\n", ">\n", "> The following sampling settings only apply during interactive mode within Data Wrangler. When exporting (for example, to training or S3), these settings are ignored. If you wish to return a smaller subset of the data when exporting, use the Split Data transform. \n", "> When importing from Amazon S3, the following sampling options are available:\\\n", "> **None** – Import the entire dataset.\\\n", "> **First K** – Sample the first K rows of the dataset, where K is an integer that you specify.\\\n", "> **Randomized** – Takes a random sample of a size that you specify.\\\n", "> **Stratified** – Takes a stratified random sample. A stratified sample preserves the ratio of values in a column.\n", "\n", "5. Let's accept all the defaults (First K and 50,000) and click the Import button. 
 "\n",
 "### Navigating the Data Wrangler Workspace\n",
 "\n",
 "After importing the data, you will see a summary page with 3 tabs: Data, Analysis, Training.\n",
 "\n",
 "- The Data tab summarizes the steps applied to the data source at this point of the data transformation. Expand the individual steps to modify them.\n",
 "- The Analysis tab shows the visualizations and reports you have generated.\n",
 "- You use the Training tab to train an AutoML model (we will cover this in more detail later).\n",
 "- Use the **< Data flow** button on the top left to get to the main data flow workspace.\n",
 " \n",
 "![DW Workspace](statics/dw-workspace.png)\n" ] }, { "cell_type": "markdown", "id": "c6910700-7b84-42ec-8080-33c3de327d48", "metadata": {}, "source": [
 "## Custom Transform\n",
 "---\n",
 "\n",
 "To determine good 5G accessibility, we will use *5g_sgnb_abnormal_release_rate_num*, which represents the likelihood of connectivity drops. Any record with *5g_sgnb_abnormal_release_rate_num* > 0 is considered an anomaly. We will then train a classification model that can predict the anomaly based on input features. This use case is part of the 5G performance observability initiative, aimed at predicting any potential loss of connectivity to the 5G radio network in the next hour, helping to ensure a seamless and uninterrupted user experience.\n",
 "\n",
 "We first use a **Custom transform** in Data Wrangler to label the anomaly. For *5g_sgnb_abnormal_release_rate_num* > 0, we set the label to 1; otherwise, we set it to 0.\n",
 "\n",
 "Data Wrangler includes built-in transforms, which you can use to transform columns without any code. You can also add custom transformations using PySpark, Python (User-Defined Function), Pandas, and PySpark SQL. Some transforms operate in place, while others create a new output column in your dataset.\n",
 "You can import the popular libraries with an import statement in the custom transform code block, such as the following:\n",
 "\n",
 "* Numpy version 1.19.0\n",
 "* Scikit-learn version 0.23.2\n",
 "* Scipy version 1.5.4\n",
 "* Pandas version 1.0.3\n",
 "* Pyspark version 3.0.0\n",
 "\n",
 "To add a custom transform, click on the plus sign next to the **Data types** step.\n",
 "\n",
 "![DW Workspace](statics/add-custom-xform.png)\n",
 "\n",
 "The steps for the custom transformation are:\n",
 "\n",
 "1. Click **Add step**.\n",
 "2. Select **Custom transform**.\n",
 "3. For **Name**, enter *Impute anomaly*.\n",
 "4. Select **Python (Pandas)**.\n",
 "5. Place the following code snippet into the code box.\n",
 "\n",
 "```\n",
 "# Table is available as variable `df`\n",
 "# Label records with any abnormal release as anomalies (1), the rest as normal (0)\n",
 "df['anomaly'] = 0\n",
 "df.loc[df['5g_sgnb_abnormal_release_rate_num'] > 0, 'anomaly'] = 1\n",
 "# Drop the source column so the label cannot leak back into the features\n",
 "df = df.drop(['5g_sgnb_abnormal_release_rate_num'], axis=1)\n",
 "```\n",
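 "\n",
 "Optionally, as a quick sanity check, you can preview the label balance afterwards, for example in a notebook cell against a local sample loaded as `df` (a hypothetical check, not part of the flow):\n",
 "\n",
 "```\n",
 "# Confirm the new label exists and see how rare anomalies are\n",
 "print(df['anomaly'].value_counts(normalize=True))\n",
 "```\n",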
\n", "\n", "```\n", "# Table is available as variable `df`\n", "df['anomaly'] = 0\n", "df.loc[df['5g_sgnb_abnormal_release_rate_num']>0, 'anomaly']= 1\n", "df = df.drop(['5g_sgnb_abnormal_release_rate_num'], axis=1)\n", "```\n", " \n", "![DW Workspace](statics/custom-xform.png)" ] }, { "cell_type": "markdown", "id": "e0c3c4bd-902a-46da-9c22-2cdc7ed3cf73", "metadata": {}, "source": [ "## Get Insights On Data and Data Quality\n", "\n", "1. Use the **< Data flow** button on the top left to get to the main data flow workspace.\n", "2. To get some insights on the data we've just imported, click the “+” icon next to the **Data types** node in the Data Flow diagram, select **Get data insights**. \n", " \n", "![Data Insights](statics/data-insights.png)\n", "\n", "This is a shortcut that takes us to the analysis page where we are provided with a list of various analysis types to choose and apply.\n", " \n", "![Data Insights](statics/data-insights-2.png)\n", " \n", "3. By default, the selected **Analysis Type** is **Data Quality and Insights Report**.\n", "> Data Quality and Insights Report is a quick way to get a better understanding of your dataset. It generates a comprehensive report of your data across the following topics: Summary, Duplicate Rows, Anomalous Samples, Target Column, Quick Model, Feature Summary, Feature Details, Samples, and Definitions. You can export this report to share or review at a different time. Let’s look at some of the analysis in more detail. \n", "4. For **Target column**, select ***anomaly***.\n", "> To determine good 5G accessibility, we will use abnormal_release_rate, which represents the likelihood of connectivity drops. Any abnormal_release_rate > 0 is considered high probability for anomaly. Then we will train a classification model that can predict the likelihood of connectivity drops based on input features like network utilization, contention rates, health index, and throughput parameters. This use case is part of the 5G performance observabilitinitiative, aimed at predicting any potential loss of connectivity to the 5G radio network in the next hour, helping to ensure a seamless and uninterrupted user experience.\n", "\n", "5. For **Problem Type** select **Classification**.\n", "6. Click the **Create** button to generate the report.\n", "\n", "**Summary** provides a brief summary of the data that includes general information such as missing values, invalid values, feature types, outlier counts, and more. \n", " \n", "![Data Insights Summary](statics/dw-insights-summary.png)\n", "\n", "**High Priority Warnings** lists warnings in the dataset if there are any and the steps we can take from within Data Wrangler to address them. \n", " \n", "![Data Insights Summary](statics/dw-insights-hp-warn.png)\n", "\n", "**Duplicate Rows** helps you identify duplicate rows.\n", " \n", "![Data Insights Duplicate Rows](statics/dw-insights-duplicate-rows.png)\n", "\n", "**Anomalous Samples** are the most anomalous samples (with negative anomaly scores) identified by the Isolation forest algorithm. This helps you quickly spot outliers and anonymous data.\n", " \n", "![Data Insights](statics/dw-insights-anom-samp.png)\n", "\n", "**Target Column** analysis shows stats on target column and ranks the features on the order of their predictive power. It also detects potential issues and provides recommendation for remediation. **In this case, it has noticed your dataset is highly imbalance because anomalies are rare events. 
 "\n",
 "**Quick Model** provides an estimate of the expected prediction quality of a model that you train on your data. The Quick Model is a great way to get some prediction quality insight metrics on your dataset without going through the complete model building process.\n",
 " \n",
 "![Data Insights Quick Model](statics/dw-insights-quick-model.png)\n",
 " \n",
 "![Data Insights Quick Model](statics/dw-insights-quick-model-1.png)\n",
 " \n",
 "![Data Insights Quick Model](statics/dw-insights-quick-model-2.png)" ] }, { "cell_type": "markdown", "id": "b6082477-dbb7-42d9-a80c-3d966ca2e273", "metadata": {}, "source": [
 "## Feature Summary\n",
 "\n",
 "Based on the target column, Data Wrangler orders the features in the Feature Summary by their prediction power. Scores are normalized to the range [0, 1]. Higher prediction scores indicate columns that are more useful for predicting the target on their own. Lower scores point to columns that aren’t predictive of the target column.\n",
 " \n",
 "Notice that some features are not contributing to the final prediction, and we can decide to drop these columns based on our domain knowledge of this use case.\n",
 " \n",
 "![Feature Summary](statics/feat-summary-1.png)\n",
 " \n",
 "![Feature Summary](statics/feat-summary-2.png)\n",
 "\n",
 "## Other Feature-Specific Details and Definitions\n",
 "\n",
 "### Histogram\n",
 "\n",
 "We want to understand the distribution of the number of 5G users per hour:\n",
 " \n",
 "![Histogram](statics/feat-summary-histogram.png)\n",
 "\n",
 "1. Use the **< Data flow** button on the top left to get to the main data flow workspace.\n",
 "2. Click the **“+”** icon next to the **Impute Anomaly** step in your data flow and select **Add analysis**.\n",
 "3. Select **Histogram** as the **Analysis type**.\n",
 "4. Set the analysis name to *hist_number_of_5g_users*.\n",
 "5. Select the *number_of_5g_users* variable as the **X-axis**.\n",
 "6. Click the **Preview** button to visualize the results.\n",
 "7. Click **Save** to save this analysis.\n",
 "\n",
 "### Scatter Plot\n",
 "\n",
 "Let’s create a scatter plot to visualize the relationship between the number of users at cell towers and time (per hour). \n",
 " \n",
 "![Scatter Plot](statics/feat-summary-scatter-plot.png)\n",
 "\n",
 "1. Choose **Scatter plot** as the **Analysis type**.\n",
 "2. For **Analysis Name**, enter *scat_num_5g_users*.\n",
 "3. For X axis column, choose *hour_extracted*.\n",
 "4. For Y axis column, choose *number_of_5g_users*.\n",
 "5. Clicking on Preview yields the following visuals. Save the visualization by clicking on the Save button.\n",
 "\n",
 "### Feature Correlation\n",
 "\n",
 "Linear feature correlation is based on Pearson's correlation. Numeric-to-categorical correlation is calculated by encoding the categorical features as the floating point numbers that best predict the numeric feature before calculating Pearson's correlation. Linear categorical-to-categorical correlation is not supported.\n",
 " \n",
 "To create the analysis, choose **Feature Correlation** for the **Analysis** type and choose **linear** for the **Correlation type**.\n",
 " \n",
 "Based on the correlation values, we can see which feature pairs (as listed below) are strongly correlated with one another. Some of these features also showed up in the target analysis we did previously.\n",
 " \n",
 "![Feature Correlation](statics/feat-correlation.png)\n",
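 "\n",
 "If you want to verify this locally, a rough pandas equivalent of the linear (Pearson) analysis is the sketch below, assuming `df` holds the transformed dataset:\n",
 "\n",
 "```\n",
 "# Pairwise Pearson correlation over the numeric columns\n",
 "corr = df.select_dtypes('number').corr(method='pearson')\n",
 "\n",
 "# Pairs with |r| close to 1 are strongly linearly correlated\n",
 "print(corr['number_of_5g_users'].sort_values(ascending=False))\n",
 "```\n",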
 "\n",
 "Please try the non-linear feature correlation on your own. Numeric-to-categorical correlation is calculated by encoding the categorical features as the floating point numbers that best predict the numeric feature before calculating Spearman's rank correlation. Categorical-to-categorical correlation is based on the normalized Cramer's V test.\n",
 "\n",
 "### Custom Visualization\n",
 "\n",
 "You can add an analysis to your Data Wrangler flow to create a custom visualization. Your dataset, with all applied transformations, is available as a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).\n",
 "\n",
 "Data Wrangler uses the df variable to store the dataframe, which is accessible by the user. You must also provide the output variable, chart, to store an [Altair](https://altair-viz.github.io/) output chart.\n",
 "\n",
 "Take advantage of the example code snippets if you are not familiar with the altair library. In this case, we are going to create a **scatter plot of Extracted Hour vs Throughput**.\n",
 "\n",
 "1. For **Analysis type**, select *Custom Visualization*.\n",
 "2. For **Analysis name**, enter *throughput_vs_hour*.\n",
 "3. Expand **Search example snippets** and select **Scatter plot**.\n",
 "4. Copy the code snippet to the **Your custom visualization** box.\n",
 "5. Specify the column names for the X and Y axes:\n",
 "\n",
 "x=\"hour_extracted\"\\\n",
 "y=\"5g_user_downlink_avg_throughput_num\"\n",
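 "\n",
 "With those settings, the finished snippet might look roughly like the sketch below (the exact gallery snippet may differ); Data Wrangler exposes the dataset as `df` and expects the result in `chart`:\n",
 "\n",
 "```\n",
 "import altair as alt\n",
 "\n",
 "# Sample the data to keep the chart responsive; assign the result to `chart`\n",
 "chart = alt.Chart(df.sample(n=1000)).mark_circle().encode(\n",
 "    x='hour_extracted',\n",
 "    y='5g_user_downlink_avg_throughput_num',\n",
 ")\n",
 "```\n",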
 "\n",
 "![Feature Summary](statics/feat-summary-custom-vis.png)" ] }, { "cell_type": "markdown", "id": "958975b2-0689-4907-bdd5-78e01088f0da", "metadata": {}, "source": [
 "## While Waiting for the Data to Load, Let's Explore the Feature Store Console\n",
 "\n",
 "![Feature Store](statics/feature_store.png)" ] }, { "cell_type": "markdown", "id": "df700d51-0780-46db-a1fa-8a9391bc29ff", "metadata": {}, "source": [
 "## Drop Column\n",
 "\n",
 "---\n",
 "\n",
 "Let's drop some columns with low prediction power. To drop columns, we can choose the Drop column transform and pick the column names we want to drop, as shown in the image below.\n",
 "\n",
 "1. Back in the **Data Flow**, click the **“+”** icon next to the **Impute Anomaly** step in your data flow and select **Add transform**.\n",
 " \n",
 "![Drop Column](statics/feature-engg-drop-col-1.png)\n",
 "\n",
 "2. Click **+ Add Step**.\n",
 " \n",
 "![Drop Column](statics/feature-engg-drop-col-2.png)\n",
 "\n",
 "3. Choose **Manage columns**.\n",
 "4. For **Transform**, select **Drop column**.\n",
 "5. For **Columns to drop**, select ***5g_nr_qos_flow_success_rate_num*** (you can choose multiple columns to drop).\n",
 "6. Click **Preview** to preview your data, then add the step to the workflow by clicking the **Add** button.\n",
 "\n",
 "In this step, we keep the following columns based on the above data insights and domain knowledge:\n",
 "- *5g_avg_uplink_rssi*, \n",
 "- *5g_rrc_setup_success_rate_num*, \n",
 "- *5g_cce_utilization_num*, \n",
 "- *5g_rach_contention_rate_num*, \n",
 "- *number_of_5g_users*, \n",
 "- *cellname_nrcell*, \n",
 "- *hour_extracted*, \n",
 "- *anomaly*.\n",
 "\n",
 " \n",
 "![Drop Column](statics/feature-engg-drop-col-3.png)" ] }, { "cell_type": "markdown", "id": "c78ca973-7ad1-4f91-91ac-8912a1221594", "metadata": {}, "source": [
 "## Handle Categorical Features\n",
 "---\n",
 "\n",
 "Categorical data is usually composed of a finite number of categories, where each category is represented with a string. Ordinal categories have an inherent order, and nominal categories do not. A machine size type (L, M, H) is an example of ordinal categories.\n",
 "\n",
 "Encoding categorical data is the process of creating a numerical representation for categories. There are 3 ways we can encode a categorical value in Data Wrangler:\n",
 "\n",
 "* Ordinal encode\n",
 "* One-hot encode\n",
 "* Similarity encode\n",
 "\n",
 "With Data Wrangler, we can select Ordinal encode to encode categories into an integer between 0 and the total number of categories in the Input column you select. Select One-hot encode for Transform to use one-hot encoding, or use similarity encoding when you have the following:\n",
 "\n",
 "* A large number of categorical variables\n",
 "* Noisy data\n",
 "\n",
 "Here, let’s apply ordinal encoding to the cell ID column, *cellname_nrcell*.\n",
 "\n",
 "1. Click **Add step**.\n",
 "2. Select **Encode categorical**.\n",
 "3. For **Transform**, select *Ordinal encode*.\n",
 "4. From the **Input columns**, select *cellname_nrcell*.\n",
 "5. Click **Preview** and then **Add**.\n",
 "\n",
 "Note: without defining an **Output column**, this is going to perform an in-place transform. If you need to keep the original column, please provide an **Output column**.\n",
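 "\n",
 "For reference, ordinal encoding maps each distinct category string to an integer code; a minimal pandas sketch of the idea (an illustration, not Data Wrangler's exact implementation):\n",
 "\n",
 "```\n",
 "# Replace each category with its integer code, in place\n",
 "df['cellname_nrcell'] = df['cellname_nrcell'].astype('category').cat.codes\n",
 "```\n",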
 " \n",
 "![Handle Categorical Features](statics/handle-cat-features.png)" ] }, { "cell_type": "markdown", "id": "1568e16f-0908-4069-b754-67288d9d92b2", "metadata": {}, "source": [
 "## Fill & Drop Missing\n",
 "---\n",
 "\n",
 "### Fill Missing\n",
 "\n",
 "We now fill missing values for the numeric features:\n",
 "\n",
 "1. Click **Add step**.\n",
 "2. Select **Handle missing**.\n",
 "3. For **Transform**, select **Fill missing**.\n",
 "4. For **Input columns**, select the numeric columns.\n",
 "5. Set **Fill value** to 0.\n",
 "6. Click **Preview** and then **Add**.\n",
 "\n",
 "![Fill Missing](statics/fill-drop-missing-1.png)\n",
 "\n",
 "### Drop Missing\n",
 "\n",
 "Let us drop the rows that still have missing values:\n",
 "\n",
 "1. Click **Add step**.\n",
 "2. Select **Handle missing**.\n",
 "3. For **Transform**, select **Drop missing**.\n",
 "4. Do not provide **Input columns**, so that all rows with a missing value in any column will be removed.\n",
 "5. Click **Preview** and then **Add**.\n",
 "\n",
 "![Drop Missing](statics/fill-drop-missing-2.png)\n",
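 "\n",
 "Together, the two transforms above correspond roughly to this pandas sketch (an illustration, assuming `df` holds the table):\n",
 "\n",
 "```\n",
 "numeric_cols = df.select_dtypes('number').columns\n",
 "\n",
 "# Fill missing: replace missing numeric values with 0\n",
 "df[numeric_cols] = df[numeric_cols].fillna(0)\n",
 "\n",
 "# Drop missing: remove any row that still has a missing value\n",
 "df = df.dropna()\n",
 "```\n"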
] }, { "cell_type": "markdown", "id": "f447a4c9-e6e1-4329-9453-d1043467e33f", "metadata": {}, "source": [
 "## Normalize Numeric Features\n",
 "\n",
 "---\n",
 "\n",
 "Machine learning algorithms that use gradient descent as an optimization technique, such as linear regression, logistic regression, and neural networks, require data to be scaled. To ensure that gradient descent moves smoothly towards the minima and that its steps are updated at the same rate for all the features, we scale the data before feeding it to the model. Having features on a similar scale can help gradient descent converge more quickly towards the minima.\n",
 "\n",
 "Normalization is good to use when you know that the distribution of your data does not follow a Gaussian distribution. This can be useful for algorithms that do not assume any distribution of the data, like k-nearest neighbors and neural networks. Standardization, on the other hand, can be helpful in cases where the data follows a Gaussian distribution, although this does not have to be strictly true. Also, unlike normalization, standardization does not have a bounding range, so even if you have outliers in your data, standardization will not compress the remaining values into a narrow interval.\n",
 "\n",
 "For this example use case, let's see how to normalize the numeric feature columns to a standard scale [0, 1].\n",
 "\n",
 "1. Click **Add step**.\n",
 "2. Select **Process numeric**.\n",
 "3. For **Transform**, select *Scale values*.\n",
 "4. For **Scaler**, select *Min-max scaler*.\n",
 "5. For **Input columns**, select the ***numeric columns***.\n",
 "6. Select **Scale**.\n",
 "7. Click **Preview** and then **Add**.\n",
 "\n",
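 "The min-max scaler computes x' = (x - min) / (max - min) for each column. A short scikit-learn sketch of the same idea (illustrative, not the transform's internals, assuming `df` holds the table):\n",
 "\n",
 "```\n",
 "from sklearn.preprocessing import MinMaxScaler\n",
 "\n",
 "numeric_cols = df.select_dtypes('number').columns\n",
 "\n",
 "# Rescale each numeric column to the [0, 1] range\n",
 "df[numeric_cols] = MinMaxScaler().fit_transform(df[numeric_cols])\n",
 "```\n",
 "\n",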
 "![Normalize Numeric Features](statics/normalize-num-features.png)" ] }, { "cell_type": "markdown", "id": "e3708119-7f21-47c5-929d-67e6f5e40c46", "metadata": {}, "source": [
 "## Custom Transform\n",
 "---\n",
 "\n",
 "Rename columns to make them easier to work with.\n",
 "\n",
 "1. Click **Add step**.\n",
 "2. Select **Custom transform**.\n",
 "3. For **Name**, enter *Rename Columns*.\n",
 "4. Select **Python (Pandas)**.\n",
 "5. Place the following code snippet into the code box.\n",
 "\n",
 "```\n",
 "# Table is available as variable `df`\n",
 "# Map the raw KPI column names to shorter, friendlier names\n",
 "column_names = {\n",
 "    \"cellname_nrcell\": \"location_id\",\n",
 "    \"number_of_5g_users\": \"5g_users\",\n",
 "    \"5g_rach_contention_rate_num\": \"contention_rate\",\n",
 "    \"5g_rrc_setup_success_rate_num\": \"rrc_success_rate\",\n",
 "    \"5g_avg_uplink_rssi\": \"uplink_rssi\",\n",
 "    \"5g_cce_utilization_num\": \"cce_utilization\",\n",
 "    \"5g_cell_downlink_avg_throughput_num\": \"downlink_throughput\",\n",
 "    \"5g_cell_uplink_avg_throughput_num\": \"uplink_throughput\",\n",
 "}\n",
 "df = df.rename(columns=column_names)\n",
 "```\n",
 "\n",
 "![Rename Columns](statics/fe-rename-cols.png)" ] }, { "cell_type": "markdown", "id": "17553b54-5605-4326-a46a-c49f2aaa44a7", "metadata": {}, "source": [
 "## Data Destination\n",
 "---\n",
 "\n",
 "You will be exporting the clean features into S3 and Amazon SageMaker Feature Store for the following labs 2 and 3, respectively.\n",
 "\n",
 "At this point, your Data Wrangler workflow should look something like this:\n",
 "\n",
 "![Add Destination](statics/add-destination-0.png)\n",
 "\n",
 "Let’s create a data destination for Feature Store first:\n",
 "\n",
 "1. Click the **“+”** from the node you wish to export from. All transforms made before, up to, and including that node will be included in the export.\n",
 "2. Choose **Add destination**. You can choose S3 or Feature Store, but for our example, we'll select **Feature Store**.\n",
 "\n",
 "![Add Destination](statics/add-destination-1.png)\n",
 "\n",
 "3. Select **5gcell-anomaly-features** from the feature group list. \n",
 "4. Click on **“Click this message to .....”** to validate the data schema. Follow the instructions to add the **event time column**.\n",
 "5. Click **Add** to add the feature group as a data destination.\n",
 "\n",
 "![Add Destination](statics/add-destination-2.png)\n",
 "\n",
 "**Note: At this point, the columns in your data flow have to match the schema in the feature store. If you are getting a mismatch, please carefully trace back the steps and recheck your work.**\n",
 "\n",
 "\n",
 "Follow a similar process to create a data destination with S3:\n",
 "\n",
 "1. Click the **'+'** from the node you wish to export from. All transforms made before, up to, and including that node will be included in the export.\n",
 "2. Choose **Add destination**. You can choose S3 or Feature Store, but for our example, we'll select S3.\n",
 "3. The *Add Destination* panel appears on the right side of Studio.\n",
 "    1. Dataset name *(5gcell-clean.csv)* \n",
 "    2. S3 location: *s3://sagemaker--/telco-5g-observability/data/clean*" ] }, { "cell_type": "markdown", "id": "38657982-04b9-4904-9723-dce38b938852", "metadata": {}, "source": [
 "## Process Full Data\n",
 "---\n",
 "\n",
 "Now we are ready to execute the entire data flow to process our data end-to-end:\n",
 "\n",
 "1. Click **Create job**. \n",
 "\n",
 "![Process Data](statics/process-data-1.png)\n",
 "\n",
 "The *Create job* panel appears from the right side with a default job name already populated. There are also *KMS* and *Refit trained parameters* options.\n",
 "If you were encrypting the output data with KMS, you could enter that key ARN here. For the purposes of this workshop, we'll keep it blank.\n",
 "\n",
 "2\\. Leave all other fields at their defaults, and click **Next, Configure Job**.\\\n",
 "The *Configure job* panel is now displayed.\n",
 "When we execute the job, the processing runs in parallel across multiple EC2 instances. Based on the size of your dataset and the complexity of the transforms, you may wish to select a different instance type and count to improve speed and performance. For now, we'll keep the defaults. \n",
 "Under the *Spark memory configuration* tab, we can override the default driver and executor memory settings. For now, we'll keep the defaults. \n",
 "Under the *Parameters* tab, we see the *basename_param* we had defined earlier. If you click on it, you can change the value. Since this is the correct path we wish to use, we'll leave the default.\n",
 "If you're processing data periodically, you can create a schedule to run the processing job automatically. For example, you can create a schedule that runs a processing job automatically when you get new data. This is configured under the *Associate Schedules* tab. \n",
 "For more information on creating schedules, please see [Create a Schedule to Automatically Process New Data](https://docs.aws.amazon.com/sagemaker/latest/dg/data-wrangler-data-export.html#data-wrangler-data-export-schedule-job).\n",
 "\n",
 "3. Click **Create** to create the processing job. The confirmation screen will appear.\n",
 "\n",
 "![Process Data](statics/process-data-2.png)\n",
 "\n",
 "You can click on the *Processing Job* name in the confirmation dialog to monitor the state of the export. Once completed, your data will be saved to both the Feature Store and S3 destinations you defined." ] }, { "cell_type": "markdown", "id": "4eeb7c6e-5d12-48d4-9491-e163ccba802e", "metadata": {}, "source": [
 "## [Optional] Train & Test Split\n",
 "---\n",
 "\n",
 "**Note: This section of the lab is optional.**\n",
 "\n",
 "You can also train a model from SageMaker Data Wrangler. This feature uses Amazon SageMaker Autopilot to automatically train, tune, and deploy models on the data that you've just transformed. Under the hood, Autopilot tries several algorithms and uses the one that works best with your data.\n",
 "Now we want to split our train and test sets. We can do this using the Split data transform.\n",
 "\n",
 "To train a model using Autopilot, you will first split the data into train and test sets.\n",
 "\n",
 "1. Click **Add step**.\n",
 "2. Select **Split data**.\n",
 "3. For **Transform**, select *Randomized split*.\n",
 "4. Set Train: *0.8*, Test: *0.2*.\n",
 "5. Click **Preview** and then **Add**.\n",
 "\n",
 "![Train & Test Split](statics/train-test-1.png)\n",
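 "\n",
 "Conceptually, the randomized split corresponds to this pandas sketch (an illustration; Data Wrangler performs the split inside the flow):\n",
 "\n",
 "```\n",
 "# Shuffle, keep 80% for training, and use the remaining 20% for test\n",
 "train = df.sample(frac=0.8, random_state=42)\n",
 "test = df.drop(train.index)\n",
 "```\n",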
 "\n",
 "For the purpose of this lab, we are going to use the Test dataset for batch inference, so we will add one more step to remove the target column. Click the **“+”** icon near Dataset: **5gcell.csv (Test)**.\n",
 "\n",
 "1. Click **Add transform** (Test data only).\n",
 "2. Click **Add step**.\n",
 "3. Select **Manage columns**.\n",
 "4. For **Transform**, select **Drop column**.\n",
 "5. For **Columns to drop**, select *anomaly*.\n",
 "6. Click **Preview** and then **Add**.\n",
 "\n",
 "![Train & Test Split](statics/train-test-2.png)\n",
 "\n",
 "Your data flow should now be split into 2 dataset branches. For each branch, let’s add an S3 destination following the previous instructions.\n",
 "\n",
 "- For the train dataset: \n",
 "\n",
 "    - dataset name (*train.csv*), and the \n",
 "    - S3 location: *s3://sagemaker--/telco-5g-observability/data/train*.\n",
 "    - Leave everything else default, then click **Add destination**. \n",
 " \n",
 "* For the test dataset: \n",
 " \n",
 "    * dataset name (*test.csv*)\n",
 "    * S3 location: *s3://sagemaker--/telco-5g-observability/data/test*.\n",
 "    * Leave everything else default, then click **Add destination**. \n",
 "\n",
 "Your flow should look like this. Once you are at this point, create a new job to transform your data.\n",
 "\n",
 "![Train & Test Split](statics/train-test-3.png)\n" ] }, { "cell_type": "markdown", "id": "bc07f202-0ffd-4af1-bcf7-431817447c58", "metadata": {}, "source": [
 "## [Optional] Train a Model with Autopilot\n",
 "---\n",
 "\n",
 "**Note: This section of the lab is optional.**\n",
 "\n",
 "When you train and tune a model, Data Wrangler exports your data to an Amazon S3 location where Amazon SageMaker Autopilot can access it.\n",
 "\n",
 "1. Choose the **“+”** next to the train dataset, and select **Train model**.\n",
 "2. For Amazon S3 location, specify the Amazon S3 location where SageMaker exports your data. If presented with a root bucket path by default, Data Wrangler will create a unique export sub-directory under it; you don’t need to modify this default root path unless you’d like to.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-1.png)\n",
 "\n",
 "You can accept the defaults and click the **Export and train** button to export the transformed data to S3.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-2.png)\n",
 "\n",
 "\n",
 "Once the export is successful, you are taken to the **Create an Autopilot experiment** page, with the Input data S3 location already filled in for you (as it was populated from the results of the previous screen).\n",
 "\n",
 "3. Optionally, set an Experiment name (if you don’t want the default name).\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-3.png)\n",
 "\n",
 "\n",
 "4. Click **Next: Target and features**.\n",
 "5. For **Target**, select *anomaly*.\n",
 "6. Click **Next: Training method**.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-4.png)\n",
 "\n",
 "As detailed in the post *Amazon SageMaker Autopilot is up to eight times faster with new ensemble training mode powered by AutoGluon*, you can either let Autopilot select the training mode automatically based on the dataset size, or select the training mode manually for either ensembling or hyperparameter optimization (HPO).\n",
 "The details of each option are as follows:\n",
 "\n",
 "* **Auto** – Autopilot automatically chooses either ensembling or HPO mode based on your dataset size. If your dataset is larger than 100 MB, Autopilot chooses HPO; otherwise, it chooses ensembling.\n",
 "* **Ensembling** – Autopilot uses the AutoGluon ensembling technique to train several base models and combines their predictions using model stacking into an optimal predictive model.\n",
 "* **Hyperparameter optimization** – Autopilot finds the best version of a model by tuning hyperparameters using the Bayesian optimization technique and running training jobs on your dataset. HPO selects the algorithms most relevant to your dataset and picks the best range of hyperparameters to tune the models.\n",
 "\n",
 "7. For our workshop example, we'll leave the default selection of Auto.\n",
 "8. Click **Next: Deployment and advanced settings** to continue.\n",
 "9. For **Deployment settings**, switch **Auto deploy** to **Yes**. Input an **Auto deploy endpoint name**. \n",
 "10. For **Select the machine learning problem type**, choose **Binary classification**. For **Objective metric**, select **F1**.\n",
 "11. Click **Next: Review and create**.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-5.png)\n",
 "\n",
 "12. Click **Create experiment**.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-6.png)\n",
 "\n",
 "13. The results are listed and sorted in decreasing order of the objective (F1) score. The best-performing model is also highlighted.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-7.png)\n",
 "\n",
 "Click **View model details**, then the **Explainability** tab, to show **FEATURE IMPORTANCE**. You can select explanations for **“0.0”** or **“1.0”**.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-8.png)\n",
 "\n",
 "Finally, with just a few clicks, you can **Deploy** any of your Autopilot experiments, either online to a real-time endpoint or offline for batch predictions.\n",
 "\n",
 "![Train Autopilot](statics/train-autopilot-9.png)\n",
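 "\n",
 "If you enabled auto deploy, a hedged boto3 sketch of invoking the resulting real-time endpoint could look like the following (the endpoint name and the feature row are hypothetical placeholders):\n",
 "\n",
 "```\n",
 "import boto3\n",
 "\n",
 "runtime = boto3.client('sagemaker-runtime')\n",
 "response = runtime.invoke_endpoint(\n",
 "    EndpointName='5gcell-anomaly-endpoint',  # hypothetical name from step 9\n",
 "    ContentType='text/csv',\n",
 "    Body='0.42,0.91,0.13,0.07,0.55,17,9',    # one unlabeled feature row\n",
 ")\n",
 "# The response body holds the predicted label, for example '1'\n",
 "print(response['Body'].read().decode())\n",
 "```\n"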
\n", "\n", "![Train Autopilot](statics/train-autopilot-8.png)\n", "\n", "Finally, with just a few clicks, you can **Deploy** any of your Autopilot experiments, either online to an realtime endpoint or offline for batch predictions.\n", "\n", "![Train Autopilot](statics/train-autopilot-9.png)\n" ] } ], "metadata": { "availableInstances": [ { "_defaultOrder": 0, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.t3.medium", "vcpuNum": 2 }, { "_defaultOrder": 1, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.t3.large", "vcpuNum": 2 }, { "_defaultOrder": 2, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.t3.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 3, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.t3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 4, "_isFastLaunch": true, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5.large", "vcpuNum": 2 }, { "_defaultOrder": 5, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 6, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 7, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 8, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 9, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 10, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 11, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 12, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.m5d.large", "vcpuNum": 2 }, { "_defaultOrder": 13, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.m5d.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 14, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.m5d.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 15, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.m5d.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 16, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.m5d.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 17, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.m5d.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 18, 
"_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.m5d.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 19, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.m5d.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 20, "_isFastLaunch": false, "category": "General purpose", "gpuNum": 0, "hideHardwareSpecs": true, "memoryGiB": 0, "name": "ml.geospatial.interactive", "supportedImageNames": [ "sagemaker-geospatial-v1-0" ], "vcpuNum": 0 }, { "_defaultOrder": 21, "_isFastLaunch": true, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 4, "name": "ml.c5.large", "vcpuNum": 2 }, { "_defaultOrder": 22, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 8, "name": "ml.c5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 23, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.c5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 24, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.c5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 25, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 72, "name": "ml.c5.9xlarge", "vcpuNum": 36 }, { "_defaultOrder": 26, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 96, "name": "ml.c5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 27, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 144, "name": "ml.c5.18xlarge", "vcpuNum": 72 }, { "_defaultOrder": 28, "_isFastLaunch": false, "category": "Compute optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.c5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 29, "_isFastLaunch": true, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g4dn.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 30, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g4dn.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 31, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g4dn.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 32, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g4dn.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 33, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g4dn.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 34, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g4dn.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 35, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 61, "name": "ml.p3.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 36, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 244, "name": "ml.p3.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 37, "_isFastLaunch": false, 
"category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 488, "name": "ml.p3.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 38, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.p3dn.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 39, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.r5.large", "vcpuNum": 2 }, { "_defaultOrder": 40, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.r5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 41, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.r5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 42, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.r5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 43, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.r5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 44, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.r5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 45, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 512, "name": "ml.r5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 46, "_isFastLaunch": false, "category": "Memory Optimized", "gpuNum": 0, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.r5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 47, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 16, "name": "ml.g5.xlarge", "vcpuNum": 4 }, { "_defaultOrder": 48, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 32, "name": "ml.g5.2xlarge", "vcpuNum": 8 }, { "_defaultOrder": 49, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 64, "name": "ml.g5.4xlarge", "vcpuNum": 16 }, { "_defaultOrder": 50, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 128, "name": "ml.g5.8xlarge", "vcpuNum": 32 }, { "_defaultOrder": 51, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 1, "hideHardwareSpecs": false, "memoryGiB": 256, "name": "ml.g5.16xlarge", "vcpuNum": 64 }, { "_defaultOrder": 52, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 192, "name": "ml.g5.12xlarge", "vcpuNum": 48 }, { "_defaultOrder": 53, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 4, "hideHardwareSpecs": false, "memoryGiB": 384, "name": "ml.g5.24xlarge", "vcpuNum": 96 }, { "_defaultOrder": 54, "_isFastLaunch": false, "category": "Accelerated computing", "gpuNum": 8, "hideHardwareSpecs": false, "memoryGiB": 768, "name": "ml.g5.48xlarge", "vcpuNum": 192 } ], "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": 
".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }