# cur-dashboard

cur-dashboard is an automated, extendable and configurable analyzer for the AWS Cost and Usage Report (CUR). Use it to:

* Visualize your AWS costs via CloudWatch dashboards
* Alert and react to spend changes within a few hours
* Build automation and alarming based on changes or cost thresholds
* Gain insights into your AWS costs using simple SQL

![DBRdashboard Screenshot](https://raw.githubusercontent.com/andyfase/awsDBRanalysis/master/dbr_dashboard.png)

cur-dashboard automatically converts the CUR into a compact, query-efficient Parquet format and creates and maintains a set of AWS Athena tables that can be easily queried using standard SQL. A table is created per month and kept up to date as new billing data is produced.

It can also produce cost metrics which are piped to AWS CloudWatch, allowing dashboards to be built and alarms/automation to be developed. Metrics are generated from SQL queries which are executed using AWS Athena. A standard set of queries is provided; however, the solution is designed so that further queries can easily be configured, letting you produce metrics based on your own needs and requirements.

## How does this work?

AWS publishes the [Cost and Usage Report](http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports.html#other-reports) periodically during the day. These reports contain very detailed line-by-line billing details for every AWS charge related to your account.

cur-dashboard sets up an AWS Auto Scaling group configured to spin up an EC2 instance every N hours. This instance bootstraps itself, converts the CSV-based CUR reports into [Parquet format](https://parquet.apache.org/) and re-uploads the converted files to S3. It then uses AWS Athena and standard SQL to create the CUR tables within Athena and query specific billing metrics from them. The results of the queries are reported to AWS CloudWatch as custom metrics.

Once the metrics are in CloudWatch, it is then very easy to:

* Graph metrics and create a billing CloudWatch dashboard customized to your exact requirements.
* Produce alarms based on CUR metrics, which can then trigger alerting or automation via SNS.

## How much will this cost?

cur-dashboard uses a number of cost-saving measures to minimize its cost.

For compute, EC2 Spot instances are used. The instance self-terminates after a few minutes of processing, so with EC2 per-second billing you are only charged for the actual processing time.

AWS Athena charges based on the amount of data scanned by each query. Parquet files greatly reduce the quantity of data Athena needs to process, so the cost of each query is reduced. Additionally, each query/metric can be enabled or disabled so that only the metrics you need are ingested into CloudWatch.

Overall costs will vary depending on the size of the Cost and Usage Reports, the type of EC2 instance required and the number of metrics ingested into CloudWatch. Compute, storage and query costs are expected to be less than $1 per month.

## Setup

Setup of cur-dashboard should take ~10 minutes.

### Step 1 - Setup CUR

If you have not already, turn on the Cost and Usage Report in your AWS account by following the instructions [here](http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports.html#turnonreports).

Please allow up to 24 hours for the CUR report to be generated and pushed into the configured S3 bucket before proceeding to step 2.
### Step 2 - Create Stack

Create the cur-dashboard environment using one of the following CloudFormation stacks:

* Use [this template](https://raw.githubusercontent.com/andyfase/cur-dashboard/develop/templates/CURdashboard-master.yaml) to create cur-dashboard within a new, dedicated VPC.
* Use [this template](https://raw.githubusercontent.com/andyfase/cur-dashboard/develop/templates/CURdashboard.yaml) to create cur-dashboard within an existing VPC.

It is important to ensure the parameters in __CUR Report Configuration__ are correct. Copy and paste from the CUR report definition in the AWS Billing Dashboard as shown below:

![CUR report definition screenshot](https://github.com/andyfase/cur-dashboard/raw/develop/help/setup1.png)

1. Copy the value of **S3 bucket** and paste it into **Source CUR Bucket Name**
1. Copy the value of **Report Name** and paste it into **CUR Report Name**
1. Copy the value of **Report Path** and paste it into **CUR Report Path**

Default values are provided for the majority of the other parameters.

### Step 3 - Start Conversion

To start the CUR conversion immediately, find the Auto Scaling group created by the CloudFormation stack within the EC2 console and set its **Desired** value to 1. This will immediately spin up an EC2 instance which will perform the first CUR conversion, create the Athena tables and produce the default initial CloudWatch metrics.

The code that performs the CUR conversion writes logs to CloudWatch Logs; look for a log group called **CURDashboard**. A new log stream is added every time an EC2 instance is spun up.

Once the CUR conversion is complete, the code sets the desired value of the Auto Scaling group back to zero and the instance is terminated. At that point the CUR Athena tables will be available in the Athena console under the database called **cur**, and custom CloudWatch metrics with the namespace **CUR** will be visible.

From this point on, the Auto Scaling group will periodically spin up based on the schedule defined in the CloudFormation template.

### Step 4 - Setup CUR-based CloudWatch Graphs and Dashboard

Using the custom metrics, generate the graphs that you would like and start creating your very own cost dashboard. Instructions on creating a CloudWatch dashboard can be found [here](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html).

### Step 5 (Optional) - Modify Configuration File

cur-dashboard is designed to be highly flexible. The naming of the Athena tables and the default metrics that are sent to CloudWatch can all be altered, enabled or disabled via a configuration file. New metrics can also be added based on custom SQL that you write.

The codebase uses a configuration file that was pushed, on first use, into a CodeCommit repository created by the CloudFormation template. Every time an EC2 instance is spun up it clones the repository and uses the configuration file `analyzeCUR.config` at the HEAD of the branch configured when the CloudFormation stack was created (the master branch by default). To push changes to this file, simply clone the repository locally, make the changes, then commit and push them. The syntax for the configuration file is described below.

## Configuration

Configuration for cur-dashboard is performed by editing the configuration file `analyzeCUR.config`. The configuration file is in [TOML](https://toml.io/) format and has a number of sections, which are described below.
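For orientation, the top of `analyzeCUR.config` might look like the following minimal sketch. The section and option names come from the tables below; the quoted values are illustrative assumptions only, and the file shipped in your CodeCommit repository remains the authoritative reference.

```toml
# Illustrative sketch of the top of analyzeCUR.config.
# Values shown are assumptions; check the file in your CodeCommit repository.

[general]
namespace = "CUR"           # CloudWatch namespace used for all metrics

[athena]
database_name = "CUR"       # Athena database to create (appears as "cur" in the console)
table_prefix  = "autocur"   # monthly tables are named <table_prefix>_MMYYYY

# One or more [[metric]] entries follow; see "Metric Configuration" below.
```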
### General Configuration Options

These options are held within the `[general]` TOML section.

Option Name | Description | Default Value
----------- | ----------- | -------------
`namespace` | The CloudWatch namespace used for all metrics | `CUR`

### Athena Configuration Options

These options are held within the `[athena]` TOML section.

Option Name | Description | Default Value
----------- | ----------- | -------------
`database_name` | The database to create within Athena | `CUR`
`table_prefix` | The prefix used when creating the monthly CUR Athena tables. The current date is appended to this in the format `_MMYYYY` | `autocur`

### Metric Configuration

Each metric is held within a TOML array in the configuration file. This array is iterated over to query Athena and then send the results as metrics to CloudWatch. To add new metrics, simply copy and paste an existing `[[metric]]` entry and then modify its attributes, which are:

Metric Attribute | Description
---------------- | -----------
`enabled` | Enables / disables the metric
`hourly` | Enables / disables hourly metric reporting
`daily` | Enables / disables daily metric reporting
`type` | Reserved for future use. `dimension-per-row` is currently the only accepted value
`cwName` | The metric name that will be sent to CloudWatch
`cwDimension` | The dimension name that will be sent to CloudWatch (the value of the dimension is taken from the `dimension` row value, see below)
`cwType` | The CloudWatch metric type that will be sent to CloudWatch
`sql` | The SQL that will be executed against the Athena CUR table to fetch the metric information (see below)

### Athena Metric SQL

Each metric that you wish to display on the dashboard is obtained by querying the CUR Athena table. Each row that is returned is considered a new metric value. The `date` column is used as the time-series "divider" and is converted to a timestamp which is sent for that row.

Useful default metrics are pre-configured within the original configuration file. These can be disabled if required, or removed completely. New metrics can be added as described above.

cur-dashboard uses a substitution parameter, `**INTERVAL**`, for the date column; this allows the same query to retrieve costs split by hour and by day. It is recommended that the date column SQL always be:

`substr("lineitem/usagestartdate",1,**INTERVAL**) as date`

Each row in the query results **MUST** contain the following aliased columns:

Column Name | Description
----------- | -----------
`date` | The time period for the metric. Typically the hour (format `YYYY-MM-DD HH`) or day
`value` | The metric value for this time period (normally a `count(*)` in SQL)
`dimension` | The dimension value that will be sent for this row

For example, if a query returns a row with:

`date` | `dimension` | `value`
------ | ----------- | ------
2017-02-01 17 | m3.xlarge | 50

then a custom metric (named using the `cwName` parameter) will be sent to CloudWatch as follows:

* The **timestamp** will be set to `2017-02-01 17:00:00`
* The **dimension name** will be set to the parameter value `cwDimension`
* The **dimension value** will be set to `m3.xlarge`
* The **value** will be set to `50`

Every row returned sends a metric using `put-metric-data`.

Note: Athena uses Presto under the hood, so all Presto SQL functions are available for you to use. These can be found [here](https://prestodb.io/docs/current/functions.html).
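Putting the pieces above together, a custom metric entry might look like the following sketch. The attribute names and the `**INTERVAL**` substitution come from the sections above; the metric name, the `cwType` value, the table reference and the CUR column names other than `"lineitem/usagestartdate"` are illustrative assumptions, so base new entries on one of the pre-configured `[[metric]]` definitions in `analyzeCUR.config`.

```toml
# Hypothetical example: count of EC2 usage line items per instance type.
# Treat the table reference and the product/lineitem column names as assumptions;
# copy the exact SQL skeleton from one of the shipped [[metric]] entries.
[[metric]]
enabled     = true
hourly      = true
daily       = true
type        = "dimension-per-row"
cwName      = "InstanceCountByType"   # hypothetical CloudWatch metric name
cwDimension = "InstanceType"          # dimension name sent with each value
cwType      = "Count"                 # assumed CloudWatch metric type
sql         = """
SELECT substr("lineitem/usagestartdate",1,**INTERVAL**) AS date,
       "product/instancetype" AS dimension,
       count(*) AS value
FROM autocur_MMYYYY   -- illustrative: monthly table named <table_prefix>_MMYYYY
WHERE "lineitem/productcode" = 'AmazonEC2'
GROUP BY 1, 2
"""
```

Because the `date` column uses the `**INTERVAL**` substitution, the same entry can feed both hourly and daily CloudWatch metrics when `hourly` and `daily` are enabled.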