AWSTemplateFormatVersion: "2010-09-09" Description: CUR Dashboard app stack launched into an existing VPC including ASG with timed scale-up / scale-down policy. Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VPCID - Subnet1 - Subnet2 - Label: default: "Security Configuration" Parameters: - KeypairName - Label: default: "AWS Quick Start Configuration" Parameters: - QSS3BucketName - QSS3KeyPrefix - QSResourceTagPrefix - Label: default: "CUR Report Configuration" Parameters: - CURBucket - CURUploadBucket - ReportName - ReportPath - Label: default: "CUR Conversion Configuration" Parameters: - Schedule - InstanceType - SpotPrice - Label: default: "CUR Dashboard Code Configuration" Parameters: - ConfigFile - GitBranch ParameterLabels: VPCID: default: "Existing VPC" Subnet1: default: "Existing Subnet #1" Subnet2: default: "Existing Subnet #2" KeypairName: default: "Key Pair Name" QSS3BucketName: default: "QS S3 Bucket Name" QSS3KeyPrefix: default: "QS S3 Key Prefix" QSResourceTagPrefix: default: "QS Tag Prefix" CURBucket: default: "Source CUR Bucket" CURUploadBucket: default: "Destination CUR Bucket (Leave blank if source and destination bucket should be the same)" ReportName: default: "CUR Report Name" ReportPath: default: CUR Report Path" Schedule: default: "CUR conversion schedule" InstanceType: default: "Instance Type/Size" SpotPrice: default: "Max price per hour for bid" ConfigFile: default: "Config file name" GitBranch: default: "Config git branch" Parameters: CURBucket: Type: String Description: S3 bucket configured for Cost and Usage Reports MinLength: 2 ReportName: Type: String Description: As defined under 'Report Name' within 'AWS Cost and Usage Reports' in Billing Dashboard MinLength: 2 ReportPath: Type: String Description: Ss defined under 'Report Path' within 'AWS Cost and Usage Reports' in Billing Dashboard MinLength: 2 CURUploadBucket: Type: String Default: '' Description: OPTIONAL S3 bucket for storing converted CUR files (only use this is your CUR bucket is invalid for use with Athena i.e. contains a underscore) InstanceType: Type: String Description: Instance Type to run. Larger instances required for larger CUR's Default: c4.large AllowedValues: - c4.large - c4.xlarge - c4.2xlarge - m4.large - m4.xlarge - m4.2xlarge - c5.large - c5.xlarge - c5.2xlarge SpotPrice: Type: Number Default: 0.50 MinValue: 0.01 Description: The Maximum price for the instance you want to pay (Note Actual price paid will vary based on spot market) Schedule: Type: Number Default: 4 MinValue: 1 Description: Time gap (in hours) between CUR automatic processing (the lower, the more frequent the conversion will occur) ConfigFile: Type: String Default: "analyzeCUR.config" Description: Name of cur-dashboard configuration file. This will be pushed into code-commit repo on first execution GitBranch: Type: String Default: "master" MinLength: 2 Description: Used branch for Codecommit config repo. Modify if testing changes before merge to master" KeypairName: Type: AWS::EC2::KeyPair::KeyName Description: EC2 Key pair for instances performing CUR conversion VPCID: Description: The VPC to launch the CURdashboard into Type: AWS::EC2::VPC::Id Subnet1: Description: Subnet in AZ 1 to launch spot instances into Type: AWS::EC2::Subnet::Id Subnet2: Description: Subnet in AZ 2 to launch spot instances into Type: AWS::EC2::Subnet::Id QSResourceTagPrefix: Type: String Description: "Tag prefix Prefix for resources that are created as part of cur-dashboard" ConstraintDescription: "Resource tag prefix can include numbers, lowercase letters, uppercase letters, hyphens (-)." AllowedPattern: "^[0-9a-zA-Z-]*$" Default: "cur-dashboard" QSS3KeyPrefix: Type: String Description: "S3 key prefix for the Quick Start assets." ConstraintDescription: "Quick Start key prefix can include numbers, lowercase letters, uppercase letters, hyphens (-), and forward slash (/)." AllowedPattern: "^[0-9a-zA-Z-/]*$" Default: "cur-dashboard/" QSS3BucketName: Type: String Description: "S3 bucket name for the Quick Start assets." ConstraintDescription: "Quick Start bucket name can include numbers, lowercase letters, uppercase letters, and hyphens (-). It cannot start or end with a hyphen (-)." AllowedPattern: "^[0-9a-zA-Z]+([0-9a-zA-Z-]*[0-9a-zA-Z])*$" Default: "aws-quickstart" Mappings: RegionMap: us-east-1: "ami": "ami-8c1be5f6" us-east-2: "ami": "ami-c5062ba0" us-west-2: "ami": "ami-e689729e" eu-central-1: "ami": "ami-c7ee5ca8" eu-west-1: "ami": "ami-acd005d5" eu-west-2: "ami": "ami-1a7f6d7e" ap-south-1: "ami": "ami-4fc58420" ap-northeast-2: "ami": "ami-9bec36f5" ap-southeast-1: "ami": "ami-0797ea64" ap-southeast-2: "ami": "ami-8536d6e7" ap-northeast-1: "ami": "ami-2a69be4c" Conditions: NoUploadBucket: "Fn::Equals": - { Ref: CURUploadBucket } - "" Resources: IamRole: Type: AWS::IAM::Role DependsOn: ConfigRepo Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Sid: 'PermitAssumeRoleEc2' Action: - sts:AssumeRole Effect: Allow Principal: Service: - ec2.amazonaws.com Path: / Policies: - PolicyName: autoCURPolicy PolicyDocument: Version: "2012-10-17" Statement: - PolicyName: "autoCURPolicy" PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - athena:GetQueryExecution - athena:GetQueryResults - athena:RunQuery - athena:StartQueryExecution - glue:CreateDatabase - glue:CreateTable - glue:GetDatabase - glue:GetTable Resource: - "*" - Effect: Allow Action: - s3:GetBucketLocation - s3:GetObject - s3:ListBucket - s3:ListBucketMultipartUploads - s3:ListMultipartUploadParts - s3:AbortMultipartUpload - s3:CreateBucket - s3:PutObject Resource: - "arn:aws:s3:::aws-athena-query-results-*" - Effect: Allow Action: - s3:GetBucketLocation - s3:ListBucket - s3:GetObject - s3:ListObjects Resource: - !Sub 'arn:aws:s3:::${CURBucket}' - !Sub 'arn:aws:s3:::${CURBucket}/*' - Effect: Allow Action: - s3:GetObject - s3:ListObjects - s3:PutObject - s3:DeleteObject Resource: - !Sub - 'arn:aws:s3:::${DerivedBucket}/parquet-cur/*' - {DerivedBucket: !If [NoUploadBucket, !Ref CURBucket, !Ref CURUploadBucket]} - !Sub - 'arn:aws:s3:::${DerivedBucket}/parquet-cur' - {DerivedBucket: !If [NoUploadBucket, !Ref CURBucket, !Ref CURUploadBucket]} - Effect: Allow Action: - s3:GetObject Resource: !Sub 'arn:aws:s3:::${QSS3BucketName}/${QSS3KeyPrefix}*' - Effect: Allow Action: - cloudwatch:PutMetricData - ec2:DescribeReservedInstances - autoscaling:DescribeAutoScalingInstances Resource: - "*" - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents - logs:DescribeLogStreams Resource: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:CURdashboard:*' - Effect: Allow Action: - autoscaling:UpdateAutoScalingGroup Resource: "*" Condition: StringEquals: autoscaling:ResourceTag/Name: !Sub '${QSResourceTagPrefix}-asg' - Effect: Allow Action: - codecommit:GitPull - codecommit:GitPush Resource: - "Fn::GetAtt": [ ConfigRepo, Arn ] IamProfile: Type: AWS::IAM::InstanceProfile Properties: Path: / Roles: - Ref: IamRole SecurityGroup: Type: AWS::EC2::SecurityGroup Properties: VpcId: !Ref VPCID GroupDescription: !Sub '${QSResourceTagPrefix}-sg' Tags: - Key: Name Value: !Sub '${QSResourceTagPrefix}-sg' LaunchConfig: Type: AWS::AutoScaling::LaunchConfiguration Metadata: 'AWS::CloudFormation::Authentication': S3AccessCreds: type: S3 roleName: !Ref IamRole buckets: - !Ref QSS3BucketName Properties: ImageId: "Fn::FindInMap": - RegionMap - Ref: "AWS::Region" - "ami" KeyName: { Ref: KeypairName } IamInstanceProfile: { Ref: IamProfile } InstanceType: { Ref: InstanceType } SpotPrice: { Ref: SpotPrice } SecurityGroups: - { Ref: SecurityGroup } UserData: "Fn::Base64": "Fn::Join": - "\n" - - "#!/bin/bash" - "yum install -y git" - "export HOME=/root" - "git config --global credential.helper '!aws codecommit credential-helper $@'" - "git config --global credential.UseHttpPath true" - "Fn::Join": - "" - - "git clone " - "Fn::GetAtt": [ ConfigRepo, CloneUrlHttp ] - " cur-dashboard-config" - "cd cur-dashboard-config" - "Fn::Sub": "git checkout ${GitBranch}" - "Fn::Sub": "if [ ! -f ./${ConfigFile} ]; then" - "Fn::Sub": "aws s3 cp s3://${QSS3BucketName}/${QSS3KeyPrefix}scripts/analyzeCUR.config ./${ConfigFile}" - "Fn::Sub": "git add .; git commit -m 'Default Config File'; git push origin ${GitBranch}" - "fi" - "Fn::Sub": "aws s3 cp s3://${QSS3BucketName}/${QSS3KeyPrefix}scripts/bin/analyzeCUR ." - "Fn::If": - NoUploadBucket - "Fn::Sub": "chmod 755 ./analyzeCUR && ./analyzeCUR -bucket ${CURBucket} -reportname ${ReportName} -reportpath ${ReportPath} -account ${AWS::AccountId} -config ./${ConfigFile}" - "Fn::Sub": "chmod 755 ./analyzeCUR && ./analyzeCUR -bucket ${CURBucket} -reportname ${ReportName} -reportpath ${ReportPath} -account ${AWS::AccountId} -config ./${ConfigFile} -destbucket ${CURUploadBucket}" - "instance=`curl -s http://169.254.169.254/latest/meta-data/instance-id`" - "Fn::Sub": "group=`aws autoscaling describe-auto-scaling-instances --instance-ids $instance --region ${AWS::Region} --query AutoScalingInstances[0].AutoScalingGroupName --output text`" - "Fn::Sub": "aws autoscaling update-auto-scaling-group --auto-scaling-group-name $group --region ${AWS::Region} --desired-capacity 0" Asg: Type: AWS::AutoScaling::AutoScalingGroup Properties: Tags: - Key: Name Value: !Sub '${QSResourceTagPrefix}-asg' PropagateAtLaunch: true - Key: Project Value: "CURdashboard" PropagateAtLaunch: true MinSize: 0 MaxSize: 1 DesiredCapacity: 0 LaunchConfigurationName: { Ref: LaunchConfig } VPCZoneIdentifier: - !Ref Subnet1 - !Ref Subnet2 AsgScheduleUp: Type: AWS::AutoScaling::ScheduledAction Properties: AutoScalingGroupName: { Ref: Asg } DesiredCapacity: 1 Recurrence: "Fn::Sub": "1 */${Schedule} * * *" AsgScheduleDown: Type: AWS::AutoScaling::ScheduledAction Properties: AutoScalingGroupName: { Ref: Asg } DesiredCapacity: 0 Recurrence: "Fn::Sub": "55 */1 * * *" ConfigRepo: Type: AWS::CodeCommit::Repository Properties: RepositoryName: !Sub '${QSResourceTagPrefix}-config-repo' RepositoryDescription: "cur-dashboard repository for configuration file" Policy: Type: AWS::IAM::Policy Properties: Roles: - { Ref: CustomResourceLambdaFunctionExecutionRole } PolicyName: CURDashboardLambdaCustomResourceCleaner PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - athena:GetQueryExecution - athena:GetQueryResults - athena:RunQuery - athena:StartQueryExecution - glue:DeleteDatabase - glue:DeleteTable - glue:GetDatabase - glue:GetTable - glue:GetTables Resource: - "*" - Effect: Allow Action: - s3:GetBucketLocation - s3:GetObject - s3:ListBucket - s3:ListBucketMultipartUploads - s3:ListMultipartUploadParts - s3:AbortMultipartUpload - s3:CreateBucket - s3:PutObject Resource: - "arn:aws:s3:::aws-athena-query-results-*" - Effect: Allow Action: - "logs:DeleteLogGroup" # Disable permissions for cleanup Lambda to create cloudwatch logs (as we want to be clean). Uncomment for debugging. # - "logs:CreateLogGroup" # - "logs:CreateLogStream" # - "logs:PutLogEvents" Resource: "arn:aws:logs:*:*:*" - Effect: Allow Action: - "s3:DeleteObject" - "s3:ListBucket" - "s3:ListObjects" Resource: - !Sub - 'arn:aws:s3:::${DerivedBucket}/*' - {DerivedBucket: !If [NoUploadBucket, !Ref CURBucket, !Ref CURUploadBucket]} - !Sub - 'arn:aws:s3:::${DerivedBucket}' - {DerivedBucket: !If [NoUploadBucket, !Ref CURBucket, !Ref CURUploadBucket]} CustomResourceLambdaFunctionExecutionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Principal: Service: [ lambda.amazonaws.com ] Action: - sts:AssumeRole Path: / CustomResourceLambdaFunction: Type: AWS::Lambda::Function Metadata: Comment: "Fn::Sub": "Function for ${QSResourceTagPrefix}" DependsOn: [ CustomResourceLambdaFunctionExecutionRole ] Properties: Code: ZipFile: "Fn::Sub": | import boto3 import json import logging import traceback import cfnresponse #Logging configuration logging.basicConfig() logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) DB_NAME = 'CUR' LOGGROUP_NAME = 'CURdashboard' def get_account_id(context): return context.invoked_function_arn.split(':')[4] def get_region(context): return context.invoked_function_arn.split(':')[3] def run_query(query, database, s3_output): client = boto3.client('athena') response = client.start_query_execution( QueryString=query, QueryExecutionContext={ 'Database': database }, ResultConfiguration={ 'OutputLocation': s3_output, } ) print('Execution ID: ' + response['QueryExecutionId']) return response def delete_athena_resources(context, db): # Delete Athena Database resp = run_query( "DROP DATABASE " + db + ' CASCADE', 'default', 's3://aws-athena-query-results-' + get_account_id(context) + '-' + get_region(context) + '/' ) def delete_s3_resources(bucket): delete_key_list = [] s3 = boto3.client('s3') paginator = s3.get_paginator('list_objects') try: for result in paginator.paginate(Bucket=bucket, Prefix='parquet-cur'): for content in result['Contents']: delete_key_list.append({'Key': content.get('Key')}) if len(delete_key_list) > 500: s3.delete_objects(Bucket=bucket, Delete={'Objects': delete_key_list, 'Quiet': True }) delete_key_list = [] if len(delete_key_list) > 0: s3.delete_objects(Bucket=bucket, Delete={'Objects': delete_key_list, 'Quiet': True }) except Exception as e: logger.error('Failed to delete S3 resources Exception: {0}'.format(traceback.format_exc())) def delete_cwlogs_resources(group): logs = boto3.client('logs') try: logs.delete_log_group( logGroupName=group ) except Exception as e: logger.error('Failed to delete CW Logs resources Exception: {0}'.format(traceback.format_exc())) def delete_resource(event, context): delete_athena_resources(context, DB_NAME) delete_s3_resources(event['ResourceProperties']['BucketName']) delete_cwlogs_resources(LOGGROUP_NAME) def handler(event, context): logger.info('Got Event: {}'.format(event)) operation = event['RequestType'] try: if operation == 'Delete': delete_resource(event, context) except Exception as e: logger.error('CloudFormation custom resource {0} failed. Exception: {1}'.format(operation, traceback.format_exc())) status = cfnresponse.FAILED else: status = cfnresponse.SUCCESS logger.info('CloudFormation custom resource {0} succeeded.'.format(operation)) cfnresponse.send(event, context, status, {'done': True}) Role: { "Fn::GetAtt": [ CustomResourceLambdaFunctionExecutionRole, Arn ] } Timeout: "30" # Seconds. Handler: index.handler Runtime: python3.6 MemorySize: "128" # MB. CleanUp: Type: Custom::CleanUp Properties: ServiceToken: { "Fn::GetAtt": [ CustomResourceLambdaFunction, Arn ] } Region: { Ref: "AWS::Region" } BucketName: "Fn::If": - NoUploadBucket - { Ref: CURBucket } - { Ref: CURUploadBucket } Outputs: ConfigRepoName: Value: "Fn::GetAtt": [ ConfigRepo, Name ] ConfigRepoSSH: Value: "Fn::GetAtt": [ ConfigRepo, CloneUrlSsh ] ConfigRepoHTTP: Value: "Fn::GetAtt": [ ConfigRepo, CloneUrlHttp ]