{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Streaming Data to your Data Lake\n", "\n", "This workshop will walk through another common scenario when building your data lake. We will create a Kinesis Data Firehose delivery stream to send data to our data lake in real time. We will manually add the table metadata to the Glue Data Catalog and register the partitions for the incoming data. Finally, we will run an aggregate query that takes advantage of the partitions of the generated data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import botocore\n", "import json\n", "\n", "import project_path # path to helper methods\n", "from lib import workshop\n", "\n", "iam = boto3.client('iam')\n", "logs = boto3.client('logs')\n", "firehose = boto3.client('firehose')\n", "glue = boto3.client('glue')\n", "s3 = boto3.client('s3')\n", "\n", "# General variables for the region and account id for the location of the resources being created\n", "session = boto3.session.Session()\n", "region = session.region_name\n", "account_id = boto3.client('sts').get_caller_identity().get('Account')\n", "\n", "workshop_user = 'bdw'\n", "# Kinesis Firehose\n", "delivery_stream_name = 'taxi-streaming_{0}'.format(workshop_user) # Name of the firehose used to send the simulated taxi trip data\n", "firehose_role_name = 'taxi-firehose-role_{0}'.format(workshop_user) # Role used by the Kinesis Firehose\n", "firehose_policy_name = 'taxi-firehose-policy_{0}'.format(workshop_user) # Inline policy of the Kinesis Firehose Role\n", "cloudwatch_logs_group_name = '/taxi_{0}'.format(workshop_user)\n", "cloudwatch_logs_stream_name = 'ingestion-stream'\n", "\n", "# Glue\n", "database_name = 'taxi_{0}'.format(workshop_user)\n", "table_name = 'yellow_streaming_{0}'.format(workshop_user)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### [Create S3 Bucket](https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html)\n", "\n", "We will create an S3 bucket that will be used throughout the workshop for storing our data.\n", "\n", "[s3.create_bucket](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.create_bucket) boto3 documentation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bucket = workshop.create_bucket(region, session, 'taxi-')\n", "print(bucket)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### [Create the Role for the Kinesis Firehose](https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html)\n", "\n", "When you're using an Amazon S3 destination, Kinesis Data Firehose delivers data to your S3 bucket and can optionally use an AWS KMS key that you own for data encryption. If error logging is enabled, Kinesis Data Firehose also sends data delivery errors to your CloudWatch log group and streams. You are required to have an IAM role when creating a delivery stream. 
Kinesis Data Firehose assumes that IAM role and gains access to the specified bucket, key, and CloudWatch log group and streams.\n", "\n", "[Kinesis Firehose access to S3 Destination](https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "role_doc = {\n", " \"Version\": \"2012-10-17\", \n", " \"Statement\": [\n", " {\"Sid\": \"\", \n", " \"Effect\": \"Allow\", \n", " \"Principal\": {\n", " \"Service\": \"firehose.amazonaws.com\"\n", " }, \n", " \"Action\": \"sts:AssumeRole\", \n", " \"Condition\": {\n", " \"StringEquals\": {\n", " \"sts:ExternalId\": account_id\n", " }\n", " }\n", " }]\n", " }\n", "\n", "inline_policy = {\n", " \"Version\": \"2012-10-17\",\n", " \"Statement\": [\n", " {\n", " \"Action\": [\n", " \"cloudwatch:*\",\n", " \"logs:*\"\n", " ],\n", " \"Resource\": [\n", " \"*\"\n", " ],\n", " \"Effect\": \"Allow\"\n", " },\n", " {\n", " \"Action\": [\n", " \"s3:*\"\n", " ],\n", " \"Resource\": [\n", " \"arn:aws:s3:::\" + bucket + \"/*\",\n", " \"arn:aws:s3:::\" + bucket\n", " ],\n", " \"Effect\": \"Allow\"\n", " }\n", " ]\n", " }\n", "\n", "role_arn = workshop.create_role(iam, firehose_role_name, json.dumps(role_doc), firehose_policy_name, json.dumps(inline_policy))\n", "print(role_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create the CloudWatch Log Group and Stream\n", "\n", "[Monitoring Kinesis Data Firehose](https://docs.aws.amazon.com/firehose/latest/dev/monitoring-with-cloudwatch-logs.html)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = logs.create_log_group(\n", " logGroupName=cloudwatch_logs_group_name,\n", ")\n", "\n", "print('CloudWatch Log Group status: ' + str(response['ResponseMetadata']['HTTPStatusCode']))\n", "\n", "response = logs.create_log_stream(\n", " logGroupName=cloudwatch_logs_group_name,\n", " logStreamName=cloudwatch_logs_stream_name\n", ")\n", "\n", "print('CloudWatch Log Stream status: ' + str(response['ResponseMetadata']['HTTPStatusCode']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### [Download Sample Data](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html) \n", "\n", "We will download a file from the New York City Taxi and Limousine Commission (TLC) Trip Record Data dataset available on the [AWS Open Data Registry](https://registry.opendata.aws/nyc-tlc-trip-records-pds/).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!aws s3 cp s3://nyc-tlc/trip\\ data/yellow_tripdata_2017-01.csv ." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### [Create the Kinesis Firehose we will use to send Taxi Trip Data to our Data Lake](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html)\n", "\n", "Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk. Kinesis Data Firehose is part of the Kinesis streaming data platform, along with Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics. With Kinesis Data Firehose, you don't need to write applications or manage resources. 
You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.\n", "\n", "In this example, we will create custom S3 prefixes for when the data lands in S3. This will allow us to precreate the partitions that will be cataloged in the Glue Data Catalog. To find more information follow this [link](https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html)\n", "\n", "[firehose.create_delivery_stream](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/firehose.html#Firehose.Client.create_delivery_stream)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = firehose.create_delivery_stream(\n", " DeliveryStreamName=delivery_stream_name,\n", " DeliveryStreamType='DirectPut',\n", " S3DestinationConfiguration={\n", " 'RoleARN': role_arn,\n", " 'BucketARN': 'arn:aws:s3:::' + bucket,\n", " 'Prefix': 'datalake/taxi/streaming/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/',\n", " 'ErrorOutputPrefix': \"datalake/taxi-error/streaming/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/!{firehose:error-output-type}\",\n", " 'BufferingHints': {\n", " 'SizeInMBs': 50,\n", " 'IntervalInSeconds': 60\n", " },\n", " 'CompressionFormat': 'GZIP',\n", " 'EncryptionConfiguration': {\n", " 'NoEncryptionConfig': 'NoEncryption'\n", " },\n", " 'CloudWatchLoggingOptions': {\n", " 'Enabled': True,\n", " 'LogGroupName': cloudwatch_logs_group_name,\n", " 'LogStreamName': cloudwatch_logs_stream_name\n", " }\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wait for the Kinesis Firehose to become 'Active'\n", "The Kinesis Firehose Delivery Stream is in the process of being created." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "\n", "response = firehose.describe_delivery_stream(\n", " DeliveryStreamName=delivery_stream_name\n", ")\n", "\n", "status = response['DeliveryStreamDescription']['DeliveryStreamStatus']\n", "print(status)\n", "\n", "while status == 'CREATING':\n", " time.sleep(30)\n", " response = firehose.describe_delivery_stream(\n", " DeliveryStreamName=delivery_stream_name\n", " )\n", " status = response['DeliveryStreamDescription']['DeliveryStreamStatus']\n", " print(status)\n", "\n", "print('Kinesis Firehose created.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Send simulated Taxi events to Kinesis Firehose\n", "\n", "The code below will generate taxi events from the file we downloaded and send them to the Kinesis Data Firehose. We will generate new times for the data to allow them to fit into the current time window. 
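Each row is converted to a JSON document and sent with `put_record`, and a newline character is appended to every record so the objects Firehose writes to S3 can later be read back line by line. As a minimal sketch of a single call (the field values here are hypothetical; the full loop is in the next cell):\n", "\n", "```python\n", "firehose.put_record(\n", "    DeliveryStreamName=delivery_stream_name,\n", "    Record={'Data': json.dumps({'vendorid': '1', 'trip_distance': '2.5'}) + '\\n'}\n", ")\n", "```\n", "\n", "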
To optimize writes to the firehose we could also leverage the [firehose.put_record_batch](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/firehose.html#Firehose.Client.put_record_batch) call as well.\n", "\n", "[firehose.put_record](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/firehose.html#Firehose.Client.put_record)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import csv\n", "import datetime\n", "import random\n", "\n", "f = open('yellow_tripdata_2017-01.csv')\n", "csv_f = csv.reader(f)\n", "first = True\n", "max_records = 1000\n", "cnt = 0;\n", "\n", "for row in csv_f: \n", " if first: # skip the header record for the csv file\n", " first = False\n", " continue\n", " \n", " if row:\n", " drop_off = datetime.datetime.now()\n", " pick_up = drop_off - datetime.timedelta(minutes=random.randint(5,60))\n", " fmt = \"%Y-%m-%d %H:%M:%S\"\n", "\n", " data_row = {\n", " 'vendorid': row[0],\n", " 'tpep_pickup_datetime': pick_up.strftime(fmt), # change to updated time\n", " 'tpep_dropoff_datetime': drop_off.strftime(fmt), # change to updated time\n", " 'passenger_count': row[3],\n", " 'trip_distance': row[4],\n", " 'ratecodeid': row[5],\n", " 'store_and_fwd_flag': row[6],\n", " 'pulocationid': row[7],\n", " 'dolocationid': row[8],\n", " 'payment_type': row[9],\n", " 'fare_amount': row[10],\n", " 'extra': row[11],\n", " 'mta_tax': row[12],\n", " 'tip_amount': row[13],\n", " 'tolls_amount': row[14],\n", " 'improvement_surcharge': row[15],\n", " 'total_amount': row[16]\n", " }\n", " \n", " res = firehose.put_record(\n", " DeliveryStreamName=delivery_stream_name, \n", " Record={\n", " 'Data': json.dumps(data_row) + '\\n' \n", " }\n", " ) \n", " time.sleep(0.1)\n", " cnt = cnt + 1\n", "\n", " if cnt >= max_records:\n", " break\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wait for Firehose to persist data to S3" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "time.sleep(60)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding out the current execution role of the Notebook\n", "We are using SageMaker Python SDK to retrieve the current role for this Notebook which needs to be enhanced to support the functionality in AWS Glue." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import SageMaker Python SDK to get the Session and execution_role\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "sess = sagemaker.Session()\n", "role = get_execution_role()\n", "role_name = role[role.rfind('/') + 1:]\n", "print(role_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding AWS Glue as an additional trusted entity to this role\n", "This step is needed if you want to pass the execution role of this Notebook while calling Glue APIs as well without creating an additional **Role**. If you have not used AWS Glue before, then this step is mandatory. \n", "\n", "If you have used AWS Glue previously, then you should have an already existing role that can be used to invoke Glue APIs. In that case, you can pass that role while calling Glue (later in this notebook) and skip this next step." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the IAM dashboard, please click on **Roles** on the left sidenav and search for this Role. Once the Role appears, click on the Role to go to its **Summary** page. 
Click on the **Trust relationships** tab on the **Summary** page to add AWS Glue as an additional trusted entity. \n", "\n", "Click on **Edit trust relationship** and replace the JSON with this JSON.\n", "```\n", "{\n", " \"Version\": \"2012-10-17\",\n", " \"Statement\": [\n", " {\n", " \"Effect\": \"Allow\",\n", " \"Principal\": {\n", " \"Service\": [\n", " \"sagemaker.amazonaws.com\",\n", " \"glue.amazonaws.com\"\n", " ]\n", " },\n", " \"Action\": \"sts:AssumeRole\"\n", " }\n", " ]\n", "}\n", "```\n", "Once this is complete, click on **Update Trust Policy** and you are done.\n", "\n", "![IAM Roles](../../docs/assets/images/iam_roles_hl.png)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"https://console.aws.amazon.com/iam/home?region={0}#/roles/{1}\".format(region, role_name))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create the [AWS Glue Catalog Database](https://docs.aws.amazon.com/glue/latest/dg/define-database.html)\n", "\n", "When you define a table in the AWS Glue Data Catalog, you add it to a database. A database is used to organize tables in AWS Glue. You can organize your tables using a crawler or using the AWS Glue console. A table can be in only one database at a time.\n", "\n", "There is a central Glue Catalog for each AWS account. When creating the database you will use your account id declared above as `account_id`\n", "\n", "[glue.create_database](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue.html#Glue.Client.create_database)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "workshop.create_db(glue, account_id, database_name, 'New York City Taxi and Limousine Commission (TLC) Trip Record Data')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### [Create the Streaming table in Glue](https://docs.aws.amazon.com/glue/latest/dg/tables-described.html)\n", "\n", "When you define a table in AWS Glue, you also specify the value of a classification field that indicates the type and format of the data that's stored in that table. If a crawler creates the table, these classifications are determined by either a built-in classifier or a custom classifier. If you create a table manually in the console or by using an API, you specify the classification when you define the table. For more information about creating a table using the AWS Glue console, see [Working with Tables on the AWS Glue Console](https://docs.aws.amazon.com/glue/latest/dg/console-tables.html).\n", "\n", "We also create the partitions that will be generated by using the Kinesis Firehose. 
You can find those values in the `PartitionKeys` section of the `create_table` call in Glue.\n", "\n", "[glue.create_table](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue.html#Glue.Client.create_table)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prefix = 'datalake/taxi/streaming/'\n", "location = 's3://{0}/{1}'.format(bucket,prefix)\n", "\n", "response = glue.create_table(\n", " CatalogId=account_id,\n", " DatabaseName=database_name,\n", " TableInput={\n", " 'Name': table_name,\n", " 'Description': 'Yellow Taxi Streaming dataset',\n", " 'StorageDescriptor': {\n", " 'Columns': [\n", " {\n", " 'Name': 'vendorid',\n", " 'Type': 'bigint'\n", " },\n", " {\n", " 'Name': 'tpep_pickup_datetime',\n", " 'Type': 'string'\n", " },\n", " {\n", " 'Name': 'tpep_dropoff_datetime',\n", " 'Type': 'string'\n", " },\n", " {\n", " 'Name': 'passenger_count',\n", " 'Type': 'bigint'\n", " },\n", " {\n", " 'Name': 'trip_distance',\n", " 'Type': 'double'\n", " },\n", " {\n", " 'Name': 'ratecodeid',\n", " 'Type': 'bigint'\n", " },\n", " {\n", " 'Name': 'store_and_fwd_flag',\n", " 'Type': 'string'\n", " },\n", " {\n", " 'Name': 'pulocationid',\n", " 'Type': 'bigint'\n", " },\n", " {\n", " 'Name': 'dolocationid',\n", " 'Type': 'bigint'\n", " },\n", " {\n", " 'Name': 'payment_type',\n", " 'Type': 'bigint'\n", " },\n", " {\n", " 'Name': 'fare_amount',\n", " 'Type': 'double'\n", " },\n", " {\n", " 'Name': 'extra',\n", " 'Type': 'double'\n", " },\n", " {\n", " 'Name': 'mta_tax',\n", " 'Type': 'double'\n", " },\n", " {\n", " 'Name': 'tip_amount',\n", " 'Type': 'double'\n", " },\n", " {\n", " 'Name': 'tolls_amount',\n", " 'Type': 'double'\n", " },\n", " {\n", " 'Name': 'improvement_surcharge',\n", " 'Type': 'double'\n", " },\n", " {\n", " 'Name': 'total_amount',\n", " 'Type': 'double'\n", " }\n", " ],\n", " 'Location': location,\n", " 'InputFormat': 'org.apache.hadoop.mapred.TextInputFormat',\n", " 'OutputFormat': 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat',\n", " 'SerdeInfo': {\n", " 'SerializationLibrary': 'org.openx.data.jsonserde.JsonSerDe',\n", " 'Parameters': {\n", " 'paths': 'vendorid,tpep_pickup_datetime,tpep_dropoff_datetime,passenger_count,trip_distance,ratecodeid,store_and_fwd_flag,pulocationid,dolocationid,payment_type,fare_amount,extra,mta_tax,tip_amount,tolls_amount,improvement_surcharge,total_amount'\n", " }\n", " },\n", " },\n", " 'PartitionKeys': [\n", " {\n", " 'Name': 'year',\n", " 'Type': 'bigint',\n", " },\n", " {\n", " 'Name': 'month',\n", " 'Type': 'bigint',\n", " },\n", " {\n", " 'Name': 'day',\n", " 'Type': 'bigint',\n", " },\n", " {\n", " 'Name': 'hour',\n", " 'Type': 'bigint',\n", " }\n", " ],\n", " 'TableType': 'EXTERNAL_TABLE',\n", " 'Parameters': {\n", " 'classification': 'json'\n", " }\n", " }\n", ")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('https://{0}.console.aws.amazon.com/glue/home?region={0}#database:name={1}'.format(region, database_name))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Update partitions for Glue Table\n", "\n", "When manually creating a table with partitions you need to notify the Glue Data Catalog of the partitions. 
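Each object the Firehose delivers already carries its partition values in the Hive-style prefixes of its key (`year=`, `month=`, `day=`, `hour=`), so the values to register can be derived by splitting the key apart, which is what the cell below does. As a rough illustration of that mapping (the object key shown here is made up):\n", "\n", "```python\n", "key = 'datalake/taxi/streaming/year=2019/month=07/day=04/hour=15/part-0000.gz'  # hypothetical key\n", "keys = key.split('/')[3:-1]               # ['year=2019', 'month=07', 'day=04', 'hour=15']\n", "values = [k.split('=')[1] for k in keys]  # ['2019', '07', '04', '15']\n", "```\n", "\n", "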
There are a number of ways to populate the partitions. One option is to use Athena and run `MSCK REPAIR TABLE table_name` to recover the partitions and the data associated with them, but in this workshop we will discover the partitions present in the S3 bucket and add them with the Glue Partition API.\n", "\n", "[glue.batch_create_partition](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue.html#Glue.Client.batch_create_partition)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = s3.list_objects_v2(\n", " Bucket=bucket,\n", " Prefix=prefix\n", ")\n", "\n", "# Load the table created above to get the StorageDescriptor def for columns, etc.\n", "streaming_table = glue.get_table(\n", " CatalogId=account_id,\n", " DatabaseName=database_name,\n", " Name=table_name\n", ")\n", "\n", "storage_descriptor = streaming_table['Table']['StorageDescriptor']\n", "\n", "# De-dupe partitions if there are any\n", "partitions = set()\n", "for obj in response['Contents']:\n", " # remove the first 3 prefixes for the datalake/taxi/streaming/ data lake location above and the last entry for the file\n", " keys = obj['Key'].split('/')[3:-1]\n", " # get the values of the prefixes in between. These are the year,month,day,hour values to be used for the partition\n", " values = [k.split('=')[1] for k in keys]\n", " storage_descriptor['Location'] = '{0}{1}'.format(location, '/'.join(keys))\n", " partitions.add(json.dumps({\"StorageDescriptor\": storage_descriptor, \"Values\": list(values)}))\n", "\n", "# batch add partitions from the Kinesis Firehose\n", "response = glue.batch_create_partition(\n", " CatalogId=account_id,\n", " DatabaseName=database_name,\n", " TableName=table_name,\n", " PartitionInputList=list(json.loads(part) for part in partitions)\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get the result of the partition load\n", "\n", "If you run the above commands multiple times or over partitions that have already been registered, you will see an error message `Partition already exists.` and a list of the partition values that already exist." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(response)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Query the Data Lake with Athena\n", "\n", "To query the table we just created, we will install a Python library for querying the data in the Glue Data Catalog with Athena. For more information, see [PyAthena](https://pypi.org/project/PyAthena/)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install PyAthena" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simple Select Query\n", "\n", "Notice that the partition values are returned as part of the table. You can use these values to minimize the number of objects scanned in the S3 location, helping to improve both the performance and cost of the Athena queries." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pyathena import connect\n", "from pyathena.util import as_pandas\n", "\n", "cursor = connect(region_name=region, s3_staging_dir='s3://'+bucket+'/athena/temp').cursor()\n", "cursor.execute('select * from ' + database_name + '.' 
+ table_name + ' limit 10')\n", "\n", "df = as_pandas(cursor)\n", "df.head(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Query with a where clause using partitions\n", "\n", "In this example, if we had more partitions, the query would benefit from the `year`, `month`, and `day` partitions in S3 when looking for specific data. Here we parse the variables from the partition we just added to Glue and use them in an aggregate query to return the records generated for the day." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path = json.loads(list(partitions)[0])['StorageDescriptor']['Location'].split('/')\n", "year = path[6].split('=')[1]\n", "month = path[7].split('=')[1]\n", "day = path[8].split('=')[1]\n", "hour = path[9].split('=')[1]\n", "\n", "print('{0}-{1}-{2} {3}'.format(year, month, day, hour))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "cursor.execute('''SELECT count(vendorid) as cnt FROM ''' + database_name + '''.''' + table_name + '''\n", "WHERE year = ''' + year + '''\n", "AND month = ''' + month + ''' \n", "AND day = ''' + day)\n", "\n", "df = as_pandas(cursor)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Clean Up\n", "\n", "That's it! In this notebook we pulled data from the Open Data Registry, streamed it into our data lake with a Kinesis Data Firehose, cataloged it by creating a new table and partitions in the Glue Data Catalog, and finally queried the results with Athena. Run the cells below to delete the resources created in this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = iam.delete_role_policy(\n", " RoleName=firehose_role_name,\n", " PolicyName=firehose_policy_name\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iam.delete_role(RoleName=firehose_role_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = logs.delete_log_group(\n", " logGroupName=cloudwatch_logs_group_name\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "firehose.delete_delivery_stream(\n", " DeliveryStreamName=delivery_stream_name\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = glue.delete_database(\n", " CatalogId = account_id,\n", " Name = database_name\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws s3 rb s3://$bucket --force " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }