{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Import Amazon Redshift tables as datasets in FinSpace\n", "This notebook shows how to connect to Amazon Redshift from FinSpace. It also demonstrated how to create a FinSpace dataset for every table in Amazon Redshift and populate it with an associated attribute set with the values necessary to retrieve tables from Redshift directly (catalog, schema, and table names). The attribute set contains 'Source - Redshift' sub-category. The sub-category helps with data discovery - analysis can browse for data from a specific source in FinSpace by going to a category under 'Source' in the navigation menu.\n", "\n", "## Prerequisites \n", "To follow the steps in this notebook, create a sub-category 'Redshift' under 'Source' category in FinSpace. After that, create an attribute set in FinSpace to hold metadata about the table's location in Redshift: catalog (database), schema, table name, and source. In this notebook, the attribute set's name is 'Redshift Table Attributes'.\n", "\n", "## Outline\n", "- Given a name of a database:\n", " - Get all tables in the database (exclude system tables)\n", " - For each table...\n", " - check if a table already exists in FinSpace\n", " - create a FinSpace dataset (tabular, give schema)\n", " - populate the Attribute Set (defined above) with the metadata about the table (catalog, schema, and table name)\n", " - associate the populated attribute set to the created dataset\n", "\n", "## References\n", "- [Spark SQL - JDBC To Other Databases](https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html)\n", "- [Redshift JDBC](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-download-driver.html)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Configure Connections and Environment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1.1. Connect to the Spark cluster" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%local\n", "from aws.finspace.cluster import FinSpaceClusterManager\n", "\n", "# if this was already run, no need to run again\n", "if 'finspace_clusters' not in globals():\n", " finspace_clusters = FinSpaceClusterManager()\n", " finspace_clusters.auto_connect()\n", "else:\n", " print(f'connected to cluster: {finspace_clusters.get_connected_cluster_id()}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1.2. Configure JDBC driver for Amazon Redshift\n", "\n", "Use [this page](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-download-driver.html) to get the latest JDBC driver for Redshift.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%configure -f\n", "{ \"conf\":{\n", " \"spark.jars\": \"https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/2.0.0.7/redshift-jdbc42-2.0.0.7.jar\"\n", " }\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1.3. Import Python Helper Classes and Utility Functions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1.3.1. 
Python Helper\n", "These are the FinSpace helper classes found in the FinSpace samples and examples github" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "jupyter": { "source_hidden": true } }, "outputs": [], "source": [ "# %load finspace.py\n", "import datetime\n", "import time\n", "import boto3\n", "import os\n", "import pandas as pd\n", "import urllib\n", "\n", "from urllib.parse import urlparse\n", "from botocore.config import Config\n", "from boto3.session import Session\n", "\n", "\n", "# Base FinSpace class\n", "class FinSpace:\n", "\n", " def __init__(\n", " self,\n", " config=Config(retries={'max_attempts': 3, 'mode': 'standard'}),\n", " boto_session: Session = None,\n", " dev_overrides: dict = None,\n", " service_name = 'finspace-data'):\n", " \"\"\"\n", " To configure this class object, simply instantiate with no-arg if hitting prod endpoint, or else override it:\n", " e.g.\n", " `hab = FinSpaceAnalyticsManager(region_name = 'us-east-1',\n", " dev_overrides = {'hfs_endpoint': 'https://39g32x40jk.execute-api.us-east-1.amazonaws.com/alpha'})`\n", " \"\"\"\n", " self.hfs_endpoint = None\n", " self.region_name = None\n", "\n", " if dev_overrides is not None:\n", " if 'hfs_endpoint' in dev_overrides:\n", " self.hfs_endpoint = dev_overrides['hfs_endpoint']\n", "\n", " if 'region_name' in dev_overrides:\n", " self.region_name = dev_overrides['region_name']\n", " else:\n", " if boto_session is not None:\n", " self.region_name = boto_session.region_name\n", " else:\n", " self.region_name = self.get_region_name()\n", "\n", " self.config = config\n", "\n", " self._boto3_session = boto3.session.Session(region_name=self.region_name) if boto_session is None else boto_session\n", "\n", " print(f\"service_name: {service_name}\")\n", " print(f\"endpoint: {self.hfs_endpoint}\")\n", " print(f\"region_name: {self.region_name}\")\n", "\n", " self.client = self._boto3_session.client(service_name, endpoint_url=self.hfs_endpoint, config=self.config)\n", "\n", " @staticmethod\n", " def get_region_name():\n", " req = urllib.request.Request(\"http://169.254.169.254/latest/meta-data/placement/region\")\n", " with urllib.request.urlopen(req) as response:\n", " return response.read().decode(\"utf-8\")\n", "\n", " # --------------------------------------\n", " # Utility Functions\n", " # --------------------------------------\n", " @staticmethod\n", " def get_list(all_list: dir, name: str):\n", " \"\"\"\n", " Search for name found in the all_list dir and return that list of things.\n", " Removes repetitive code found in functions that call boto apis then search for the expected returned items\n", "\n", " :param all_list: list of things to search\n", " :type: dir:\n", "\n", " :param name: name to search for in all_lists\n", " :type: str\n", "\n", " :return: list of items found in name\n", " \"\"\"\n", " r = []\n", "\n", " # is the given name found, is found, add to list\n", " if name in all_list:\n", " for s in all_list[name]:\n", " r.append(s)\n", "\n", " # return the list\n", " return r\n", "\n", " # --------------------------------------\n", " # Classification Functions\n", " # --------------------------------------\n", "\n", " def list_classifications(self):\n", " \"\"\"\n", " Return list of all classifications\n", "\n", " :return: all classifications\n", " \"\"\"\n", " all_list = self.client.list_classifications(sort='NAME')\n", "\n", " return self.get_list(all_list, 'classifications')\n", "\n", " def classification_names(self):\n", " \"\"\"\n", " Get the classifications names\n", "\n", 
" :return list of classifications names only\n", " \"\"\"\n", " classification_names = []\n", " all_classifications = self.list_classifications()\n", " for c in all_classifications:\n", " classification_names.append(c['name'])\n", " return classification_names\n", "\n", " def classification(self, name: str):\n", " \"\"\"\n", " Exact name search for a classification of the given name\n", "\n", " :param name: name of the classification to find\n", " :type: str\n", "\n", " :return\n", " \"\"\"\n", "\n", " all_classifications = self.list_classifications()\n", " existing_classification = next((c for c in all_classifications if c['name'].lower() == name.lower()), None)\n", " if existing_classification:\n", " return existing_classification\n", "\n", " def describe_classification(self, classification_id: str):\n", " \"\"\"\n", " Calls the describe classification API function and only returns the taxonomy portion of the response.\n", "\n", " :param classification_id: the GUID of the classification to get description of\n", " :type: str\n", " \"\"\"\n", " resp = None\n", " taxonomy_details_resp = self.client.describe_taxonomy(taxonomyId=classification_id)\n", "\n", " if 'taxonomy' in taxonomy_details_resp:\n", " resp = taxonomy_details_resp['taxonomy']\n", "\n", " return (resp)\n", "\n", " def create_classification(self, classification_definition):\n", " resp = self.client.create_taxonomy(taxonomyDefinition=classification_definition)\n", "\n", " taxonomy_id = resp[\"taxonomyId\"]\n", "\n", " return (taxonomy_id)\n", "\n", " def delete_classification(self, classification_id):\n", " resp = self.client.delete_taxonomy(taxonomyId=classification_id)\n", "\n", " if resp['ResponseMetadata']['HTTPStatusCode'] != 200:\n", " return resp\n", "\n", " return True\n", "\n", " # --------------------------------------\n", " # Attribute Set Functions\n", " # --------------------------------------\n", "\n", " def list_attribute_sets(self):\n", " \"\"\"\n", " Get list of all dataset_types in the system\n", "\n", " :return: list of dataset types\n", " \"\"\"\n", " resp = self.client.list_dataset_types()\n", " results = resp['datasetTypeSummaries']\n", "\n", " while \"nextToken\" in resp:\n", " resp = self.client.list_dataset_types(nextToken=resp['nextToken'])\n", " results.extend(resp['datasetTypeSummaries'])\n", "\n", " return (results)\n", "\n", " def attribute_set_names(self):\n", " \"\"\"\n", " Get the list of all dataset type names\n", "\n", " :return list of all dataset type names\n", " \"\"\"\n", "\n", " dataset_type_names = []\n", " all_dataset_types = self.list_dataset_types()\n", " for c in all_dataset_types:\n", " dataset_type_names.append(c['name'])\n", " return dataset_type_names\n", "\n", " def attribute_set(self, name: str):\n", " \"\"\"\n", " Exact name search for a dataset type of the given name\n", "\n", " :param name: name of the dataset type to find\n", " :type: str\n", "\n", " :return\n", " \"\"\"\n", "\n", " all_dataset_types = self.list_dataset_types()\n", " existing_dataset_type = next((c for c in all_dataset_types if c['name'].lower() == name.lower()), None)\n", " if existing_dataset_type:\n", " return existing_dataset_type\n", "\n", " def describe_attribute_set(self, attribute_set_id: str):\n", " \"\"\"\n", " Calls the describe dataset type API function and only returns the dataset type portion of the response.\n", "\n", " :param attribute_set_id: the GUID of the dataset type to get description of\n", " :type: str\n", " \"\"\"\n", " resp = None\n", " dataset_type_details_resp = 
self.client.describe_dataset_type(datasetTypeId=attribute_set_id)\n", "\n", " if 'datasetType' in dataset_type_details_resp:\n", " resp = dataset_type_details_resp['datasetType']\n", "\n", " return (resp)\n", "\n", " def create_attribute_set(self, attribute_set_def):\n", " resp = self.client.create_dataset_type(datasetTypeDefinition=attribute_set_def)\n", "\n", " att_id = resp[\"datasetTypeId\"]\n", "\n", " return (att_id)\n", "\n", " def delete_attribute_set(self, attribute_set_id: str):\n", " resp = self.client.delete_attribute_set(attributeSetId=attribute_set_id)\n", "\n", " if resp['ResponseMetadata']['HTTPStatusCode'] != 200:\n", " return resp\n", "\n", " return True\n", "\n", " def associate_attribute_set(self, att_name: str, att_values: list, dataset_id: str):\n", " fncs = ['dissociate_dataset_from_attribute_set', 'associate_dataset_with_attribute_set', 'update_dataset_attribute_set_context']\n", " if self.check_functions(fncs) is False:\n", " raise Exception(f\"not all functions found in client {fncs}\")\n", "\n", " # get the attribute set by name, will need its id\n", " att_set = self.attribute_set(att_name)\n", "\n", " # get the dataset's information, will need the arn\n", " dataset = self.describe_dataset_details(dataset_id=dataset_id)\n", "\n", " # disassociate any existing relationship\n", " try:\n", " self.client.dissociate_dataset_from_attribute_set(datasetArn=dataset['arn'],\n", " datasetTypeId=att_set['id'])\n", " except:\n", " print(\"Nothing to disassociate\")\n", "\n", " arn = dataset['arn']\n", " dataset_type_id = att_set['id']\n", "\n", " self.client.associate_dataset_with_attribute_set(datasetId=dataset_id, datasetArn=arn, attributeSetId=dataset_type_id)\n", "\n", " resp = self.client.update_dataset_attribute_set_context(datasetId=dataset_id, datasetArn=arn, attributeSetId=dataset_type_id, values=att_values)\n", "\n", " if resp['ResponseMetadata']['HTTPStatusCode'] != 200:\n", " return resp\n", "\n", " return True\n", "\n", " # --------------------------------------\n", " # Permission Group Functions\n", " # --------------------------------------\n", "\n", " def list_permission_groups(self, max_results: int):\n", " all_perms = self.client.list_permission_groups(MaxResults=max_results)\n", " return (self.get_list(all_perms, 'permissionGroups'))\n", "\n", " def permission_group(self, name):\n", " all_groups = self.list_permission_groups(max_results = 100)\n", "\n", " existing_group = next((c for c in all_groups if c['name'].lower() == name.lower()), None)\n", "\n", " if existing_group:\n", " return existing_group\n", "\n", " def describe_permission_group(self, permission_group_id: str):\n", " resp = None\n", "\n", " perm_resp = self.client.describe_permission_group(permissionGroupId=permission_group_id)\n", "\n", " if 'permissionGroup' in perm_resp:\n", " resp = perm_resp['permissionGroup']\n", "\n", " return (resp)\n", "\n", " # --------------------------------------\n", " # Dataset Functions\n", " # --------------------------------------\n", "\n", " def describe_dataset_details(self, dataset_id: str):\n", " \"\"\"\n", " Calls the describe dataset details API function and only returns the dataset details portion of the response.\n", "\n", " :param dataset_id: the GUID of the dataset to get description of\n", " :type: str\n", " \"\"\"\n", " resp = None\n", " dataset_details_resp = self.client.describe_dataset_details(datasetId=dataset_id)\n", "\n", " if 'dataset' in dataset_details_resp:\n", " resp = dataset_details_resp[\"dataset\"]\n", "\n", " return (resp)\n", "\n", 
" def create_dataset(self, name: str, description: str, permission_group_id: str, dataset_permissions: [], kind: str,\n", " owner_info, schema):\n", " \"\"\"\n", " Create a dataset\n", "\n", " Warning, dataset names are not unique, be sure to check for the same name dataset before creating a new one\n", "\n", " :param name: Name of the dataset\n", " :type: str\n", "\n", " :param description: Description of the dataset\n", " :type: str\n", "\n", " :param permission_group_id: permission group for the dataset\n", " :type: str\n", "\n", " :param dataset_permissions: permissions for the group on the dataset\n", "\n", " :param kind: Kind of dataset, choices: TABULAR\n", " :type: str\n", "\n", " :param owner_info: owner information for the dataset\n", "\n", " :param schema: Schema of the dataset\n", "\n", " :return: the dataset_id of the created dataset\n", " \"\"\"\n", "\n", " if dataset_permissions:\n", " request_dataset_permissions = [{\"permission\": permissionName} for permissionName in dataset_permissions]\n", " else:\n", " request_dataset_permissions = []\n", "\n", " response = self.client.create_dataset(name=name,\n", " permissionGroupId = permission_group_id,\n", " datasetPermissions = request_dataset_permissions,\n", " kind=kind,\n", " description = description.replace('\\n', ' '),\n", " ownerInfo = owner_info,\n", " schema = schema)\n", "\n", " return response[\"datasetId\"]\n", "\n", " def ingest_from_s3(self,\n", " s3_location: str,\n", " dataset_id: str,\n", " change_type: str,\n", " wait_for_completion: bool = True,\n", " format_type: str = \"CSV\",\n", " format_params: dict = {'separator': ',', 'withHeader': 'true'}):\n", " \"\"\"\n", " Creates a changeset and ingests the data given in the S3 location into the changeset\n", "\n", " :param s3_location: the source location of the data for the changeset, will be copied into the changeset\n", " :stype: str\n", "\n", " :param dataset_id: the identifier of the containing dataset for the changeset to be created for this data\n", " :type: str\n", "\n", " :param change_type: What is the kind of changetype? 
\"APPEND\", \"REPLACE\" are the choices\n", " :type: str\n", "\n", " :param wait_for_completion: Boolean, should the function wait for the operation to complete?\n", " :type: str\n", "\n", " :param format_type: format type, CSV, PARQUET, XML, JSON\n", " :type: str\n", "\n", " :param format_params: dictionary of format parameters\n", " :type: dict\n", "\n", " :return: the id of the changeset created\n", " \"\"\"\n", " create_changeset_response = self.client.create_changeset(\n", " datasetId=dataset_id,\n", " changeType=change_type,\n", " sourceType='S3',\n", " sourceParams={'s3SourcePath': s3_location},\n", " formatType=format_type.upper(),\n", " formatParams=format_params\n", " )\n", "\n", " changeset_id = create_changeset_response['changeset']['id']\n", "\n", " if wait_for_completion:\n", " self.wait_for_ingestion(dataset_id, changeset_id)\n", " return changeset_id\n", "\n", " def describe_changeset(self, dataset_id: str, changeset_id: str):\n", " \"\"\"\n", " Function to get a description of the the givn changeset for the given dataset\n", "\n", " :param dataset_id: identifier of the dataset\n", " :type: str\n", "\n", " :param changeset_id: the idenfitier of the changeset\n", " :type: str\n", "\n", " :return: all information about the changeset, if found\n", " \"\"\"\n", " describe_changeset_resp = self.client.describe_changeset(datasetId=dataset_id, id=changeset_id)\n", "\n", " return describe_changeset_resp['changeset']\n", "\n", " def create_as_of_view(self, dataset_id: str, as_of_date: datetime, destination_type: str,\n", " partition_columns: list = [], sort_columns: list = [], destination_properties: dict = {},\n", " wait_for_completion: bool = True):\n", " \"\"\"\n", " Creates an 'as of' static view up to and including the requested 'as of' date provided.\n", "\n", " :param dataset_id: identifier of the dataset\n", " :type: str\n", "\n", " :param as_of_date: as of date, will include changesets up to this date/time in the view\n", " :type: datetime\n", "\n", " :param destination_type: destination type\n", " :type: str\n", "\n", " :param partition_columns: columns to partition the data by for the created view\n", " :type: list\n", "\n", " :param sort_columns: column to sort the view by\n", " :type: list\n", "\n", " :param destination_properties: destination properties\n", " :type: dict\n", "\n", " :param wait_for_completion: should the function wait for the system to create the view?\n", " :type: bool\n", "\n", " :return str: GUID of the created view if successful\n", "\n", " \"\"\"\n", " create_materialized_view_resp = self.client.create_materialized_snapshot(\n", " datasetId=dataset_id,\n", " asOfTimestamp=as_of_date,\n", " destinationType=destination_type,\n", " partitionColumns=partition_columns,\n", " sortColumns=sort_columns,\n", " autoUpdate=False,\n", " destinationProperties=destination_properties\n", " )\n", " view_id = create_materialized_view_resp['id']\n", " if wait_for_completion:\n", " self.wait_for_view(dataset_id=dataset_id, view_id=view_id)\n", " return view_id\n", "\n", " def create_auto_update_view(self, dataset_id: str, destination_type: str,\n", " partition_columns=[], sort_columns=[], destination_properties={},\n", " wait_for_completion=True):\n", " \"\"\"\n", " Creates an auto-updating view of the given dataset\n", "\n", " :param dataset_id: identifier of the dataset\n", " :type: str\n", "\n", " :param destination_type: destination type\n", " :type: str\n", "\n", " :param partition_columns: columns to partition the data by for the created view\n", " :type: 
list\n", "\n", " :param sort_columns: column to sort the view by\n", " :type: list\n", "\n", " :param destination_properties: destination properties\n", " :type: str\n", "\n", " :param wait_for_completion: should the function wait for the system to create the view?\n", " :type: bool\n", "\n", " :return str: GUID of the created view if successful\n", "\n", " \"\"\"\n", " create_materialized_view_resp = self.client.create_materialized_snapshot(\n", " datasetId=dataset_id,\n", " destinationType=destination_type,\n", " partitionColumns=partition_columns,\n", " sortColumns=sort_columns,\n", " autoUpdate=True,\n", " destinationProperties=destination_properties\n", " )\n", " view_id = create_materialized_view_resp['id']\n", " if wait_for_completion:\n", " self.wait_for_view(dataset_id=dataset_id, view_id=view_id)\n", " return view_id\n", "\n", " def wait_for_ingestion(self, dataset_id: str, changeset_id: str, sleep_sec=10):\n", " \"\"\"\n", " function that will continuously poll the changeset creation to ensure it completes or fails before returning.\n", "\n", " :param dataset_id: GUID of the dataset\n", " :type: str\n", "\n", " :param changeset_id: GUID of the changeset\n", " :type: str\n", "\n", " :param sleep_sec: seconds to wait between checks\n", " :type: int\n", "\n", " \"\"\"\n", " while True:\n", " status = self.describe_changeset(dataset_id=dataset_id, changeset_id=changeset_id)['status']\n", " if status == 'SUCCESS':\n", " print(f\"Changeset complete\")\n", " break\n", " elif status == 'PENDING' or status == 'RUNNING':\n", " print(f\"Changeset status is still PENDING, waiting {sleep_sec} sec ...\")\n", " time.sleep(sleep_sec)\n", " continue\n", " else:\n", " raise Exception(f\"Bad changeset status: {status}, failing now.\")\n", "\n", " def wait_for_view(self, dataset_id: str, view_id: str, sleep_sec=10):\n", " \"\"\"\n", " function that will continuously poll the view creation to ensure it completes or fails before returning.\n", "\n", " :param dataset_id: GUID of the dataset\n", " :type: str\n", "\n", " :param view_id: GUID of the view\n", " :type: str\n", "\n", " :param sleep_sec: seconds to wait between checks\n", " :type: int\n", "\n", " \"\"\"\n", " while True:\n", " list_views_resp = self.client.list_materialization_snapshots(datasetId=dataset_id, maxResults=100)\n", " matched_views = list(filter(lambda d: d['id'] == view_id, list_views_resp['materializationSnapshots']))\n", "\n", " if len(matched_views) != 1:\n", " size = len(matched_views)\n", " raise Exception(f\"Unexpected error: found {size} views that match the view Id: {view_id}\")\n", "\n", " status = matched_views[0]['status']\n", " if status == 'SUCCESS':\n", " print(f\"View complete\")\n", " break\n", " elif status == 'PENDING' or status == 'RUNNING':\n", " print(f\"View status is still PENDING, continue to wait till finish...\")\n", " time.sleep(sleep_sec)\n", " continue\n", " else:\n", " raise Exception(f\"Bad view status: {status}, failing now.\")\n", "\n", " def list_changesets(self, dataset_id: str):\n", " resp = self.client.list_changesets(datasetId=dataset_id, sortKey='CREATE_TIMESTAMP')\n", " results = resp['changesets']\n", "\n", " while \"nextToken\" in resp:\n", " resp = self.client.list_changesets(datasetId=dataset_id, sortKey='CREATE_TIMESTAMP',\n", " nextToken=resp['nextToken'])\n", " results.extend(resp['changesets'])\n", "\n", " return (results)\n", "\n", " def list_views(self, dataset_id: str, max_results=50):\n", " resp = self.client.list_materialization_snapshots(datasetId=dataset_id, 
maxResults=max_results)\n", " results = resp['materializationSnapshots']\n", "\n", " while \"nextToken\" in resp:\n", " resp = self.client.list_materialization_snapshots(datasetId=dataset_id, maxResults=max_results,\n", " nextToken=resp['nextToken'])\n", " results.extend(resp['materializationSnapshots'])\n", "\n", " return (results)\n", "\n", " def list_datasets(self, max_results: int):\n", " all_datasets = self.client.list_datasets(maxResults=max_results)\n", " return (self.get_list(all_datasets, 'datasets'))\n", "\n", " def list_dataset_types(self):\n", " resp = self.client.list_dataset_types(sort='NAME')\n", " results = resp['datasetTypeSummaries']\n", "\n", " while \"nextToken\" in resp:\n", " resp = self.client.list_dataset_types(sort='NAME', nextToken=resp['nextToken'])\n", " results.extend(resp['datasetTypeSummaries'])\n", "\n", " return (results)\n", "\n", " @staticmethod\n", " def get_execution_role():\n", " \"\"\"\n", " Convenience function from SageMaker to get the execution role of the user of the sagemaker studio notebook\n", "\n", " :return: the ARN of the execution role in the sagemaker studio notebook\n", " \"\"\"\n", " import sagemaker as sm\n", "\n", " e_role = sm.get_execution_role()\n", " return (f\"{e_role}\")\n", "\n", " def get_user_ingestion_info(self):\n", " return (self.client.get_user_ingestion_info())\n", "\n", " def upload_pandas(self, data_frame: pd.DataFrame):\n", " import awswrangler as wr\n", " resp = self.client.get_working_location(locationType='INGESTION')\n", " upload_location = resp['s3Uri']\n", " wr.s3.to_parquet(data_frame, f\"{upload_location}data.parquet\", index=False, boto3_session=self._boto3_session)\n", " return upload_location\n", "\n", " def ingest_pandas(self, data_frame: pd.DataFrame, dataset_id: str, change_type: str, wait_for_completion=True):\n", " print(\"Uploading the pandas dataframe ...\")\n", " upload_location = self.upload_pandas(data_frame)\n", "\n", " print(\"Data upload finished. Ingesting data ...\")\n", " return self.ingest_from_s3(upload_location, dataset_id, change_type, wait_for_completion, format_type='PARQUET')\n", "\n", " def read_view_as_pandas(self, dataset_id: str, view_id: str):\n", " \"\"\"\n", " Returns a pandas dataframe the view of the given dataset. Views in FinSpace can be quite large, be careful!\n", "\n", " :param dataset_id:\n", " :param view_id:\n", "\n", " :return: Pandas dataframe with all data of the view\n", " \"\"\"\n", " import awswrangler as wr # use awswrangler to read the table\n", "\n", " # @todo: switch to DescribeMateriliazation when available in HFS\n", " views = self.list_views(dataset_id=dataset_id, max_results=50)\n", " filtered = [v for v in views if v['id'] == view_id]\n", "\n", " if len(filtered) == 0:\n", " raise Exception('No such view found')\n", " if len(filtered) > 1:\n", " raise Exception('Internal Server error')\n", " view = filtered[0]\n", "\n", " # 0. Ensure view is ready to be read\n", " if (view['status'] != 'SUCCESS'):\n", " status = view['status']\n", " print(f'view run status is not ready: {status}. 
Returning empty.')\n", " return\n", "\n", " glue_db_name = view['destinationTypeProperties']['databaseName']\n", " glue_table_name = view['destinationTypeProperties']['tableName']\n", "\n", " # determine if the table has partitions first, different way to read is there are partitions\n", " p = wr.catalog.get_partitions(table=glue_table_name, database=glue_db_name, boto3_session=self._boto3_session)\n", "\n", " def no_filter(partitions):\n", " if len(partitions.keys()) > 0:\n", " return True\n", "\n", " return False\n", "\n", " df = None\n", "\n", " if len(p) == 0:\n", " df = wr.s3.read_parquet_table(table=glue_table_name, database=glue_db_name,\n", " boto3_session=self._boto3_session)\n", " else:\n", " spath = wr.catalog.get_table_location(table=glue_table_name, database=glue_db_name,\n", " boto3_session=self._boto3_session)\n", " cpath = wr.s3.list_directories(f\"{spath}/*\", boto3_session=self._boto3_session)\n", "\n", " read_path = f\"{spath}/\"\n", "\n", " # just one? Read it\n", " if len(cpath) == 1:\n", " read_path = cpath[0]\n", "\n", " df = wr.s3.read_parquet(read_path, dataset=True, partition_filter=no_filter,\n", " boto3_session=self._boto3_session)\n", "\n", " # Query Glue table directly with wrangler\n", " return df\n", "\n", " @staticmethod\n", " def get_schema_from_pandas(df: pd.DataFrame):\n", " \"\"\"\n", " Returns the FinSpace schema columns from the given pandas dataframe.\n", "\n", " :param df: pandas dataframe to interrogate for the schema\n", "\n", " :return: FinSpace column schema list\n", " \"\"\"\n", "\n", " # for translation to FinSpace's schema\n", " # 'STRING'|'CHAR'|'INTEGER'|'TINYINT'|'SMALLINT'|'BIGINT'|'FLOAT'|'DOUBLE'|'DATE'|'DATETIME'|'BOOLEAN'|'BINARY'\n", " DoubleType = \"DOUBLE\"\n", " FloatType = \"FLOAT\"\n", " DateType = \"DATE\"\n", " StringType = \"STRING\"\n", " IntegerType = \"INTEGER\"\n", " LongType = \"BIGINT\"\n", " BooleanType = \"BOOLEAN\"\n", " TimestampType = \"DATETIME\"\n", "\n", " hab_columns = []\n", "\n", " for name in dict(df.dtypes):\n", " p_type = df.dtypes[name]\n", "\n", " switcher = {\n", " \"float64\": DoubleType,\n", " \"int64\": IntegerType,\n", " \"datetime64[ns, UTC]\": TimestampType,\n", " \"datetime64[ns]\": DateType\n", " }\n", "\n", " habType = switcher.get(str(p_type), StringType)\n", "\n", " hab_columns.append({\n", " \"dataType\": habType,\n", " \"name\": name,\n", " \"description\": \"\"\n", " })\n", "\n", " return (hab_columns)\n", "\n", " @staticmethod\n", " def get_date_cols(df: pd.DataFrame):\n", " \"\"\"\n", " Returns which are the data columns found in the pandas dataframe.\n", " Pandas does the hard work to figure out which of the columns can be considered to be date columns.\n", "\n", " :param df: pandas dataframe to interrogate for the schema\n", "\n", " :return: list of column names that can be parsed as dates by pandas\n", "\n", " \"\"\"\n", " date_cols = []\n", "\n", " for name in dict(df.dtypes):\n", "\n", " p_type = df.dtypes[name]\n", "\n", " if str(p_type).startswith(\"date\"):\n", " date_cols.append(name)\n", "\n", " return (date_cols)\n", "\n", " def get_best_schema_from_csv(self, path, is_s3=True, read_rows=500, sep=','):\n", " \"\"\"\n", " Uses multiple reads of the file with pandas to determine schema of the referenced files.\n", " Files are expected to be csv.\n", "\n", " :param path: path to the files to read\n", " :type: str\n", "\n", " :param is_s3: True if the path is s3; False if filesystem\n", " :type: bool\n", "\n", " :param read_rows: number of rows to sample for determining schema\n", 
"\n", " :param sep:\n", "\n", " :return dict: schema for FinSpace\n", " \"\"\"\n", " #\n", " # best efforts to determine the schema, sight unseen\n", " import awswrangler as wr\n", "\n", " # 1: get the base schema\n", " df1 = None\n", "\n", " if is_s3:\n", " df1 = wr.s3.read_csv(path, nrows=read_rows, sep=sep)\n", " else:\n", " df1 = pd.read_csv(path, nrows=read_rows, sep=sep)\n", "\n", " num_cols = len(df1.columns)\n", "\n", " # with number of columns, try to infer dates\n", " df2 = None\n", "\n", " if is_s3:\n", " df2 = wr.s3.read_csv(path, parse_dates=list(range(0, num_cols)), infer_datetime_format=True,\n", " nrows=read_rows, sep=sep)\n", " else:\n", " df2 = pd.read_csv(path, parse_dates=list(range(0, num_cols)), infer_datetime_format=True, nrows=read_rows,\n", " sep=sep)\n", "\n", " date_cols = self.get_date_cols(df2)\n", "\n", " # with dates known, parse the file fully\n", " df = None\n", "\n", " if is_s3:\n", " df = wr.s3.read_csv(path, parse_dates=date_cols, infer_datetime_format=True, nrows=read_rows, sep=sep)\n", " else:\n", " df = pd.read_csv(path, parse_dates=date_cols, infer_datetime_format=True, nrows=read_rows, sep=sep)\n", "\n", " schema_cols = self.get_schema_from_pandas(df)\n", "\n", " return (schema_cols)\n", "\n", " def s3_upload_file(self, source_file: str, s3_destination: str):\n", " \"\"\"\n", " Uploads a local file (full path) to the s3 destination given (expected form: s3:////).\n", " The filename will have spaces replaced with _.\n", "\n", " :param source_file: path of file to upload\n", " :param s3_destination: full path to where to save the file\n", " :type: str\n", "\n", " \"\"\"\n", " hab_s3_client = self._boto3_session.client(service_name='s3')\n", "\n", " o = urlparse(s3_destination)\n", " bucket = o.netloc\n", " prefix = o.path.lstrip('/')\n", "\n", " fname = os.path.basename(source_file)\n", "\n", " hab_s3_client.upload_file(source_file, bucket, f\"{prefix}{fname.replace(' ', '_')}\")\n", "\n", " def list_objects(self, s3_location: str):\n", " \"\"\"\n", " lists the objects found at the s3_location. Strips out the boto API response header,\n", " just returns the contents of the location. 
Internally uses the list_objects_v2.\n", "\n", " :param s3_location: path, starting with s3:// to get the list of objects from\n", " :type: str\n", "\n", " \"\"\"\n", " o = urlparse(s3_location)\n", " bucket = o.netloc\n", " prefix = o.path.lstrip('/')\n", "\n", " results = []\n", "\n", " hab_s3_client = self._boto3_session.client(service_name='s3')\n", "\n", " paginator = hab_s3_client.get_paginator('list_objects_v2')\n", " pages = paginator.paginate(Bucket=bucket, Prefix=prefix)\n", "\n", " for page in pages:\n", " if 'Contents' in page:\n", " results.extend(page['Contents'])\n", "\n", " return (results)\n", "\n", " def list_clusters(self, status: str = None):\n", " \"\"\"\n", " Lists current clusters and their statuses\n", "\n", " :param status: status to filter for\n", "\n", " :return dict: list of clusters\n", " \"\"\"\n", "\n", " resp = self.client.list_clusters()\n", "\n", " clusters = []\n", "\n", " if 'clusters' not in resp:\n", " return (clusters)\n", "\n", " for c in resp['clusters']:\n", " if status is None:\n", " clusters.append(c)\n", " else:\n", " if c['clusterStatus']['state'] in status:\n", " clusters.append(c)\n", "\n", " return (clusters)\n", "\n", " def get_cluster(self, cluster_id):\n", " \"\"\"\n", " Resize the given cluster to desired template\n", "\n", " :param cluster_id: cluster id\n", " \"\"\"\n", "\n", " clusters = self.list_clusters()\n", "\n", " for c in clusters:\n", " if c['clusterId'] == cluster_id:\n", " return (c)\n", "\n", " return (None)\n", "\n", " def update_cluster(self, cluster_id: str, template: str):\n", " \"\"\"\n", " Resize the given cluster to desired template\n", "\n", " :param cluster_id: cluster id\n", " :param template: target template to resize to\n", " \"\"\"\n", "\n", " cluster = self.get_cluster(cluster_id=cluster_id)\n", "\n", " if cluster['currentTemplate'] == template:\n", " print(f\"Already using template: {template}\")\n", " return (cluster)\n", "\n", " self.client.update_cluster(clusterId=cluster_id, template=template)\n", "\n", " return (self.get_cluster(cluster_id=cluster_id))\n", "\n", " def wait_for_status(self, clusterId: str, status: str, sleep_sec=10, max_wait_sec=900):\n", " \"\"\"\n", " Function polls service until cluster is in desired status.\n", "\n", " :param clusterId: the cluster's ID\n", " :param status: desired status for clsuter to reach\n", " :\n", " \"\"\"\n", " total_wait = 0\n", "\n", " while True and total_wait < max_wait_sec:\n", " resp = self.client.list_clusters()\n", "\n", " this_cluster = None\n", "\n", " # is this the cluster?\n", " for c in resp['clusters']:\n", " if clusterId == c['clusterId']:\n", " this_cluster = c\n", "\n", " if this_cluster is None:\n", " print(f\"clusterId:{clusterId} not found\")\n", " return (None)\n", "\n", " this_status = this_cluster['clusterStatus']['state']\n", "\n", " if this_status.upper() != status.upper():\n", " print(f\"Cluster status is {this_status}, waiting {sleep_sec} sec ...\")\n", " time.sleep(sleep_sec)\n", " total_wait = total_wait + sleep_sec\n", " continue\n", " else:\n", " return (this_cluster)\n", "\n", " def get_working_location(self, locationType='SAGEMAKER'):\n", " resp = None\n", " location = self.client.get_working_location(locationType=locationType)\n", "\n", " if 's3Uri' in location:\n", " resp = location['s3Uri']\n", "\n", " return (resp)\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "jupyter": { "source_hidden": true } }, "outputs": [], "source": [ "# %load finspace_spark.py\n", "import datetime\n", "import time\n", "import 
boto3\n", "from botocore.config import Config\n", "\n", "# FinSpace class with Spark bindings\n", "\n", "class SparkFinSpace(FinSpace):\n", " import pyspark\n", " def __init__(\n", " self, \n", " spark: pyspark.sql.session.SparkSession = None,\n", " config = Config(retries = {'max_attempts': 0, 'mode': 'standard'}),\n", " dev_overrides: dict = None\n", " ):\n", " FinSpace.__init__(self, config=config, dev_overrides=dev_overrides)\n", " self.spark = spark # used on Spark cluster for reading views, creating changesets from DataFrames\n", " \n", " def upload_dataframe(self, data_frame: pyspark.sql.dataframe.DataFrame):\n", " resp = self.client.get_user_ingestion_info()\n", " upload_location = resp['ingestionPath']\n", "# data_frame.write.option('header', 'true').csv(upload_location)\n", " data_frame.write.parquet(upload_location)\n", " return upload_location\n", " \n", " def ingest_dataframe(self, data_frame: pyspark.sql.dataframe.DataFrame, dataset_id: str, change_type: str, wait_for_completion=True):\n", " print(\"Uploading data...\")\n", " upload_location = self.upload_dataframe(data_frame)\n", " \n", " print(\"Data upload finished. Ingesting data...\")\n", " \n", " return self.ingest_from_s3(upload_location, dataset_id, change_type, wait_for_completion, format_type='parquet', format_params={})\n", " \n", " def read_view_as_spark(\n", " self,\n", " dataset_id: str,\n", " view_id: str\n", " ):\n", " # TODO: switch to DescribeMatz when available in HFS\n", " views = self.list_views(dataset_id=dataset_id, max_results=50)\n", " filtered = [v for v in views if v['id'] == view_id]\n", "\n", " if len(filtered) == 0:\n", " raise Exception('No such view found')\n", " if len(filtered) > 1:\n", " raise Exception('Internal Server error')\n", " view = filtered[0]\n", " \n", " # 0. Ensure view is ready to be read\n", " if (view['status'] != 'SUCCESS'): \n", " status = view['status'] \n", " print(f'view run status is not ready: {status}. 
Returning empty.')\n", " return\n", "\n", " glue_db_name = view['destinationTypeProperties']['databaseName']\n", " glue_table_name = view['destinationTypeProperties']['tableName']\n", " \n", " # Query Glue table directly with catalog function of spark\n", " return self.spark.table(f\"`{glue_db_name}`.`{glue_table_name}`\")\n", " \n", " def get_schema_from_spark(self, data_frame: pyspark.sql.dataframe.DataFrame):\n", " from pyspark.sql.types import StructType\n", "\n", " # for translation to FinSpace's schema\n", " # 'STRING'|'CHAR'|'INTEGER'|'TINYINT'|'SMALLINT'|'BIGINT'|'FLOAT'|'DOUBLE'|'DATE'|'DATETIME'|'BOOLEAN'|'BINARY'\n", " DoubleType = \"DOUBLE\"\n", " FloatType = \"FLOAT\"\n", " DateType = \"DATE\"\n", " StringType = \"STRING\"\n", " IntegerType = \"INTEGER\"\n", " LongType = \"BIGINT\"\n", " BooleanType = \"BOOLEAN\"\n", " TimestampType = \"DATETIME\"\n", " \n", " hab_columns = []\n", "\n", " items = [i for i in data_frame.schema] \n", "\n", " switcher = {\n", " \"BinaryType\" : StringType,\n", " \"BooleanType\" : BooleanType,\n", " \"ByteType\" : IntegerType,\n", " \"DateType\" : DateType,\n", " \"DoubleType\" : FloatType,\n", " \"IntegerType\" : IntegerType,\n", " \"LongType\" : IntegerType,\n", " \"NullType\" : StringType,\n", " \"ShortType\" : IntegerType,\n", " \"StringType\" : StringType,\n", " \"TimestampType\" : TimestampType,\n", " }\n", "\n", " \n", " for i in items:\n", "# print( f\"name: {i.name} type: {i.dataType}\" )\n", "\n", " habType = switcher.get( str(i.dataType), StringType)\n", "\n", " hab_columns.append({\n", " \"dataType\" : habType, \n", " \"name\" : i.name,\n", " \"description\" : \"\"\n", " })\n", "\n", " return( hab_columns )\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# initialize the FinSpace helper\n", "finspace = SparkFinSpace(spark=spark)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1.3.2. Add Utility Functions for dataset import\n", "These functions help translate Redshift data to FinSpace schema." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true } }, "outputs": [], "source": [ "#\n", "# Utility functions\n", "#------------------------------------------------------------\n", "\n", "def get_field_by_name(f_list, title, name = 'name'):\n", " for f in f_list:\n", " if f['title'] == title:\n", " return f[name]\n", " return None\n", "\n", "def get_finspace_schema(table_schema_pdf): \n", "\n", " DoubleType = \"DOUBLE\"\n", " FloatType = \"FLOAT\"\n", " DateType = \"DATE\"\n", " StringType = \"STRING\"\n", " IntegerType = \"INTEGER\"\n", " LongType = \"BIGINT\"\n", " BooleanType = \"BOOLEAN\"\n", " TimestampType = \"DATETIME\"\n", "\n", " switcher = {\n", " \"DATE\" : DateType,\n", " \"FLOAT\": FloatType,\n", " \"NUMBER\": DoubleType,\n", " \"TEXT\": StringType\n", " }\n", "\n", " columns = []\n", "\n", " for index, row in table_schema_pdf.iterrows():\n", " name = row['COLUMN_NAME']\n", " description = row['COMMENT']\n", " data_type = row['DATA_TYPE']\n", "\n", " if description is None: description = ''\n", " \n", " habType = switcher.get(str(data_type), StringType)\n", "\n", " columns.append( {'dataType': habType, 'name': name, 'description': description} )\n", "\n", "\n", " schema = {\n", " 'primaryKeyColumns': [],\n", " 'columns' : columns\n", " }\n", "\n", " return schema\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. 
Import Tables from an Amazon Redshift database as FinSpace datasets\n", "Get a list of tables, columns, and associated comments from Amazon Redshift. Add metadata for each table into a FinSpace attribute set. This attribute set is then associated with the FinSpace dataset. Each FinSpace dataset represents a table in the given Redshift database (with the exception of information_schema and system tables). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.1. Set up parameters\n", "Replace the parameters below with the connection information for Amazon Redshift (cluster, port, user, password). Also specify the ID of the FinSpace permission group that should get access to the datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Amazon Redshift database name\n", "dbName = '' ### <------ REPLACE WITH YOURS ###\n", "\n", "# Attribute Set name in FinSpace \n", "att_name = 'Redshift Table Attributes'\n", "\n", "# User Group to grant access to the dataset\n", "group_id = ''### <------ REPLACE WITH YOURS ###\n", "\n", "# Source name\n", "source_name = 'Redshift'\n", "\n", "# Redshift connection details\n", "dbCluster = '' ### <------ REPLACE WITH YOURS ###\n", "dbPort = '' ### <------ REPLACE WITH YOURS ###\n", "dbUser = ''### <------ REPLACE WITH YOURS ###\n", "dbPassword = ''### <------ REPLACE WITH YOURS ###" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2. Get tables and associated metadata from Amazon Redshift" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SQL queries to get the list of tables and columns with associated comments\n", "tablesQuery = f\"select tb.table_catalog, tb.table_schema, tb.table_name, ad.description as comment from {dbName}.information_schema.tables tb left join (select * from pg_catalog.pg_description mt left join (select cl.oid as table_id, cl.relname as table_name, cl.relnamespace as schema_id, np.nspname as schema_name from pg_class as cl left join pg_catalog.pg_namespace as np on np.oid = cl.relnamespace) et on mt.objoid = et.table_id where mt.objsubid = 0) ad on tb.table_name = ad.table_name and tb.table_schema = ad.schema_name where tb.table_schema <> 'information_schema' and tb.table_schema not like 'pg_%'\"\n", "\n", "schemaQuery = f\"select tb.*, ad.description as comment from {dbName}.information_schema.columns tb left join (select * from pg_catalog.pg_description mt left join (select cl.oid as table_id, cl.relname as table_name, cl.relnamespace as schema_id, np.nspname as schema_name from pg_class as cl left join pg_catalog.pg_namespace as np on np.oid = cl.relnamespace) et on mt.objoid = et.table_id) ad on tb.table_name = ad.table_name and tb.table_schema = ad.schema_name and tb.ordinal_position = ad.objsubid where tb.table_schema <> 'information_schema' and tb.table_schema not like 'pg_%'\"\n", "\n", "# Redshift connection URL\n", "urlStr = f\"jdbc:redshift://{dbCluster}:{dbPort}/{dbName}?user={dbUser}&password={dbPassword}\"\n", "\n", "# Get all the tables in the database (excluding information_schema and system tables)\n", "tablesDF = spark.read \\\n", " .format(\"jdbc\") \\\n", " .option(\"driver\", \"com.amazon.redshift.jdbc42.Driver\") \\\n", " .option(\"url\", urlStr) \\\n", " .option(\"query\", tablesQuery) \\\n", " .load()\n", "\n", "tablesDF.show(5)\n", "\n", "\n", "# Get all columns in the database\n", "schemaDF = spark.read \\\n", " .format(\"jdbc\") \\\n", " .option(\"driver\", \"com.amazon.redshift.jdbc42.Driver\") \\\n", " .option(\"url\", urlStr) \\\n", " .option(\"query\", schemaQuery) \\\n", " .load() " ] },
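{ "cell_type": "markdown", "metadata": {}, "source": [ "Optional check (a sketch added for illustration, not part of the original flow): the schema and table names returned above are all that is needed to read a table straight from Redshift over the same JDBC connection. The cell below assumes `tablesDF` returned at least one row and reuses `urlStr` to load the first listed table with the JDBC `dbtable` option." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: read the first table listed above directly from Redshift\n", "# using its schema and table name (positional iloc avoids column-name casing)\n", "samplePDF = tablesDF.select('TABLE_SCHEMA', 'TABLE_NAME').limit(1).toPandas()\n", "sample_schema = samplePDF.iloc[0, 0]\n", "sample_table = samplePDF.iloc[0, 1]\n", "\n", "sampleDF = spark.read \\\n", " .format(\"jdbc\") \\\n", " .option(\"driver\", \"com.amazon.redshift.jdbc42.Driver\") \\\n", " .option(\"url\", urlStr) \\\n", " .option(\"dbtable\", f\"{sample_schema}.{sample_table}\") \\\n", " .load()\n", "\n", "sampleDF.show(5)" ] },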
.option(\"query\", schemaQuery) \\\n", " .load() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3. Get the Attribute Set\n", "Retrieve Redshift attribute set by name. We will use its identifiers for populating the attribute set for asociation to the datasets. We need the exact IDs of the fields, not just their names." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get the attribute set\n", "sfAttrSet = finspace.attribute_set(att_name)\n", "\n", "att_def = None\n", "att_fields = None\n", "\n", "# Get the fields of the attribute set\n", "att_resp = finspace.describe_attribute_set(sfAttrSet['id'])\n", "\n", "if 'definition' in att_resp: \n", " att_def = att_resp['definition']\n", " \n", "if 'fields' in att_def:\n", " att_fields = att_def['fields']\n", " \n", "print(att_fields)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.4. Get Amazon Redshift Source\n", "\n", "One of the fields in the Redshift attribute set identifies the Source as Redshift. We need to extract classification values from FinSpace and then use them to populate the Redshift data attribtue set and associate it to the FinSpace datasets which we will create in the next steps." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get the key for \"Redshift\" sub-category in the classification 'Source'\n", "source_cls = finspace.classification('Source')\n", "\n", "source_fields = finspace.describe_classification(source_cls['id'])\n", "source_key = None\n", "\n", "for n in source_fields['definition']['nodes']:\n", " if n['fields']['name'] == source_name: \n", " source_key = n['key']\n", "\n", "# this is the key for source in the Category\n", "print(f'Source: {source_name} Key: {source_key}')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# all the tables into a pandas dataframe to then iterate on\n", "tablesDF.select('TABLE_CATALOG', 'TABLE_SCHEMA', 'TABLE_NAME', 'COMMENT').show(10, False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.5. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 2.5. Get a list of datasets in FinSpace to avoid duplicates" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get all the datasets from Redshift (classification type Source, with values ‘Redshift’)\n", "resp = finspace.client.list_dataset_metadata_by_taxonomy_node(taxonomyId=source_cls['id'], taxonomyNodeKey=source_key)\n", "\n", "# Get a list of datasets to iterate over\n", "datasets = resp['datasetMetadataSummaries']\n", "\n", "\n", "# Build the lookup table for existing datasets from Redshift to avoid creating duplicates\n", "types_list = []\n", "\n", "for s in datasets:\n", "\n", " # end of the arn is the dataset ID\n", " dataset_id = os.path.basename(s['datasetArn'])\n", "\n", " # get the details of the dataset (name, description, etc)\n", " dataset_details_resp = finspace.client.describe_dataset_details(datasetId=dataset_id)\n", "\n", " dataset_details = None\n", " dataset_types = None\n", " owner_info = None\n", " taxonomy_info = None\n", "\n", " if 'dataset' in dataset_details_resp:\n", " dataset_details = dataset_details_resp[\"dataset\"]\n", "\n", " if 'datasetTypeContexts' in dataset_details_resp:\n", " dataset_types = dataset_details_resp[\"datasetTypeContexts\"]\n", "\n", " if 'ownerinfo' in dataset_details_resp:\n", " owner_info = dataset_details_resp[\"ownerinfo\"]\n", "\n", " if 'taxonomyNodesinfo' in dataset_details_resp:\n", " taxonomy_info = dataset_details_resp[\"taxonomyNodesinfo\"]\n", " \n", " # Pull Redshift attribute set from the list of dataset_types\n", "\n", " # first check the definition, then extract the values against the definition\n", " # have the keys of values/labels as the column header?\n", " for dt in dataset_types:\n", " if (dt['definition']['name'] != att_name):\n", " continue\n", "\n", " dd = {\n", " 'dataset_id' : dataset_id\n", " }\n", "\n", " # used to map the field name (id) to the title seen in the UI\n", " field_map = {}\n", "\n", " # get the field titles for name\n", " for f in dt['definition']['fields']:\n", " field_map[f['name']] = f['title']\n", "\n", " # human readable, else the keys would be numbers\n", " for v in dt['values']:\n", " dd[field_map[v['field']]] = v['values']\n", "\n", " types_list.append(dd)\n", "\n", "types_pdf = pd.DataFrame(types_list)\n", "print(types_pdf)" ] },
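{ "cell_type": "markdown", "metadata": {}, "source": [ "Optional summary (added for illustration, not part of the original flow): show which catalog/schema/table combinations already exist so it is clear what the next step will skip. The column titles assume the attribute set fields described in the prerequisites." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# An empty frame means every table found above will be imported in the next step\n", "if types_pdf.empty:\n", " print('No existing Redshift-sourced datasets found')\n", "else:\n", " print(types_pdf[['Catalog', 'Schema', 'Table']])" ] },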
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 2.6. For each table in Amazon Redshift, create a dataset in FinSpace and attach an Attribute Set" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Permissions to grant the above group for the created dataset\n", "basicPermissions = [\n", " \"ViewDatasetDetails\",\n", " \"ReadDatasetData\",\n", " \"AddDatasetData\",\n", " \"CreateSnapshot\",\n", " \"EditDatasetMetadata\",\n", " \"ManageDatasetPermissions\",\n", " \"DeleteDataset\"\n", "]\n", "\n", "# Owner information applied to all created datasets\n", "basicOwnerInfo = {\n", " \"phoneNumber\" : \"12125551000\",\n", " \"email\" : \"jdoe@amazon.com\",\n", " \"name\" : \"Jane Doe\"\n", "}\n", "\n", "# Add the tables into a pandas dataframe to then iterate on\n", "tablesPDF = tablesDF.select('TABLE_CATALOG', 'TABLE_SCHEMA', 'TABLE_NAME', 'COMMENT').toPandas()\n", "\n", "c = 0\n", "# create=False\n", "create=True\n", "\n", "\n", "# For each table, create a dataset with the necessary attribute set populated and associated with the dataset\n", "for index, row in tablesPDF.iterrows():\n", " \n", " c = c + 1\n", " \n", " catalog = row.TABLE_CATALOG\n", " schema = row.TABLE_SCHEMA\n", " table = row.TABLE_NAME\n", " \n", " # do we already have this dataset?\n", " exist_i = None\n", " for ee_i, ee in types_pdf.iterrows():\n", " if catalog in ee.Catalog:\n", " if schema in ee.Schema:\n", " if table in ee.Table:\n", " exist_i = ee_i\n", "\n", " if exist_i is not None:\n", " print(f\"Table {table} already exists in FinSpace\")\n", " continue\n", "\n", " # Attributes and their populated values\n", " att_values = [\n", " { 'field' : get_field_by_name(att_fields, 'Catalog'), 'type' : get_field_by_name(att_fields, 'Catalog', 'type')['name'], 'values' : [ catalog ] },\n", " { 'field' : get_field_by_name(att_fields, 'Schema'), 'type' : get_field_by_name(att_fields, 'Schema', 'type')['name'], 'values' : [ schema ] },\n", " { 'field' : get_field_by_name(att_fields, 'Table'), 'type' : get_field_by_name(att_fields, 'Table', 'type')['name'], 'values' : [ table ] },\n", " { 'field' : get_field_by_name(att_fields, 'Source'), 'type' : get_field_by_name(att_fields, 'Source', 'type')['name'], 'values' : [ source_key ] },\n", " ]\n", "\n", " # get this table's schema from Redshift\n", " tableSchemaPDF = schemaDF.filter(schemaDF.table_name == table).filter(schemaDF.table_schema == schema).select('ORDINAL_POSITION', 'COLUMN_NAME', 'IS_NULLABLE', 'DATA_TYPE', 'COMMENT').orderBy('ORDINAL_POSITION').toPandas()\n", "\n", " print(tableSchemaPDF)\n", " # translate Redshift schema to FinSpace Schema\n", " fs_schema = get_finspace_schema(tableSchemaPDF)\n", "\n", " # name and description of the dataset to create\n", " name = f'{table}'\n", " description = f'Redshift table from catalog: {catalog}'\n", " \n", " if row.COMMENT is not None:\n", " description = row.COMMENT\n", " \n", " print(f'name: {name}')\n", " print(f'description: {description}')\n", "\n", " print(\"att_values:\")\n", " for i in att_values:\n", " print(i)\n", "\n", " print(\"schema:\")\n", " for i in fs_schema['columns']:\n", " print(i)\n", " \n", " if (create):\n", " # create the dataset\n", " dataset_id = finspace.create_dataset(\n", " name = name,\n", " description = description,\n", " permission_group_id = group_id,\n", " dataset_permissions = basicPermissions,\n", " kind = \"TABULAR\",\n", " owner_info = basicOwnerInfo,\n", " schema = fs_schema\n", " )\n", "\n", " print(f'Created, dataset_id: {dataset_id}')\n", "\n", " time.sleep(20)\n", "\n", " # associate the attributes with the dataset\n", " if (att_name is not None and att_values is not None):\n", " print(f\"Associating values to attribute set: {att_name}\")\n", " finspace.associate_attribute_set(att_name=att_name, att_values=att_values, dataset_id=dataset_id) " ] },
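{ "cell_type": "markdown", "metadata": {}, "source": [ "Optional verification (added for illustration, not part of the original flow): re-query FinSpace for datasets registered under the 'Redshift' source node to confirm that the datasets created above are discoverable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List dataset IDs tagged with the Redshift source key; the datasets created\n", "# in the loop above should appear here\n", "verify_resp = finspace.client.list_dataset_metadata_by_taxonomy_node(taxonomyId=source_cls['id'], taxonomyNodeKey=source_key)\n", "verify_datasets = verify_resp['datasetMetadataSummaries']\n", "\n", "print(f\"Datasets under the '{source_name}' source node: {len(verify_datasets)}\")\n", "\n", "for s in verify_datasets:\n", " print(os.path.basename(s['datasetArn']))" ] },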
dataset\n", " if (att_name is not None and att_values is not None):\n", " print(f\"Associating values to attribute set: {att_name}\")\n", " finspace.associate_attribute_set(att_name=att_name, att_values=att_values, dataset_id=dataset_id) " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "instance_type": "ml.t3.medium", "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" }, "kernelspec": { "display_name": "FinSpace PySpark (finspace-sparkmagic-84084/latest)", "language": "python", "name": "pysparkkernel__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:489461498020:image/finspace-sparkmagic-84084" }, "language_info": { "codemirror_mode": { "name": "python", "version": 3 }, "mimetype": "text/x-python", "name": "pyspark", "pygments_lexer": "python3" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 5 }