{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[](https://github.com/aws/aws-sdk-pandas)\n",
"\n",
"# 11 - CSV Datasets\n",
"\n",
"awswrangler has three different write modes for storing CSV datasets on Amazon S3.\n",
"\n",
"- **append** (Default)\n",
"\n",
" Only adds new files, without deleting any existing ones.\n",
" \n",
"- **overwrite**\n",
"\n",
" Deletes everything in the target directory and then adds new files.\n",
" \n",
"- **overwrite_partitions** (Partition Upsert)\n",
"\n",
" Only deletes the paths of the partitions that should be updated and then writes the new partition files. It's like a \"partition upsert\"."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from datetime import date\n",
"import awswrangler as wr\n",
"import pandas as pd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enter your bucket name:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
" ············\n"
]
}
],
"source": [
"import getpass\n",
"bucket = getpass.getpass()\n",
"path = f\"s3://{bucket}/dataset/\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Checking/Creating Glue Catalog Databases"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"if \"awswrangler_test\" not in wr.catalog.databases().values:\n",
" wr.catalog.create_database(\"awswrangler_test\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating the Dataset"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" id value date\n",
"0 1 foo 2020-01-01\n",
"1 2 boo 2020-01-02"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df = pd.DataFrame({\n",
" \"id\": [1, 2],\n",
" \"value\": [\"foo\", \"boo\"],\n",
" \"date\": [date(2020, 1, 1), date(2020, 1, 2)]\n",
"})\n",
"\n",
"wr.s3.to_csv(\n",
" df=df,\n",
" path=path,\n",
" index=False,\n",
" dataset=True,\n",
" mode=\"overwrite\",\n",
" database=\"awswrangler_test\",\n",
" table=\"csv_dataset\"\n",
")\n",
"\n",
"wr.athena.read_sql_table(database=\"awswrangler_test\", table=\"csv_dataset\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Appending"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" id value date\n",
"0 3 bar 2020-01-03\n",
"1 1 foo 2020-01-01\n",
"2 2 boo 2020-01-02"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df = pd.DataFrame({\n",
" \"id\": [3],\n",
" \"value\": [\"bar\"],\n",
" \"date\": [date(2020, 1, 3)]\n",
"})\n",
"\n",
"wr.s3.to_csv(\n",
" df=df,\n",
" path=path,\n",
" index=False,\n",
" dataset=True,\n",
" mode=\"append\",\n",
" database=\"awswrangler_test\",\n",
" table=\"csv_dataset\"\n",
")\n",
"\n",
"wr.athena.read_sql_table(database=\"awswrangler_test\", table=\"csv_dataset\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overwriting"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" id value date\n",
"0 3 bar 2020-01-03"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"wr.s3.to_csv(\n",
" df=df,\n",
" path=path,\n",
" index=False,\n",
" dataset=True,\n",
" mode=\"overwrite\",\n",
" database=\"awswrangler_test\",\n",
" table=\"csv_dataset\"\n",
")\n",
"\n",
"wr.athena.read_sql_table(database=\"awswrangler_test\", table=\"csv_dataset\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a **Partitioned** Dataset"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" id value date\n",
"0 2 boo 2020-01-02\n",
"1 1 foo 2020-01-01"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df = pd.DataFrame({\n",
" \"id\": [1, 2],\n",
" \"value\": [\"foo\", \"boo\"],\n",
" \"date\": [date(2020, 1, 1), date(2020, 1, 2)]\n",
"})\n",
"\n",
"wr.s3.to_csv(\n",
" df=df,\n",
" path=path,\n",
" index=False,\n",
" dataset=True,\n",
" mode=\"overwrite\",\n",
" database=\"awswrangler_test\",\n",
" table=\"csv_dataset\",\n",
" partition_cols=[\"date\"]\n",
")\n",
"\n",
"wr.athena.read_sql_table(database=\"awswrangler_test\", table=\"csv_dataset\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upserting partitions (overwrite_partitions)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" id value date\n",
"0 1 foo 2020-01-01\n",
"1 2 xoo 2020-01-02\n",
"0 3 bar 2020-01-03"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df = pd.DataFrame({\n",
" \"id\": [2, 3],\n",
" \"value\": [\"xoo\", \"bar\"],\n",
" \"date\": [date(2020, 1, 2), date(2020, 1, 3)]\n",
"})\n",
"\n",
"wr.s3.to_csv(\n",
" df=df,\n",
" path=path,\n",
" index=False,\n",
" dataset=True,\n",
" mode=\"overwrite_partitions\",\n",
" database=\"awswrangler_test\",\n",
" table=\"csv_dataset\",\n",
" partition_cols=[\"date\"]\n",
")\n",
"\n",
"wr.athena.read_sql_table(database=\"awswrangler_test\", table=\"csv_dataset\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## BONUS - Glue/Athena integration"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" id value date\n",
"0 1 foo 2020-01-01\n",
"1 2 boo 2020-01-02"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df = pd.DataFrame({\n",
" \"id\": [1, 2],\n",
" \"value\": [\"foo\", \"boo\"],\n",
" \"date\": [date(2020, 1, 1), date(2020, 1, 2)]\n",
"})\n",
"\n",
"wr.s3.to_csv(\n",
" df=df,\n",
" path=path,\n",
" dataset=True,\n",
" index=False,\n",
" mode=\"overwrite\",\n",
" database=\"aws_sdk_pandas\",\n",
" table=\"my_table\",\n",
" compression=\"gzip\"\n",
")\n",
"\n",
"wr.athena.read_sql_query(\"SELECT * FROM my_table\", database=\"aws_sdk_pandas\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.14",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.14"
},
"pycharm": {
"stem_cell": {
"cell_type": "raw",
"metadata": {
"collapsed": false
},
"source": []
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}