{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "a29714f1",
"metadata": {},
"outputs": [],
"source": [
"# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n",
"# SPDX-License-Identifier: MIT-0"
]
},
{
"cell_type": "markdown",
"id": "736ff485",
"metadata": {},
"source": [
"# Decrease geospatial query latency from minutes to seconds using Zarr on Amazon S3\n",
"\n",
"***\n",
"\n",
"This notebook provides sample code to convert NetCF files to Zarr format."
]
},
{
"cell_type": "markdown",
"id": "f018a4ab",
"metadata": {},
"source": [
"## Table of contents \n",
"\n",
"***\n",
"\n",
"- [Before you begin](#1)\n",
"- [Load libraries](#2)\n",
"- [Connect to Dask cluster](#3)\n",
"- [Scale out Dask cluster](#3a)\n",
"- [Explore MERRA-2 dataset](#3z)\n",
"- [rechunker: Batch convert NetCDF files to Zarr](#4)\n",
" - [Open the NetCDF dataset](#4a)\n",
" - [Query the NetCDF dataset](#4aa)\n",
" - [Define Zarr chunking strategy](#4b)\n",
" - [Set up Zarr store](#4c)\n",
" - [Batch convert NetCDF to Zarr](#4d)\n",
" - [Query the Zarr store](#4e)\n",
"- [xarray: Append NetCDF files to Zarr store](#5)\n",
"- [Scale in Dask cluster](#7)"
]
},
{
"cell_type": "markdown",
"id": "90fe3979",
"metadata": {},
"source": [
"## Before you begin [TOC](#top)\n",
"\n",
"\n",
"***\n",
"\n",
"Before using this notebook, you will need: \n",
"\n",
"- a Dask cluster.\n",
"- a Jupyter kernel with the required libraries installed.\n",
"\n",
"If you have not done so, follow the instructions in the blog post to deploy the Dask infrastructure and create the Juypter kernel."
]
},
{
"cell_type": "markdown",
"id": "a2a19218",
"metadata": {},
"source": [
"Paste in the name of the S3 bucket that was created by the CDK deployment (it will start with \"converttozarrstack\"). The notebook will write the converted Zarr store to this bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a6d92948",
"metadata": {},
"outputs": [],
"source": [
"S3_BUCKET_ZARR = \"\""
]
},
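{
"cell_type": "markdown",
"id": "9c0d1e2f",
"metadata": {},
"source": [
"Optionally, run the cell below to fail fast if the bucket name above was left empty. This is a minimal sanity check, not part of the conversion workflow."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d1e2f3a",
"metadata": {},
"outputs": [],
"source": [
"# Fail fast if the bucket name has not been filled in above\n",
"assert S3_BUCKET_ZARR, \"Set S3_BUCKET_ZARR to the bucket created by the CDK deployment\""
]
},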
{
"cell_type": "markdown",
"id": "f65b7aae",
"metadata": {},
"source": [
"## Load libraries [TOC](#top)\n",
"\n",
"***"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f5e5e650",
"metadata": {},
"outputs": [],
"source": [
"import xarray as xr\n",
"import hvplot.xarray\n",
"import dask\n",
"from dask.distributed import Client\n",
"import zarr\n",
"import s3fs\n",
"from rechunker import rechunk"
]
},
{
"cell_type": "markdown",
"id": "aa48a944",
"metadata": {},
"source": [
"## Connect to Dask cluster [TOC](#top)\n",
"\n",
"***"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0f99b865",
"metadata": {},
"outputs": [],
"source": [
"DASK_SCHEDULER_URL = \"Dask-Scheduler.local-dask:8786\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3763c63",
"metadata": {},
"outputs": [],
"source": [
"client = Client(DASK_SCHEDULER_URL)"
]
},
{
"cell_type": "markdown",
"id": "d80ccd56",
"metadata": {},
"source": [
"The notebook is now a client of the Dask scheduler at the URL provided. You can retrieve more information about the scheduler by clicking on Scheduler Info in the HTML representation below. Once Scheduler Info is opened, you can click on Workers to see the list of workers."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "060f6cfb",
"metadata": {},
"outputs": [],
"source": [
"client"
]
},
{
"cell_type": "markdown",
"id": "f002a9e3",
"metadata": {},
"source": [
"## Scale out Dask cluster [TOC](#top)\n",
"\n",
"***\n",
"\n",
"The CDK template creates a single Dask worker. The code below increases the number of Dask workers to 30. While the cell returns immediately, wait 2-3 minutes for the process to complete. (Estimated times given in the rest of this notebook assume 30 dask workers.) "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f20f18c9",
"metadata": {},
"outputs": [],
"source": [
"NUM_DASK_WORKERS = 30\n",
"\n",
"DASK_CLUSTER_NAME = \"Dask-Cluster\"\n",
"DASK_CLUSTER_SERVICE = \"Dask-Worker\"\n",
"\n",
"!aws ecs update-service --service {DASK_CLUSTER_SERVICE} --desired-count {NUM_DASK_WORKERS} --cluster {DASK_CLUSTER_NAME} "
]
},
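{
"cell_type": "markdown",
"id": "1a2b3c4d",
"metadata": {},
"source": [
"Optionally, rather than waiting a fixed amount of time, you can block until the workers have registered with the scheduler. The cell below is a minimal sketch using the Dask ```Client.wait_for_workers``` method; the 600-second timeout is an assumption you can adjust."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b3c4d5e",
"metadata": {},
"outputs": [],
"source": [
"# Optional: block until the requested number of workers have joined the scheduler.\n",
"# The 600-second timeout is an assumption; adjust it for your environment.\n",
"client.wait_for_workers(n_workers=NUM_DASK_WORKERS, timeout=600)"
]
},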
{
"cell_type": "markdown",
"id": "9e8b8a8d",
"metadata": {},
"source": [
"## Explore MERRA-2 Dataset [TOC](#top)\n",
"\n",
"***\n",
"\n",
"To demonstrate converting files from NetCDF to Zarr, we'll use the [NASA Prediction of Worldwide Energy Resources (NASA POWER)](https://registry.opendata.aws/nasa-power/) MERRA-2 dataset. The MERRA-2 (UTC) dataset contains hourly observations of 49 variables. "
]
},
{
"cell_type": "markdown",
"id": "a48f42ab",
"metadata": {},
"source": [
"The cell below shows the data files available in the NASA POWER dataset for January of 2021. The data is organized in subfolders labelled by year and month number, and within each month, there is one file per dataset per day. We'll use the ```merra2_utc``` files which are rougly 180 MiB each day."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3ded5f9b",
"metadata": {},
"outputs": [],
"source": [
"!aws s3 ls --no-sign-request s3://power-datastore/v9/hourly/2021/01/ --human-readable"
]
},
{
"cell_type": "markdown",
"id": "479bddcc",
"metadata": {},
"source": [
"## Rechunker: Batch Convert NetCDF Files to Zarr [TOC](#top)\n",
"\n",
"***"
]
},
{
"cell_type": "markdown",
"id": "31500570",
"metadata": {},
"source": [
"### Open the NetCDF dataset [TOC](#top)"
]
},
{
"cell_type": "markdown",
"id": "41f1dd78",
"metadata": {},
"source": [
"First, let's build a list of two months of daily MERRA-2 NetCDF files on Amazon S3 for January and February 2021 (59 files)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d508eac",
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime as dt\n",
"from datetime import timedelta as td\n",
"\n",
"def make_day_range(start, end):\n",
" start_dt = dt.strptime(start, \"%Y%m%d\")\n",
" end_dt = dt.strptime(end, \"%Y%m%d\")\n",
" delta = end_dt - start_dt\n",
" r = [ start_dt + td(days=i) for i in range(delta.days + 1) ]\n",
" return r"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e44940ef",
"metadata": {},
"outputs": [],
"source": [
"start = '20210101'\n",
"end = '20210228'\n",
"\n",
"nc_files = ['s3://power-datastore/v9/hourly/{}/{}/power_901_hourly_{}_merra2_utc.nc'.format(\n",
" d.strftime('%Y'), \n",
" d.strftime('%m'),\n",
" d.strftime('%Y%m%d')) for d in make_day_range(start, end)]\n",
"nc_files[0:5]"
]
},
{
"cell_type": "markdown",
"id": "8e1a0f4d",
"metadata": {},
"source": [
"Next, define a helper function that wraps opening files on S3 in a ```dask.delayed``` decorator. Doing this allows the Dask scheduler to open the files in parallel on the Dask cluster. To create the dataset, xarray will need to open every file."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "382974cb",
"metadata": {},
"outputs": [],
"source": [
"s3 = s3fs.S3FileSystem(anon=True, default_fill_cache=False)\n",
"\n",
"@dask.delayed\n",
"def s3open(path):\n",
" return s3.open(path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38229b26",
"metadata": {},
"outputs": [],
"source": [
"nc_files_map = [s3open(f) for f in nc_files]"
]
},
{
"cell_type": "markdown",
"id": "63581daa",
"metadata": {},
"source": [
"We pass the files map to xarray's ```open_mfdataset``` function, telling it to open the files in parallel. The ```chunks``` parameter tells xarray to use Dask arrays and the chunking structure of the underlying files to load the data in memory. Opening the dataset should take several minutes. \n",
"\n",
"If you have enabled the Dask dashboard, you can track the progress on the Dask cluster."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f311b17",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_nc = xr.open_mfdataset(nc_files_map, \n",
" engine='h5netcdf', \n",
" chunks={}, # tells xarray to use Dask arrays\n",
" parallel=True)"
]
},
{
"cell_type": "markdown",
"id": "942fe690",
"metadata": {},
"source": [
"Let's look at the xarray HTML representation of the dataset. The **Dimensions** section shows 1,416 hourly observations of a 361 x 576 grid, and there are 49 data variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75f2231c",
"metadata": {},
"outputs": [],
"source": [
"ds_nc"
]
},
{
"cell_type": "markdown",
"id": "ce95780a",
"metadata": {},
"source": [
"For this example let's focus on a single data variable: T2M."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2613b972",
"metadata": {},
"outputs": [],
"source": [
"var_name = \"T2M\"\n",
"ds_nc = ds_nc[[var_name]]"
]
},
{
"cell_type": "markdown",
"id": "02c644f3",
"metadata": {},
"source": [
"Run the cell below and click on the disk icon at the end of the T2M row."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7fbbbae",
"metadata": {},
"outputs": [],
"source": [
"ds_nc"
]
},
{
"cell_type": "markdown",
"id": "ab7fdb05",
"metadata": {},
"source": [
"The xarray representation for the T2M data variable shows it requires 2.2 GiB of memory, and is divided into 59 chunks of size 38 MiB. Each chunk contains (24, 361, 576) observations, or one day's worth of hourly data for the whole latitude/longitude grid."
]
},
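{
"cell_type": "markdown",
"id": "3c4d5e6f",
"metadata": {},
"source": [
"If you prefer to read these numbers programmatically rather than from the HTML representation, the underlying Dask array exposes them directly. The cell below is a minimal sketch using standard Dask array attributes (```nbytes```, ```npartitions```, ```chunksize```)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d5e6f7a",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"# Inspect the T2M Dask array: total size, number of chunks, and per-chunk footprint\n",
"t2m = ds_nc[var_name].data\n",
"print(f\"total: {t2m.nbytes / 2**30:.1f} GiB in {t2m.npartitions} chunks\")\n",
"print(f\"chunk shape: {t2m.chunksize}, \"\n",
"      f\"~{math.prod(t2m.chunksize) * t2m.dtype.itemsize / 2**20:.0f} MiB per chunk\")"
]
},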
{
"cell_type": "markdown",
"id": "d5254998",
"metadata": {},
"source": [
"### Query the NetCDF dataset [TOC](#top)"
]
},
{
"cell_type": "markdown",
"id": "d2ddaeb9",
"metadata": {},
"source": [
"The cell below queries for a time series of T2M data for a give latitude and longitude, and converts the temperatures to Fahrenheit. When the cell is run the execution steps are added to the Dask task graph but not actually computed - the cell returns immediately."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bad56859",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_nc_query = ds_nc.sel(lat=40.7, lon=74, method='nearest')\n",
"ds_nc_query = (ds_nc_query - 273.15) * (9/5) + 32"
]
},
{
"cell_type": "markdown",
"id": "3a16531d",
"metadata": {},
"source": [
"We can access the underlying Dask.Array for ```T2M``` via the ```data``` attribute and visualize the task graph.\n",
"\n",
"When you run the cell below and double click on the image, we see that Dask will parallelize the operations on each file needed to pull back the time series data for the given lat/long. Double click on the image again to zoom back out."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11a6139d",
"metadata": {},
"outputs": [],
"source": [
"ds_nc_query[var_name].data.visualize()"
]
},
{
"cell_type": "markdown",
"id": "e92e0c99",
"metadata": {},
"source": [
"Plotting the timeseries forces xarray to actually pull the data into memory, do the Fahrenheit conversion, and visualize, which takes 1-2 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b3ac649",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_nc_query.hvplot()"
]
},
{
"cell_type": "markdown",
"id": "7e3e4e18",
"metadata": {},
"source": [
"### Define Zarr chunking strategy [TOC](#top)"
]
},
{
"cell_type": "markdown",
"id": "ede12c87",
"metadata": {},
"source": [
"While rechunking is not required for converting from NetCDF to Zarr, rechunking to better fit data access patterns can be a significant component of the benefit of moving to Zarr. \n",
"\n",
"There is no one-size-fits all optimal chunking strategy. When selecting Zarr chunk sizes, consider:\n",
"- common read and write acess patterns (for example: time series queries)\n",
"- keeping the number of chunks/files created, and their size, to an appropriate amount for your storage characteristics\n",
"- making chunk sizes fit roughly into memory\n",
"\n",
"Two points to keep in mind:\n",
"- for fast reads along a given dimension, you want to maximize the chunk size along that dimension (to minimize the number of chunks that need to be opened and read).\n",
"- when writing new data to a Zarr store, if the write touches any part of a chunk then the whole chunk will need to be rewritten.\n",
"\n",
"If time series queries are a common access pattern but data will also need to be appended along the time dimension, you will need to achieve the right balance between large chunksizes (along the time dimension) which speed up reads, and small chunksizes which will make writes more efficient.\n",
"\n",
"#### rechunker\n",
"\n",
"The Python ```rechunker``` library allows you to convert data chunks of one size, in one file format, to different-sized chunks in another format. \n",
"\n",
"Because ```rechunker``` uses an intermediate storage location when writing from the source to the target, it is designed to handle rechunking data sets too large to fit into memory. \n",
"\n",
"At this time, ```rechunker``` does not have an append function, which means if you want to add new data to an existing data set using ```rechunker``` you need to rechunk the entire new dataset, or use ```xarray.to_zarr()```."
]
},
{
"cell_type": "markdown",
"id": "7024b181",
"metadata": {},
"source": [
"First, let's change the chunksizes used in the original NetCDF files to make time series queries more efficient.\n",
"\n",
"We'll make chunks longer in the time dimension to decrease the number of chunks that will be needed to be opened and read for a time series query. At the same time, let's make chunks smaller in the lat/lon dimensions to keep the overall size of each chunk (or object written on disk) and number of files at an appropriate level. \n",
"\n",
"We'll also use a chunk size smaller than the entire length of the time dimension (so that we will have more than one time chunk, and won't have to rewrite all chunks if we append along the time dimension later."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba9aad10",
"metadata": {},
"outputs": [],
"source": [
"dims = ds_nc.dims.keys()\n",
"new_chunksizes = (1080, 90, 90)\n",
"new_chunksizes_dict = dict(zip(dims, new_chunksizes))\n",
"new_chunksizes_dict"
]
},
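{
"cell_type": "markdown",
"id": "5e6f7a8b",
"metadata": {},
"source": [
"As a rough sanity check on this strategy, the cell below estimates the per-chunk memory footprint and the number of chunks it produces for the T2M array. This is a sketch that simply multiplies the chunk shape by the item size of the dtype reported above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f7a8b9c",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"shape = ds_nc[var_name].shape              # (time, lat, lon)\n",
"itemsize = ds_nc[var_name].dtype.itemsize  # bytes per value\n",
"\n",
"# Estimated size of one chunk and the number of chunks the new strategy creates\n",
"chunk_bytes = math.prod(new_chunksizes) * itemsize\n",
"n_chunks = math.prod(math.ceil(s / c) for s, c in zip(shape, new_chunksizes))\n",
"\n",
"print(f\"~{chunk_bytes / 2**20:.0f} MiB per chunk, {n_chunks} chunks for {var_name}\")"
]
},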
{
"cell_type": "markdown",
"id": "4a015d9a",
"metadata": {},
"source": [
"### Set up Zarr store [TOC](#top)"
]
},
{
"cell_type": "markdown",
"id": "e733206c",
"metadata": {},
"source": [
"At this point, we define the location of the new Zarr store on Amazon S3. Rechunker needs a location defined for both the new Zarr store as well as a temporary, intermediate location for use when copying chunks from one set of files to the other."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c230e366",
"metadata": {},
"outputs": [],
"source": [
"zarr_store_url = f's3://{S3_BUCKET_ZARR}/converted/merra2_utc.zarr'\n",
"zarr_temp_url = f's3://{S3_BUCKET_ZARR}/converted/merra2_utc-tmp.zarr'"
]
},
{
"cell_type": "markdown",
"id": "c841627e",
"metadata": {},
"source": [
"The URLs for the Zarr store need to be wrapped as filesystem objects (key-value pair)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b24424eb",
"metadata": {},
"outputs": [],
"source": [
"s3_priv = s3fs.S3FileSystem(anon=False, default_fill_cache=False)\n",
"\n",
"zarr_store = s3fs.S3Map(root=zarr_store_url, s3=s3_priv, check=False)\n",
"zarr_temp = s3fs.S3Map(root=zarr_temp_url, s3=s3_priv, check=False)"
]
},
{
"cell_type": "markdown",
"id": "8483ae8e",
"metadata": {},
"source": [
"### Batch convert NetCDF to Zarr [TOC](#top)"
]
},
{
"cell_type": "markdown",
"id": "8c3310f9",
"metadata": {},
"source": [
"Now call the rechunk function, passing it the NetCDF dataset and desired chunk sizes for the T2M variable, and the location for the final and temporary Zarr stores on S3. This process should take roughly 5 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f3523b7e",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_zarr = rechunk(\n",
" ds_nc, \n",
" target_chunks={var_name: new_chunksizes_dict, 'time': None,'lat': None,'lon': None},\n",
" max_mem='15GB',\n",
" target_store = zarr_store, \n",
" temp_store = zarr_temp\n",
").execute()"
]
},
{
"cell_type": "markdown",
"id": "9a15165b",
"metadata": {},
"source": [
"The cell below shows the directory structure of the Zarr store that is created. The root of a Zarr store is a Zarr group. Each NetCDF dimension (lat, lon, time) and data variable (T2M) is stored in its own subfolder, as a Zarr array. \n",
"\n",
"All Zarr metadata is stored as plain text JSON. The .zattr file contains metadata supplied by the user, and .zgroup holds metadata for the group. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36b30941",
"metadata": {},
"outputs": [],
"source": [
"!aws s3 ls {zarr_store_url}/"
]
},
{
"cell_type": "markdown",
"id": "72ff0feb",
"metadata": {},
"source": [
"Inside the T2M subfolder are two more metadata files -- .zarray and .zattr -- along with the and binary chunks (or files) that hold the actual data. Chunk names have three components, one for each dimension. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "30b1c123",
"metadata": {},
"outputs": [],
"source": [
"!aws s3 ls {zarr_store_url}/T2M/ --human-readable --summarize"
]
},
{
"cell_type": "markdown",
"id": "91422d1e",
"metadata": {},
"source": [
"The T2M/.zarray file holds the metadata for the T2M Dask array, which includes data type, compression type, chunking strategy, and original shape."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1c7c8847",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"with s3_priv.open(f'{zarr_store_url}/T2M/.zarray', 'rb') as f:\n",
" print(json.dumps(json.loads(f.read()), indent=2))"
]
},
{
"cell_type": "markdown",
"id": "47bc5502",
"metadata": {},
"source": [
"Zarr allows you to easily consolidate metadata for the entire Zarr store into a single .zmetadata file, saved at the root of the Zarr directory. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a8a6af62",
"metadata": {},
"outputs": [],
"source": [
"zarr.consolidate_metadata(zarr_store_url)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5873b4c",
"metadata": {},
"outputs": [],
"source": [
"!aws s3 ls {zarr_store_url}/"
]
},
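{
"cell_type": "markdown",
"id": "7a8b9c0d",
"metadata": {},
"source": [
"To confirm what was consolidated, you can read the new .zmetadata file directly. The cell below is a minimal sketch that assumes the Zarr v2 consolidated-metadata layout (a JSON document with a top-level ```metadata``` key)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b9c0d1e",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"# Peek inside the consolidated metadata written at the root of the store\n",
"with s3_priv.open(f'{zarr_store_url}/.zmetadata', 'rb') as f:\n",
"    consolidated = json.loads(f.read())\n",
"\n",
"# One entry per metadata file that was consolidated (.zgroup, .zattrs, T2M/.zarray, ...)\n",
"print(list(consolidated['metadata'].keys()))"
]
},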
{
"cell_type": "markdown",
"id": "5de229b2",
"metadata": {},
"source": [
"### Query the Zarr store [TOC](#top)"
]
},
{
"cell_type": "markdown",
"id": "8a221d90",
"metadata": {},
"source": [
"We can query the new Zarr store using xarray."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5206b2f",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_zarr = xr.open_zarr(store=zarr_store_url, chunks={}, consolidated=True)"
]
},
{
"cell_type": "markdown",
"id": "c39a9af2",
"metadata": {},
"source": [
"It contains the single T2M variable we converted, chunked as specified."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f7e4d60",
"metadata": {},
"outputs": [],
"source": [
"ds_zarr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1774afd4",
"metadata": {},
"outputs": [],
"source": [
"ds_zarr[var_name].chunksizes"
]
},
{
"cell_type": "markdown",
"id": "7a854989",
"metadata": {},
"source": [
"Querying from Zarr for the time series data and plotting it now takes seconds instead of minutes (from NetCDF)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0535221a",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_zarr_query = ds_zarr.sel(lat=40.7, lon=74, method='nearest')\n",
"ds_zarr_query = (9/5) * (ds_zarr_query - 273.15) + 32"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aabb59eb",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_zarr_query.hvplot()"
]
},
{
"cell_type": "markdown",
"id": "393fd66a",
"metadata": {},
"source": [
"## Append NetCDF Files to Zarr using xarray [TOC](#top)\n",
"\n",
"***"
]
},
{
"cell_type": "markdown",
"id": "be073508",
"metadata": {},
"source": [
"The rechunker library does not have a way to append data to an existing Zarr store. For that, ```xarray.to_zarr``` is an option for appending data along a dimension, such as time. Let's append one week of additional data to the Zarr store, one day at a time."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7635d42",
"metadata": {},
"outputs": [],
"source": [
"start = '20210301'\n",
"end = '20210307'\n",
"\n",
"nc_files = ['s3://power-datastore/v9/hourly/{}/{}/power_901_hourly_{}_merra2_utc.nc'.format(\n",
" d.strftime('%Y'), \n",
" d.strftime('%m'),\n",
" d.strftime('%Y%m%d')) for d in make_day_range(start, end)]\n",
"nc_files"
]
},
{
"cell_type": "markdown",
"id": "0d52f837",
"metadata": {},
"source": [
"Appending 7 days of data, one day at a time, takes roughly 10 minutes. When appending to a Zarr store, if data needs to be added to a chunk (such as additional observations along the time dimension), the entire chunk will need to be rewritten."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a27d33f",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"for file in nc_files: \n",
" print(f'Appending {file}')\n",
" \n",
" # open a single file as a dataset\n",
" ds_nc_append = xr.open_dataset(s3.open(file), engine='h5netcdf', chunks={}) \n",
" # pull out T2M\n",
" ds_nc_append = ds_nc_append[[var_name]]\n",
" # rechunk in memory\n",
" ds_nc_append = ds_nc_append.chunk(chunks = new_chunksizes_dict)\n",
" # append\n",
" ds_nc_append.to_zarr(zarr_store_url, mode=\"a\", append_dim='time', consolidated=True)"
]
},
{
"cell_type": "markdown",
"id": "1f7cf4ca",
"metadata": {},
"source": [
"We can now reopen the file to see the additional observations along the time dimension."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1c36298",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_zarr = xr.open_zarr(store=zarr_store_url, chunks={}, consolidated=True)"
]
},
{
"cell_type": "markdown",
"id": "6e180761",
"metadata": {},
"source": [
"The Zarr store now contain the additional days of data. All chunks that include data for the new time observations have been rewritten on S3."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e0513a0",
"metadata": {},
"outputs": [],
"source": [
"ds_zarr"
]
},
{
"cell_type": "markdown",
"id": "f4904e4a",
"metadata": {},
"source": [
"The chunksizes show that the number of observations in the second (last) chunk in the time dimension is now larger, compared to before the data was appended."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4fe4d41f",
"metadata": {},
"outputs": [],
"source": [
"ds_zarr[var_name].chunksizes"
]
},
{
"cell_type": "markdown",
"id": "24c2dabb",
"metadata": {},
"source": [
"Querying from Zarr for the time series data and plotting it takes seconds instead of minutes (from NetCDF)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bfc9148e",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"ds_zarr_query = ds_zarr.sel(lat=40.7, lon=74, method='nearest')\n",
"ds_zarr_query = (9/5) * (ds_zarr_query - 273.15) + 32"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4cad4461",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"ds_zarr_query.hvplot()"
]
},
{
"cell_type": "markdown",
"id": "ca948f2b",
"metadata": {},
"source": [
"## Scale in Dask cluster [TOC](#top)\n",
"\n",
"***"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92671426",
"metadata": {},
"outputs": [],
"source": [
"NUM_DASK_WORKERS = 1\n",
"\n",
"DASK_CLUSTER_NAME = \"Dask-Cluster\"\n",
"DASK_CLUSTER_SERVICE = \"Dask-Worker\"\n",
"\n",
"!aws ecs update-service --service {DASK_CLUSTER_SERVICE} --desired-count {NUM_DASK_WORKERS} --cluster {DASK_CLUSTER_NAME}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3195ab70",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_zarr_py310_nb",
"language": "python",
"name": "conda_zarr_py310_nb"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}