TOAR Gridding Tool

About

TOARgridding projects data from the TOAR database (https://toar-data.fz-juelich.de/) onto a grid. The request to the database also allows a statistical analysis of the requested variable. The mean and standard deviation of all stations within a cell are computed.

The tool handles the request to the database over the REST API and the subsequent processing. The results of the gridding are provided as xarray datasets for subsequent processing and visualization by the user.

This project is in beta and provides the intended basic functionality. The documentation and this README are a work in progress.

Requirements

This project requires Python 3.11 or higher (TBD; see pyproject.toml).

Installation

Move to the folder you want to download this project to. The source code can be obtained from the repository either as a ZIP file or via Git:

1) Download with Git

Clone the project from its git repository:

git clone https://gitlab.jsc.fz-juelich.de/esde/toar-public/toargridding.git 

With Git, we need to check out the testing branch (testing), as the main branch is not yet populated. Therefore, change to the project directory first:

cd toargridding
git checkout testing

2) Installing Dependencies and Setting up a Virtual Environment

Set up a virtual environment for your code to avoid conflicts with other projects. You can use your preferred tool or run:

python -m venv .venv
source .venv/bin/activate

The latter line activates the virtual environment for further usage. To deactivate your environment, call

deactivate

To install all required dependencies, call

pip install -e .

To execute the examples, which are provided as Jupyter notebooks, install the package with the optional dependencies by calling

pip install -e ".[interactive]"

To run the example notebooks:

# to select a notebook via the file browser in your web browser:
jupyter notebook
# or to directly open a notebook:
jupyter notebook [/path/to/notebookname.ipynb]

and to run a script use

python [/path/to/scriptname.py]

How does this tool work?

This tool has two main parts. The first handles requests to the TOAR database via its analysis service. This includes the statistical analysis of the requested timeseries. The second part is the gridding, which is performed offline.

Request to TOAR Database with Statistical Analysis

Requests are sent to the analysis service of the TOAR database. This allows selecting stations based on their metadata and performing a statistical analysis. Whenever a request is submitted, it will be processed, and the returned status endpoint will point to the results as soon as the analysis is finished. A request can take several hours, depending on the time range and the number of requested stations.

This module stores the requests and their status endpoints in a local cache file. The endpoints are used to check whether the processing by the analysis service is finished. Requests are deleted from the cache after 14 days; you can adjust this by calling Cache.setMaxDaysInCache([max age in days]). At the moment, there is no way to check the status of a running job before it is finished (date: 2024-05-14). Crashed requests seem to respond with an internal server error (HTTP status code 500); those requests are therefore automatically deleted from the cache and resubmitted.
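
A hedged sketch of adjusting this retention period (the import path of the Cache class is an assumption; only the setMaxDaysInCache call is documented above):

# assumed import path; see the package sources for the actual location
from toargridding.toar_rest_client import Cache

# keep cached requests for 28 days instead of the default 14
Cache.setMaxDaysInCache(28)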

Once a request is finished, the status endpoint does not remain valid forever, but the data are stored for a longer period in a cache by the analysis service. When the same request is submitted again, this cache is checked first to see whether the results have already been calculated. Retrieving results from the cache can take some time, similar to the analysis itself.

There is no check whether a request is already running. Submitting a request multiple times therefore leads to additional load on the system and slows down all requests.

The TOAR database has only a limited number of workers for performing statistical analyses. It is therefore advised to run one request after another, especially for large requests covering a large number of stations and/or a long time range.

Gridding

The gridding uses a user-defined grid to combine all stations within a cell. Per cell, the mean, standard deviation, and number of stations are reported in the resulting xarray dataset.
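
For illustration, a minimal, self-contained sketch of the binning idea (not this package's actual implementation; the station coordinates and values below are made up):

import numpy as np
import xarray as xr

# made-up station data: coordinates and one statistic per station
lon = np.array([6.1, 6.4, 13.2, 13.4])
lat = np.array([50.7, 50.9, 52.4, 52.5])
value = np.array([30.0, 34.0, 28.0, 26.0])

# regular 1 degree x 1 degree grid
lon_edges = np.arange(-180.0, 181.0, 1.0)
lat_edges = np.arange(-90.0, 91.0, 1.0)
ilon = np.digitize(lon, lon_edges) - 1
ilat = np.digitize(lat, lat_edges) - 1

shape = (lat_edges.size - 1, lon_edges.size - 1)
mean = np.full(shape, np.nan)
std = np.full(shape, np.nan)
n = np.zeros(shape, dtype=int)
for i, j in set(zip(ilat, ilon)):
    inside = (ilat == i) & (ilon == j)  # all stations within this cell
    mean[i, j] = value[inside].mean()
    std[i, j] = value[inside].std()
    n[i, j] = inside.sum()

def centers(edges):
    return (edges[:-1] + edges[1:]) / 2

ds = xr.Dataset(
    {
        "mean": (("lat", "lon"), mean),
        "std": (("lat", "lon"), std),
        "n": (("lat", "lon"), n),
    },
    coords={"lat": centers(lat_edges), "lon": centers(lon_edges)},
)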

Logging

Output created by the different modules and classes of this package uses Python logging. There is also an auxiliary class that reuses the same logger setup across the examples and scripts. It can be used to configure logging to the shell as well as to the system log of a Linux system.
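
A minimal sketch using only the standard library (the auxiliary class in this package may be set up differently):

import logging
import logging.handlers

logger = logging.getLogger("toargridding")
logger.setLevel(logging.INFO)

# log to the shell
shell = logging.StreamHandler()
shell.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))
logger.addHandler(shell)

# log to the system log of a Linux system (requires /dev/log)
logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

logger.info("logger setup finished")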

Examples

At the moment, five examples are provided as Jupyter notebooks (https://jupyter.org/). Jupyter uses your web browser to display results and code blocks; the examples are written in Python. As an alternative, Visual Studio Code directly supports the execution of Jupyter notebooks. For VS Code, please ensure that the kernel of the virtual environment is selected.

Running the provided examples with the previously created Python environment can be done by

 jupyter notebook

as pointed out previously.

High-level function

 jupyter notebook example/produce_data_manyStations.ipynb
# (please see the next notebook for a faster example)

This notebook provides an example of how to download data, apply the gridding, and save the results as netCDF files. The AnalysisServiceDownload class caches already obtained data on the local machine. This allows trying different grids without repeating the request to the TOAR database, the statistical analysis, and the subsequent download.

As an example, we calculate dma8epa_strict on a daily basis for the years 2000 to 2018 for all timeseries in the TOAR database. The first attempt for this example covered the full range of 19 years in a single request; it turned out that a year-by-year extraction is more reliable. The subsequent requests also function as a progress report and allow working with the data while further requests are being processed.

As the gridding is done offline, it will be executed for already downloaded files whenever the notebook is rerun. Please note that the file name of the gridded data also contains the date of creation.
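
A hedged sketch of such a naming scheme (the function and its stem argument are made up; only xarray's to_netcdf and the date-in-filename convention come from above):

from datetime import date
import xarray as xr

def save_gridded(ds: xr.Dataset, stem: str) -> str:
    """Save a gridded dataset; the file name includes the creation date."""
    filename = f"{stem}_{date.today().isoformat()}.nc"
    ds.to_netcdf(filename)
    return filename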

 jupyter notebook example/produce_data_withOptional.ipynb 

This example is based on the previous one but uses additional arguments to reduce the number of stations per request. As an example, two different classifications of the stations are used: first the "toar1_category" and second the "type_of_area". Details can be found in the documentation of the FastAPI REST interface or in the user guide.

Selecting only a limited number of stations leads to significantly faster results. On the downside, the used classifications are not available for all stations.

Retrieving data

 jupyter notebook example/get_sample_data_manual.ipynb 

Downloads data from the TOAR database with a manually created request. The extracted data are written to disk; no further processing or gridding is done. The result is a ZIP file containing two CSV files: the first contains the statistical analysis of the timeseries and the second the coordinates of the stations.
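
The downloaded archive could then be inspected like this (a sketch; the ZIP file name is hypothetical):

import zipfile
import pandas as pd

with zipfile.ZipFile("sample_data.zip") as archive:  # hypothetical file name
    for member in archive.namelist():
        with archive.open(member) as fh:
            # print each CSV's name and its (rows, columns) shape
            print(member, pd.read_csv(fh).shape)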

Retrieving data

 jupyter notebook example/get_sample_data.ipynb 

For comparison with the previous example, this one performs the same request using the interface of this project.

Retrieving data and visualization

 jupyter notebook example/quality_controll.ipynb

Notebook for downloading and visualizing data. The downloaded data are reused for subsequent executions of this notebook. The gridding is done on the downloaded data; gridded data are not saved to disk.

Benchmarks

Duration of Different Requests

 python tests/benchmark.py

This script requests datasets of different durations (days to months) from the TOAR database and saves them to disk. It reports the duration of the different requests. There is no gridding involved. Caution: this script can run for several hours.

Supported Grids

The first supported grid is a regular grid with longitude and latitude.

Supported Variables

This module supports all variables of the TOAR database (extraction: 2024-05-27). They can be identified by their "cf_standardname" or by their name as stored in the TOAR database. The second option exists because not all variables have a "cf_standardname". The full list of available variables with their names and "cf_standardname" can be accessed by querying the TOAR database, e.g. with https://toar-data.fz-juelich.de/api/v2/variables/?limit=None
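
Such a query could look like this in Python (a sketch; that the response is a list of records with "name" and "cf_standardname" fields is an assumption based on the field names above):

import requests

response = requests.get(
    "https://toar-data.fz-juelich.de/api/v2/variables/",
    params={"limit": "None"},
)
response.raise_for_status()
for variable in response.json():  # assumed: a list of variable records
    print(variable.get("name"), "->", variable.get("cf_standardname"))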

Supported Time Intervals

At the moment, only time differences larger than one day work; a request with end = start + 1 day leads to a crash.
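
Illustrated with plain datetimes (the dates are made up):

from datetime import datetime, timedelta

start = datetime(2010, 1, 1)
end_ok = start + timedelta(days=2)     # works: difference larger than one day
end_crash = start + timedelta(days=1)  # known to crash at the moment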

Setup functions:

This package comes with all required information. There is a function to fetch an update of the available variables from the TOAR database. This will override the original file:

 python toargridding/setupFunctions.py

Documentation of Source Code:

At the moment, Carsten Hinz is working on the documentation of the source code while getting familiar with this project. The aim is a brief overview of the functionalities and the arguments of individual functions. As he personally does not like repetitions, the documentation might not match other style guides. It will definitely be possible to extend the documentation :-)

from dataclasses import dataclass  # needed for the @dataclass decorator below


class example:
    """An example class

    A more detailed explanation of the purpose of this example class.
    """

    def __init__(self, varA: int, varB: str):
        """Constructor

        Attributes:
        -----------
        varA:
            brief details and more context
        varB:
            same here.
        """
        ...  # implementation

    def func1(self, att1, att2):
        """Brief

        details

        Attributes:
        -----------
        att1:
            brief/details
        att2:
            brief/details
        """
        ...  # implementation


@dataclass
class dataClass:
    """Brief description

    optional details

    Parameters
    ----------
    anInt:
        brief description
    anStr:
        brief description
    secStr:
        brief description (explanation of default value, if this seems necessary)
    """
    anInt: int
    anStr: str
    secStr: str = "Default value"
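
For completeness, the dataclass above could be used like this:

d = dataClass(anInt=1, anStr="hello")
print(d.secStr)  # prints the default value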

Tested platforms

This project has been tested on

  • Rocky Linux 9