"Our following request will take some time, so we edit the durations between two checks, if our data are ready for download and the maximum duration for checking.\n",
"We will check every 45min for 12h. "
"We will check every 15min for 12h. "
]
},
{
...
...
%% Cell type:markdown id: tags:
# Example with optional parameters
Toargridding requires a number of arguments for a dataset, including the time range, the variable, and the statistical analysis. The TOAR-DB has a large number of metadata fields that can be used to further refine such a request.
A Python dictionary can be provided to include these other fields. The analysis service returns an error message if a requested parameter does not exist (check for typos) or if the provided value is invalid.
In this example we want to obtain data from 2012.
The first block contains the imports and the setup of the logging.
The following request will take some time, so we adjust the interval between two checks whether our data are ready for download, as well as the maximum duration for checking. We will check every 15 min for up to 12 h.
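A minimal sketch of such a setup using only the standard library; the two duration names are illustrative and not part of the toargridding API:

```python
import logging
from datetime import timedelta

# Log the progress of the long-running request to the console.
logging.basicConfig(level=logging.INFO)

# Illustrative names (not toargridding API): poll the analysis service
# every 15 min and give up after at most 12 h.
time_between_checks = timedelta(minutes=15)
max_checking_duration = timedelta(hours=12)
```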
#### Preparation of requests with station metadata
We restrict our request to one year of daily mean ozone data. In addition, we would like to include only urban stations.
We use a container class (a `namedtuple`) to keep the configurations together, as sketched below.
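A sketch of such a container; the field names are assumptions derived from the description above, not necessarily those of the original notebook:

```python
from collections import namedtuple

# Bundle one request: time range, variable, statistic, and the optional
# metadata filters (field names are illustrative).
Config = namedtuple("Config", ["time", "variables", "stats", "moreOptions"])
```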
We also want to refine our station selection by using further metadata.
Therefore, we create the `station_metadata` dictionary. We can use the additional metadata stored in the TOAR-DB by providing a field name and our desired value. This also discards stations without a value for the given metadata field. Information on the different metadata fields and their values can be found in the [documentation](https://toar-data.fz-juelich.de/sphinx/TOAR_UG_Vol03_Database/build/latex/toardatabase--userguide.pdf), for example for *toar1_category* on page 18 and for *type_of_area* on page 20.
In this way we can filter by any additional metadata supported by the [statistics endpoint of the analysis service](https://toar-data.fz-juelich.de/api/v2/analysis/#statistics), namely station metadata and time series metadata.
In the end we have two requests that we want to submit.
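Putting this together, the two requests could look like the following sketch. The field names *toar1_category* and *type_of_area* are taken from the user guide cited above; the value strings, the time-range notation, and the statistic name are assumptions:

```python
# Station metadata filters; stations without a value for the given
# field are discarded (value strings are assumptions, check the guide).
station_metadata_toar1 = {"toar1_category": "urban"}
station_metadata_area = {"type_of_area": "urban"}

# Two requests for daily mean ozone in 2012, reusing the Config
# namedtuple sketched above; value formats are illustrative.
configs = {
    "urban_toar1": Config("2012-01-01/2012-12-31", "o3", "mean", station_metadata_toar1),
    "urban_area": Config("2012-01-01/2012-12-31", "o3", "mean", station_metadata_area),
}
```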
#### Execution of toargridding and saving of results
Now we want to request the data from the TOAR analysis service and create the gridded dataset.
Therefore, we call the function `get_gridded_toar_data` with everything we have prepared so far.
The requests will be submitted to the analysis service, which will process them. On our side, we check at regular intervals whether the processing is finished; after the maximum checking duration we stop. The setup for this can be found a few cells above.
If the checks time out, re-running this cell continues the look-up of whether the data are available.
The obtained data are stored in the results directory (`results_basepath`). Before submitting a request, toargridding checks its cache to see whether the data have already been downloaded.
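A sketch of the calls, assuming that `get_gridded_toar_data`, an `analysis_service` object, and a grid `my_grid` were set up in the first cell; the keyword names follow the description above and are not a verified signature:

```python
# Submit both requests; keyword names and the returned pair
# (datasets, metadatas) are assumptions based on the text above.
results = {}
for name, cfg in configs.items():
    datasets, metadatas = get_gridded_toar_data(
        analysis_service=analysis_service,  # assumed from the setup cell
        grid=my_grid,                       # assumed from the setup cell
        time=cfg.time,
        variables=cfg.variables,
        stats=cfg.stats,
        moreOptions=cfg.moreOptions,        # optional station metadata filters
    )
    results[name] = datasets
```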
Last but not least, we want to save our dataset as a netCDF file.
In the global metadata of this file we can find a recipe for obtaining a list of contributors from the contributors file created by `get_gridded_toar_data`. This function also creates the required file, which has the extension "*.contributors".
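A sketch of the saving step, assuming the gridded results are xarray `Dataset` objects (written with xarray's standard `to_netcdf` method) and that `results_basepath` is the directory configured earlier; the path and file names are illustrative:

```python
from pathlib import Path

results_basepath = Path("results")  # illustrative; use the directory from the setup cell

# Write one netCDF file per request and dataset; the companion
# "*.contributors" file is created by get_gridded_toar_data itself.
for name, datasets in results.items():
    for i, ds in enumerate(datasets):
        ds.to_netcdf(results_basepath / f"{name}_{i}.nc")
```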