Here we come to an interesting point: the selection of the data and the definition of the grid.
As variable we select ozone by providing its name according to the CF convention.
As time sampling we select an interval of one week with daily sampling.
We want to calculate the mean for each sampling point, i.e. this will produce the daily mean for each time series.
Last but not least, we create a Cartesian grid by providing the desired resolutions.
This will result in a warning, as the latitudinal resolution does not yield an integer number of bins (180/1.9 = 94.74). The grid therefore slightly increases the resolution so that all grid points have a constant spacing.
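For illustration, the setup described above might look like the following sketch. The constructor names (`RegularGrid`, `TimeSample`) and their parameters follow the toargridding examples but should be treated as assumptions; the dates are example values.
%% Cell type:code id: tags:
``` python
from datetime import datetime

# Assumed import paths, following the toargridding examples.
from toargridding.grids import RegularGrid
from toargridding.metadata import TimeSample

# Variable name according to the CF convention.
variable = ["mole_fraction_of_ozone_in_air"]

# One week of data with daily sampling (example dates).
time_sampling = TimeSample(
    start=datetime(2016, 3, 1),
    end=datetime(2016, 3, 8),
    sampling="daily",
)

# The mean over each sampling point, i.e. a daily mean per time series.
statistics = ["mean"]

# Cartesian grid: 180 / 1.9 = 94.74 is not an integer number of bins,
# so the grid rounds up to 95 bins and uses the slightly finer
# resolution 180 / 95 = 1.8947 degrees, triggering the warning above.
grid = RegularGrid(lat_resolution=1.9, lon_resolution=2.5)
```
%% Cell type:markdown id: tags: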
Now we want to request the data from the server and create the binned dataset.
To this end, we call the function `get_gridded_toar_data` with everything we have prepared up to now.
The request is submitted to the analysis service, which processes it. On our side, we check every 5 minutes whether the processing has finished, and stop polling after 30 minutes.
If the timeout is reached, re-running this cell continues the look-up once the data are available.
The obtained data are stored in the cache directory. Before submitting a request, toargridding checks its cache to see whether the data have already been downloaded.
%% Cell type:code id: tags:
``` python
print(f"\nProcessing request:")
print(f"--------------------")
datasets,metadatas=get_gridded_toar_data(
analysis_service=analysis_service,
grid=grid,
time=time_sampling,
variables=variable,
stats=statistics,
contributors_path=result_basepath
)
```
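%% Cell type:markdown id: tags:
For reference, the waiting behaviour described above corresponds roughly to the following sketch. This is only an illustration of the polling logic, not the actual toargridding implementation:
%% Cell type:code id: tags:
``` python
import time

def wait_for_completion(is_finished, interval_s=300, timeout_s=1800):
    """Poll is_finished() every 5 minutes (300 s) and give up after
    30 minutes (1800 s), mirroring the behaviour described above."""
    waited = 0
    while waited < timeout_s:
        if is_finished():
            return True
        time.sleep(interval_s)
        waited += interval_s
    return False
```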
%% Cell type:markdown id: tags:
### Saving the results
Last but not least, we want to save our dataset as a netCDF file.
This part is done offline. Please note that the file name for the gridded data also contains the date of creation.
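A minimal sketch of this step, assuming the returned `datasets` is a sequence of xarray `Dataset` objects (so `to_netcdf` is available); the output directory and the file-name pattern with the creation date are examples, not the exact toargridding convention:
%% Cell type:code id: tags:
``` python
from datetime import date
from pathlib import Path

output_dir = Path("results")  # hypothetical output directory
output_dir.mkdir(parents=True, exist_ok=True)

for i, dataset in enumerate(datasets):
    # Example file name including the creation date, as noted above.
    filename = output_dir / f"gridded_{i}_{date.today().isoformat()}.nc"
    dataset.to_netcdf(filename)
```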