@@ -132,6 +132,10 @@ The aim of this first example is the creation of a gridded dataset and the visua
For the visualization, `cartopy` is required, which has dependencies that might not be installed by `pip`. If you experience any issues, do not hesitate to continue with the next examples.
If you are still curious about the results, we have uploaded the resulting map as the title image of this README.
Caveat: For a specific version and case, we observed that the notebook runs into an exception from `matplotlib.pyplot.pcolormesh`.
Please be aware that the error message complains about the usage of 'flat' instead of 'nearest' shading. Here, the value changes at some point after calling the plot function. We have not investigated this further.
There is also an export called `examples/00_download_and_vis_export.py`, which runs without issues.
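If you run into this, the error can usually be avoided by passing `shading="nearest"` explicitly when the coordinate arrays have the same shape as the data. A minimal sketch with random placeholder data, not the notebook's dataset:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, only needed for non-interactive runs
import matplotlib.pyplot as plt
import numpy as np

# cell centres of a 2 deg x 2.5 deg grid, same shapes as the data below
lat = np.arange(-89.0, 90.0, 2.0)      # 90 values
lon = np.arange(-178.75, 180.0, 2.5)   # 144 values
values = np.random.rand(lat.size, lon.size)

fig, ax = plt.subplots()
# "nearest" treats lat/lon as cell centres; "flat" would require one more
# coordinate than data points along each axis and fails for these shapes
mesh = ax.pcolormesh(lon, lat, values, shading="nearest")
plt.close(fig)
```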
# In the next step we want to download the data and store them to disk.
#
# To obtain the contributors for this dataset, we need to create a dedicated file. This can be uploaded to the TOAR database to obtain a preformatted list of contributors. The required recipe can be found in the global metadata of the netCDF file.
#
# The request to the database can take several minutes; the duration also depends on the overall usage of the services. The `get_data` function checks every 5 minutes whether the data are ready for download. After 30 minutes, this cell stops the execution. Simply restart the cell to continue checking for the results.
# %%
# this cell can run for longer than 30 minutes
data = analysis_service.get_data(metadata)
# create contributors endpoint and write result to metadata
print("Gridded data have been written to", out_file_name)
# %% [markdown]
# ### Visual inspection
# %% [markdown]
# We are working here with raw data and also want to visualize the station positions. Therefore, we want to distinguish stations with valid data from those without.
# %%
# calculation of coordinates for plotting,
# especially the separation of coordinates with and without results
# In the next step we prepare a function for plotting the gridded data on a world map. The *discrete* flag influences the creation of the color bar; the *plot_stations* flag allows including the station positions in the map.
plt.title(f"global ozone at {data.time.values}{data.time.units}")
# %% [markdown]
# Now we do the actual plotting. We select a single time from the dataset and obtain two maps: 1) the mean ozone concentration per grid point and 2) the number of stations contributing to each grid point.
We need to select a temporal aggregation; here, we select one week of data with daily sampling.
With this sampling we define the metadata for our request: as variable we select ozone, and as statistical aggregation the mean.
We also select an offline post-processing of the data: we average the means calculated for all time series at a station to create a single value.
Last but not least, we select the data quality flag 'AllOK', requesting that all data points pass the quality tests of the provider and the automatic tests of the TOAR data infrastructure.
Other options exclude the tests performed by TOAR.
The last step is the definition of the grid. We select a resolution of 2° in latitude and 2.5° in longitude.
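The chosen 2° × 2.5° grid can be sketched with plain numpy; the names below are illustrative only and not the actual toargridding API:

```python
import numpy as np

lat_res, lon_res = 2.0, 2.5  # resolution selected above

# cell centres of the global grid
lat_centres = np.arange(-90.0 + lat_res / 2, 90.0, lat_res)    # 90 cells
lon_centres = np.arange(-180.0 + lon_res / 2, 180.0, lon_res)  # 144 cells

def cell_index(lat, lon):
    """Illustrative helper: map a station position to its grid cell indices."""
    return int((lat + 90.0) // lat_res), int((lon + 180.0) // lon_res)
```

A station at 50.9° N, 6.4° E, for example, falls into cell `(70, 74)` of this 90 × 144 grid.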
In the next step we want to download the data and store them to disk.
To obtain the contributors for this dataset, we need to create a dedicated file. This can be uploaded to the TOAR database to obtain a preformatted list of contributors. The required recipe can be found in the global metadata of the netCDF file.
The request to the database can take several minutes; the duration also depends on the overall usage of the services. The `get_data` function checks every 5 minutes whether the data are ready for download. After 30 minutes, this cell stops the execution. Simply restart the cell to continue checking for the results.
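The retry behaviour described above amounts to a simple polling loop. This is only an illustrative sketch, not the actual `get_data` implementation:

```python
import time

def wait_for_data(check_ready, interval=300, timeout=1800):
    """Poll check_ready() every `interval` seconds (default 5 minutes) until it
    returns a result; give up after `timeout` seconds (default 30 minutes), so
    the caller has to restart to continue polling. Illustrative sketch only."""
    waited = 0.0
    while waited < timeout:
        result = check_ready()
        if result is not None:
            return result
        time.sleep(interval)
        waited += interval
    raise TimeoutError("data not ready yet; restart to continue polling")
```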
%% Cell type:code id: tags:
``` python
# this cell can run for longer than 30 minutes
data = analysis_service.get_data(metadata)
# create contributors endpoint and write result to metadata
```
%% Cell type:markdown id: tags:
We are working here with raw data and also want to visualize the station positions. Therefore, we want to distinguish stations with valid data from those without.
%% Cell type:code id: tags:
``` python
# calculation of coordinates for plotting,
# especially the separation of coordinates with and without results
In the next step we prepare a function for plotting the gridded data on a world map. The *discrete* flag influences the creation of the color bar; the *plot_stations* flag allows including the station positions in the map.
plt.title(f"global ozone [{unit}] at {data.time.values}{data.time.units}")
```
%% Cell type:markdown id: tags:
Now we do the actual plotting. We select a single time from the dataset and obtain two maps: 1) the mean ozone concentration per grid point and 2) the number of stations contributing to each grid point.
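The two fields can be accumulated from station values with plain numpy; the station positions and values below are made-up placeholders, and the notebook instead reads both fields from the gridded netCDF file:

```python
import numpy as np

lat_res, lon_res = 2.0, 2.5
n_lat, n_lon = int(180 / lat_res), int(360 / lon_res)  # 90 x 144 cells

# made-up stations: two in the same cell near Cologne, one near Cape Town
station_lat = np.array([50.9, 51.2, -33.9])
station_lon = np.array([6.4, 6.5, 18.4])
station_val = np.array([30.0, 34.0, 25.0])  # one ozone mean per station

i = ((station_lat + 90.0) // lat_res).astype(int)
j = ((station_lon + 180.0) // lon_res).astype(int)

total = np.zeros((n_lat, n_lon))
count = np.zeros((n_lat, n_lon))
np.add.at(total, (i, j), station_val)  # sum of station means per cell
np.add.at(count, (i, j), 1)            # number of contributing stations

# map 1: mean concentration per grid point; map 2: the count array itself
mean = np.where(count > 0, total / np.maximum(count, 1), np.nan)
```

`np.add.at` is used so that repeated indices (several stations in one cell) accumulate instead of overwriting each other.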