diff --git a/.gitignore b/.gitignore
index 88ed83bd..e8b5c9d2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -86,3 +86,6 @@ testdata.json
 *.a
 *.mod
 *.out
+
+# custom
+etc/custom.cfg
diff --git a/.travis.yml b/.travis.yml
index 5db67e9c..b6181e7e 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,6 +1,6 @@
 language: python
 python:
-  - "2.7"
+  # - "2.7"
   - "3.6"
 os:
   - linux
diff --git a/README.rst b/README.rst
index d26b7c65..31e7665e 100644
--- a/README.rst
+++ b/README.rst
@@ -21,10 +21,11 @@ Flyingpigeon
 Flyingpigeon (the bird)
   *Flyingpigeon is a bird ...*
 
-A Web Processing Service for Climate Data Analysis.
+A Web Processing Service - Testbed for new process development.
 
 * Free software: Apache Software License 2.0
 * Documentation: https://flyingpigeon.readthedocs.io.
+* Birdhouse Overview: https://flyingpigeon.readthedocs.io/en/latest/
 
 Credits
 -------
diff --git a/docs/source/index.rst b/docs/source/index.rst
index a4b4db51..9c861df5 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -6,6 +6,7 @@
 
    installation
    configuration
+   user_guide
    dev_guide
    processes
    changes
diff --git a/docs/source/pics/Europe_merged_regions.png b/docs/source/pics/Europe_merged_regions.png
new file mode 100644
index 00000000..71d702b8
Binary files /dev/null and b/docs/source/pics/Europe_merged_regions.png differ
diff --git a/docs/source/pics/Europe_regions.png b/docs/source/pics/Europe_regions.png
new file mode 100644
index 00000000..ddc75842
Binary files /dev/null and b/docs/source/pics/Europe_regions.png differ
diff --git a/docs/source/pics/Europe_too_many_regions.png b/docs/source/pics/Europe_too_many_regions.png
new file mode 100644
index 00000000..4a24f177
Binary files /dev/null and b/docs/source/pics/Europe_too_many_regions.png differ
diff --git a/docs/source/pics/Norway_mapshaper_commandline.png b/docs/source/pics/Norway_mapshaper_commandline.png
new file mode 100644
index 00000000..1c0bbb0f
Binary files /dev/null and b/docs/source/pics/Norway_mapshaper_commandline.png differ
diff --git a/docs/source/pics/Norway_orig.png b/docs/source/pics/Norway_orig.png
new file mode 100644
index 00000000..d06c2245
Binary files /dev/null and b/docs/source/pics/Norway_orig.png differ
diff --git a/docs/source/pics/Norway_purple.png b/docs/source/pics/Norway_purple.png
new file mode 100644
index 00000000..ac7bc20e
Binary files /dev/null and b/docs/source/pics/Norway_purple.png differ
diff --git a/docs/source/preparation.rst b/docs/source/preparation.rst
new file mode 100644
index 00000000..35758bf1
--- /dev/null
+++ b/docs/source/preparation.rst
@@ -0,0 +1,138 @@
+Shapefile preparation
+=====================
+
+This text describes how to prepare, simplify and customize shapefiles from the `GADM database `_. We used GADM version 2.7.
+
+Start by downloading gadm26_levels.gdb, the ESRI file geodatabase for the `whole world that contains all six administration levels `_. The downloaded file is a directory that can be read with qgis 2.8. (Note: for Fedora users, you must have Fedora 22 in order to upgrade to qgis 2.8.)
+
+Open shapefile in qgis
+======================
+
+To open gadm26_levels.gdb in qgis, follow the steps below:
+
+Add Vector Layer ->
+
+    Source type = Directory
+
+    Source:
+
+        Type: OpenFileGDB
+
+After selecting the directory, select "Open" and a window will appear listing the six levels. Select the level you would like. We used "adm1", which corresponds to the region level (i.e. one level smaller than country level).
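+
+If you prefer to check the download without starting qgis, GDAL's ogrinfo
+tool can list the layers of the geodatabase. This is a convenience sketch;
+it assumes a GDAL build whose OpenFileGDB driver can read the .gdb
+directory:
+
+$ ogrinfo gadm26_levels.gdb
+
+Each listed layer corresponds to one of the six administration levels.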
+
+Filter the countries you wish to retain
+========================================
+
+You can use qgis to select the countries you wish to extract from the world shapefile. Use the "Advanced Filter (Expression)" option in the Attribute Table. For example, for a selection of EU countries, you can use this expression (a scripted alternative is sketched at the end of this section):
+
+
+ "ISO" LIKE '%AUT%' OR "ISO" LIKE '%BEL%' OR "ISO" LIKE '%BGR%' OR "ISO" LIKE '%CYP%' OR "ISO" LIKE '%CZE%' OR "ISO" LIKE '%DEU%' OR "ISO" LIKE '%DNK%' OR "ISO" LIKE '%ESP%' OR "ISO" LIKE '%EST%' OR "ISO" LIKE '%FIN%' OR "ISO" LIKE '%FRA%' OR "ISO" LIKE '%GBR%' OR "ISO" LIKE '%GRC%' OR "ISO" LIKE '%HUN%' OR "ISO" LIKE '%HRV%' OR "ISO" LIKE '%IRL%' OR "ISO" LIKE '%ITA%' OR "ISO" LIKE '%LVA%' OR "ISO" LIKE '%LTU%' OR "ISO" LIKE '%LUX%' OR "ISO" LIKE '%MLT%' OR "ISO" LIKE '%NLD%' OR "ISO" LIKE '%POL%' OR "ISO" LIKE '%PRT%' OR "ISO" LIKE '%ROU%' OR "ISO" LIKE '%SVK%' OR "ISO" LIKE '%SVN%' OR "ISO" LIKE '%SWE%' OR "ISO" LIKE '%NOR%' OR "ISO" LIKE '%CHE%' OR "ISO" LIKE '%ISL%' OR "ISO" LIKE '%MKD%' OR "ISO" LIKE '%MNE%' OR "ISO" LIKE '%SRB%' OR "ISO" LIKE '%MDA%' OR "ISO" LIKE '%UKR%' OR "ISO" LIKE '%BIH%' OR "ISO" LIKE '%ALB%' OR "ISO" LIKE '%BLR%' OR "ISO" LIKE '%XKO%'
+
+The Attribute Table will then be updated and you can choose all the rows. This selection will be displayed on the map in the main window of qgis:
+
+
+.. image:: ../pics/Europe_regions.png
+   :alt: alternate text
+   :align: center
+
+You can save this selection in the format of an ESRI Shapefile:
+
+Layer -> Save as -> Save only selected features
+
+
+Simplify using mapshaper (command line)
+=======================================
+
+The resulting map has a very high resolution, so it is often necessary to simplify the lines. There is an online tool called `mapshaper `_ that we found to be very effective. It can be used both on the command line and as a web GUI.
+
+On the command line, the default options are: Visvalingam Weighted Area, no Snap Vertices.
+Choose the simplification level and the input and output shapefiles. Here is an example for 1% simplification:
+
+$ mapshaper -i adm1-EU.shp -simplify 1% -o adm1-EU-mapshaped1.shp
+
+=> Repaired 98 intersections; unable to repair 1 intersection.
+
+This produced a simplified map, shown here (purple) superimposed on the original map (blue), zoomed in on the coastline of Norway:
+
+.. image:: ../pics/Norway_mapshaper_commandline.png
+   :alt: alternate text
+   :align: center
+
+
+Simplify using mapshaper (GUI)
+==============================
+
+You can test the different simplification options using the `mapshaper GUI `_ instead of the command line version. Namely:
+
+Visvalingam Weighted Area | Effective Area
+
+Snap Vertices ON | OFF
+
+Also, the GUI seems to be more successful at repairing all intersections.
+
+The figures below show the original (cyan), NoSnapVertices-WeightedArea (magenta), and NoSnapVertices-EffectiveArea (purple):
+
+.. image:: ../pics/Norway_orig.png
+   :alt: alternate text
+   :align: center
+
+.. image:: ../pics/Norway_cyan.png
+   :alt: alternate text
+   :align: center
+
+.. image:: ../pics/Norway_purple.png
+   :alt: alternate text
+   :align: center
+
+(The differences between Snap Vertices and No Snap Vertices were very small; this example alone does not show whether one is better than the other.)
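+
+The country selection can also be scripted instead of using the Attribute
+Table. As a sketch, GDAL's ogr2ogr (the same tool used for the geojson
+conversion further below) accepts an attribute filter; the layer name adm1
+and the shortened country list are illustrative placeholders:
+
+$ ogr2ogr -f "ESRI Shapefile" adm1-EU.shp gadm26_levels.gdb adm1 -where "ISO IN ('AUT','BEL','BGR')"
+
+The resulting adm1-EU.shp can then be simplified with mapshaper as shown
+above.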
+
+Customize shapefile
+===================
+
+The shapefile produced from the adm1 level of the ESRI file geodatabase as described above shows all regions of the selected countries, but when displayed on the screen, some regions were too small both visually and for the resolution of our models (~100 km):
+
+.. image:: ../pics/Europe_too_many_regions.png
+   :alt: alternate text
+   :align: center
+
+Another issue with using the ESRI file containing all six admin levels is that there is no unique identifier column such as "HASC_1". For example, for the region Homyel’ (with a ’ at the end) in Belarus, "HASC_1" = "BY.HO". Without this field, one would be forced to use "NAME_1" = "Homyel’" to identify this region, but the special character ’ may cause problems in the python script that reads the file.
+
+So, we need to merge the small country regions together while leaving the larger regions alone.
+
+Steps:
+
+1. We downloaded the GADM shapefile as a single layer (`here `_), and the EU countries were selected as before.
+
+   Note: "HASC_1" was NULL for some countries, e.g. FRA, ITA, GBR, BEL. We manually replaced the NULLs with a unique identifier following the convention in the file.
+
+2. We then decided which regions to merge together, and formed the following qgis filter expression:
+
+   "ISO" LIKE '%CYP%' OR "ISO" LIKE '%IRL%' OR "ISO" LIKE '%MDA%' OR "ISO" LIKE '%BGR%' OR "ISO" LIKE '%XKO%' OR "ISO" LIKE '%CHE%' OR "ISO" LIKE '%LIE%' OR "ISO" LIKE '%DNK%' OR "ISO" LIKE '%HRV%' OR "ISO" LIKE '%BIH%' OR "ISO" LIKE '%SRB%' OR "ISO" LIKE '%PRT%' OR "ISO" LIKE '%MNE%' OR "ISO" LIKE '%ALB%' OR "ISO" LIKE '%MKD%' OR "ISO" LIKE '%NLD%' OR "ISO" LIKE '%SVN%' OR "ISO" LIKE '%LUX%' OR "ISO" LIKE '%MLT%' OR "ISO" LIKE '%ROU%' OR "ISO" LIKE '%AUT%' OR "ISO" LIKE '%CZE%' OR "ISO" LIKE '%SVK%' OR "ISO" LIKE '%HUN%'
+
+3. In the main screen of qgis, these selected countries were regrouped with respect to the field "ISO" (i.e. country level):
+
+   Vector -> Geoprocessing tools -> Dissolve -> Dissolve field = ISO
+
+4. The other countries (whose regions are large enough to be resolved) were selected in the Attribute Table in the same way, but using ID_1 (corresponding to level adm1) as the identifier.
+
+5. Finally, the two shapefiles were fused together:
+
+   Vector -> Data Management Tools -> Merge shapefiles to one
+
+6. The resulting shapefile was simplified with the `mapshaper GUI `_ at 0.1%; the result can then be read by the flyingpigeon python scripts.
+
+7. To display in the browser, the shapefile was converted to GeoJSON using `ogr2ogr `_:
+
+   $ ogr2ogr -overwrite -f GeoJSON output.geojson input.shp
+
+
+Here is the resulting file containing region-level and country-level areas:
+
+.. image:: ../pics/Europe_merged_regions.png
+   :alt: alternate text
+   :align: center
+
+
diff --git a/docs/source/processes.rst b/docs/source/processes.rst
index 0ac8313b..7a865bf3 100644
--- a/docs/source/processes.rst
+++ b/docs/source/processes.rst
@@ -13,3 +13,35 @@ Say Hello
 .. autoprocess:: flyingpigeon.processes.wps_say_hello.SayHello
    :docstring:
    :skiplines: 1
+
+Subset processes
+----------------
+
+.. autoprocess:: flyingpigeon.processes.wps_pointinspection.PointinspectionProcess
+   :docstring:
+   :skiplines: 1
+
+.. autoprocess:: flyingpigeon.processes.wps_subset.SubsetProcess
+   :docstring:
+   :skiplines: 1
+
+.. autoprocess:: flyingpigeon.processes.wps_subset_continents.SubsetcontinentProcess
+   :docstring:
+   :skiplines: 1
+
+.. autoprocess:: flyingpigeon.processes.wps_subset_countries.SubsetcountryProcess
+   :docstring:
+   :skiplines: 1
+
+.. autoprocess:: flyingpigeon.processes.wps_subset_bbox.SubsetBboxProcess
+   :docstring:
+   :skiplines: 1
diff --git a/docs/source/user_guide.rst b/docs/source/user_guide.rst
new file mode 100644
index 00000000..c6362dbe
--- /dev/null
+++ b/docs/source/user_guide.rst
@@ -0,0 +1,77 @@
+.. _user_guide:
+
+User Guidelines
+===============
+
+.. contents::
+   :local:
+   :depth: 2
+
+Placeholder for tutorials on how to use the processes of Flyingpigeon.
+
+
+Command line
+------------
+
+With birdy (reference to birdy).
+
+Python call
+-----------
+
+Example of a Python call.
+
+
+Phoenix GUI
+-----------
+
+Screenshot with Phoenix.
+
+
+
+Process descriptions
+--------------------
+
+Following is a detailed description of the available processes.
+
+Subset processes
+................
+
+Generates a polygon subset of input netCDF files. Based on an ocgis call, several pre-defined polygons (e.g. world countries) can be used to subset the input netCDF files.
+
+Method
+......
+
+The integrated ocgis library performs the subsetting.
+
+
+Process identifiers
+...................
+
+ * **subset_continents**
+   subsets continents
+ * **subset_countries**
+   subsets countries
+ * **pointinspection**
+   extracts time series at the given coordinates
+
+Input parameters
+................
+
+**Polygons**
+  Abbreviation of the appropriate polygon.
+
+**Mosaic**
+  If this checkbox is enabled and multiple polygons are selected, the polygons are stitched together into one geometry (e.g. the shapes of Germany and France as one polygon); otherwise each polygon is written to a separate output file.
+
+Shapefile optimization
+......................
+
+To optimize the subset process, the appropriate shapefiles are prepared as follows:
+
+.. toctree::
+   :maxdepth: 1
+
+   preparation
diff --git a/environment.yml b/environment.yml
index 68f88856..e5b8a349 100644
--- a/environment.yml
+++ b/environment.yml
@@ -9,6 +9,7 @@ dependencies:
 - click
 - psutil
 # - eggshell
+##############
 # analytic
 - python=3
 - numpy
diff --git a/flyingpigeon/default.cfg b/flyingpigeon/default.cfg
index f3689e61..5485daba 100644
--- a/flyingpigeon/default.cfg
+++ b/flyingpigeon/default.cfg
@@ -1,6 +1,6 @@
 [metadata:main]
 identification_title = Flyingpigeon
-identification_abstract = A Web Processing Service for Climate Data Analysis.
+identification_abstract = Testbed for new process development.
 identification_keywords = PyWPS, WPS, OGC, processing, birdhouse, flyingpigeon, demo
 identification_keywords_type = theme
 provider_name = Flyingpigeon
@@ -15,6 +15,6 @@
 maxprocesses = 10
 parallelprocesses = 2
 [logging]
-level = INFO
+level = DEBUG
 file = flyingpigeon.log
 format = [%(asctime)s] [%(levelname)s] line=%(lineno)s module=%(module)s %(message)s
diff --git a/flyingpigeon/handler_common.py b/flyingpigeon/handler_common.py
index f4e051c1..893d0aeb 100644
--- a/flyingpigeon/handler_common.py
+++ b/flyingpigeon/handler_common.py
@@ -13,7 +13,7 @@
 import netCDF4
 from shapely.geometry import shape
 
-from eggshell.nc.nc_utils import guess_main_variables, opendap_or_download
+from eggshell.nc.nc_utils import get_variable, opendap_or_download
 
 json_format = get_format('JSON')
@@ -99,7 +99,7 @@ def wfs_common(request, response, mode, spatial_mode='wfs'):
     output_files = []
     output_urls = []
     mv_dir = tempfile.mkdtemp(dir=outputpath)
-    # os.chmod(mv_dir, 0755)  # raise an error SyntaxError: invalid token
+    os.chmod(mv_dir, 0o755)
 
     for one_file in list_of_files:
         file_name = os.path.basename(one_file)
@@ -110,7 +110,7 @@ def wfs_common(request, response, mode, spatial_mode='wfs'):
         ocgis.env.DIR_OUTPUT = tempfile.mkdtemp(dir=os.getcwd())
         ocgis.env.OVERWRITE = True
         nc = netCDF4.Dataset(one_file, 'r')
-        var_names = guess_main_variables(nc)
+        var_names = get_variable(nc)
         nc.close()
         rd = ocgis.RequestDataset(one_file, var_names)
         for i, one_geom in enumerate(geom):
diff --git a/flyingpigeon/processes/__init__.py b/flyingpigeon/processes/__init__.py
index 9ce844d1..89690ce0 100644
--- a/flyingpigeon/processes/__init__.py
+++ b/flyingpigeon/processes/__init__.py
@@ -3,6 +3,8 @@
 from .wps_subset_bbox import SubsetBboxProcess
 from .wps_subset_continents import SubsetcontinentProcess
 from .wps_subset_countries import SubsetcountryProcess
+from .wps_pointinspection import PointinspectionProcess
+from .wps_subset_WFS import SubsetWFSProcess
 
 processes = [
     # SayHello(),
@@ -10,4 +12,6 @@
     SubsetBboxProcess(),
     SubsetcontinentProcess(),
     SubsetcountryProcess(),
+    PointinspectionProcess(),
+    SubsetWFSProcess()
 ]
diff --git a/flyingpigeon/processes/wps_pointinspection.py b/flyingpigeon/processes/wps_pointinspection.py
new file mode 100644
index 00000000..33816710
--- /dev/null
+++ b/flyingpigeon/processes/wps_pointinspection.py
@@ -0,0 +1,131 @@
+import logging
+
+from numpy import savetxt, column_stack
+from pywps import ComplexInput, ComplexOutput
+from pywps import Format
+from pywps import LiteralInput
+from pywps import Process
+from pywps.app.Common import Metadata
+from shapely.geometry import Point
+
+from eggshell.nc.ocg_utils import call
+from eggshell.nc.nc_utils import sort_by_filename, get_values, get_time
+
+from eggshell.utils import archive, extract_archive
+from eggshell.utils import rename_complexinputs
+
+
+LOGGER = logging.getLogger("PYWPS")
+
+
+class PointinspectionProcess(Process):
+    """
+    TODO: optionally provide point list as file (csv, geojson) and WFS service
+    """
+
+    def __init__(self):
+        inputs = [
+            ComplexInput('resource', 'Resource',
+                         abstract='NetCDF files or archive (tar/zip) containing NetCDF files.',
+                         min_occurs=1,
+                         max_occurs=1000,
+                         supported_formats=[
+                             Format('application/x-netcdf'),
+                             Format('application/x-tar'),
+                             Format('application/zip'),
+                         ]),
+
+            LiteralInput("coords", "Coordinates",
+                         abstract="Comma-separated tuple of WGS84 lon, lat decimal coordinates "
+                                  "(e.g. 
2.356138, 48.846450).", + # noqa + default="2.356138, 48.846450", + data_type='string', + min_occurs=1, + max_occurs=100, + ), + ] + outputs = [ + ComplexOutput('tarout', 'Subsets', + abstract="Tar archive containing one CSV file per input file, " + "each one storing time series column-wise for all point coordinates.", + as_reference=True, + supported_formats=[Format('application/x-tar')] + ), + # ComplexOutput('output_log', 'Logging information', + # abstract="Collected logs during process run.", + # as_reference=True, + # supported_formats=[Format('text/plain')] + # ), + ] + + super(PointinspectionProcess, self).__init__( + self._handler, + identifier="pointinspection", + title="Point Inspection", + abstract='Extract the timeseries at the given coordinates.', + version="0.10", + metadata=[ + Metadata('LSCE', 'http://www.lsce.ipsl.fr/en/index.php'), + Metadata('Doc', 'http://flyingpigeon.readthedocs.io/en/latest/'), + ], + inputs=inputs, + outputs=outputs, + status_supported=True, + store_supported=True, + ) + + def _handler(self, request, response): + # init_process_logger('log.txt') + # response.outputs['output_log'].file = 'log.txt' + + ncs = extract_archive( + resources=rename_complexinputs(request.inputs['resource'])) + LOGGER.info('ncs: {}'.format(ncs)) + + coords = [] + for coord in request.inputs['coords']: + coords.append(coord.data) + + LOGGER.info('coords {}'.format(coords)) + filenames = [] + nc_exp = sort_by_filename(ncs, historical_concatination=True) + + for key in nc_exp.keys(): + try: + LOGGER.info('start calculation for {}'.format(key)) + ncs = nc_exp[key] + times = get_time(ncs) + concat_vals = times + header = 'date_time' + filename = '{}.csv'.format(key) + filenames.append(filename) + + for p in coords: + try: + response.update_status('processing point: {}'.format(p), 20) + # define the point: + p = p.split(',') + point = Point(float(p[0]), float(p[1])) + + # get the values + timeseries = call(resource=ncs, geom=point, select_nearest=True) + vals = get_values(timeseries) + + # concatenation of values + header = header + ',{}-{}'.format(p[0], p[1]) + concat_vals = column_stack([concat_vals, vals]) + except Exception as e: + LOGGER.debug('failed for point {} {}'.format(p, e)) + response.update_status('*** all points processed for {0} ****'.format(key), 50) + + # TODO: Ascertain whether this 'savetxt' is a valid command without string formatting argument: '%s' + savetxt(filename, concat_vals, fmt='%s', delimiter=',', header=header) + except Exception as ex: + LOGGER.debug('failed for {}: {}'.format(key, str(ex))) + + # set the outputs + response.update_status('*** creating output tar archive ****', 90) + tarout_file = archive(filenames, output_dir=self.workdir) + response.outputs['tarout'].file = tarout_file + return response diff --git a/flyingpigeon/processes/wps_subset_WFS.py b/flyingpigeon/processes/wps_subset_WFS.py new file mode 100644 index 00000000..6484f81f --- /dev/null +++ b/flyingpigeon/processes/wps_subset_WFS.py @@ -0,0 +1,83 @@ +import traceback +from urllib.parse import urlparse +import logging +# TODO handler_common to eggshell +from flyingpigeon.handler_common import wfs_common +from eggshell.nc.nc_utils import CookieNetCDFTransfer +from pywps import Process, LiteralInput, ComplexOutput, get_format + +LOGGER = logging.getLogger("PYWPS") + +json_format = get_format('JSON') + + +class SubsetWFSProcess(Process): + """Subset a NetCDF file using WFS geometry.""" + + def __init__(self): + inputs = [ + LiteralInput('resource', + 'NetCDF resource', + 
abstract='NetCDF files, can be OPeNDAP URLs.',
+                         data_type='string',
+                         max_occurs=1000),
+            LiteralInput('typename',
+                         'TypeName',
+                         abstract='Name of the layer in GeoServer.',
+                         data_type='string'),
+            LiteralInput('featureids',
+                         'Feature Ids',
+                         abstract='fid(s) of the feature in the layer.',
+                         data_type='string',
+                         max_occurs=1000),
+            LiteralInput('geoserver',
+                         'Geoserver',
+                         abstract=('Typically of the form '
+                                   'http://host:port/geoserver/wfs'),
+                         data_type='string',
+                         min_occurs=0),
+            LiteralInput('mosaic',
+                         'Union of Feature Ids',
+                         abstract=('If True, selected regions will be '
+                                   'merged into a single geometry.'),
+                         data_type='boolean',
+                         min_occurs=0,
+                         default=False),
+            LiteralInput('variable',
+                         'Variable',
+                         abstract=('Name of the variable in the NetCDF file. '
+                                   'Will be guessed if not provided.'),
+                         data_type='string',
+                         min_occurs=0)]
+
+        outputs = [
+            ComplexOutput('output',
+                          'JSON file with link to NetCDF outputs',
+                          abstract='JSON file with link to NetCDF outputs.',
+                          as_reference=True,
+                          supported_formats=[json_format])]
+
+        super(SubsetWFSProcess, self).__init__(
+            self._handler,
+            identifier='subset_WFS',
+            title='Subset WFS',
+            version='0.1',
+            abstract=('Return the data for which grid cells intersect the '
+                      'selected polygon for each input dataset.'),
+            inputs=inputs,
+            outputs=outputs,
+            status_supported=True,
+            store_supported=True,
+        )
+
+    def _handler(self, request, response):
+        try:
+            opendap_hostnames = [
+                urlparse(r.data).hostname for r in request.inputs['resource']]
+            with CookieNetCDFTransfer(request, opendap_hostnames):
+                result = wfs_common(request, response, mode='subsetter')
+            return result
+        except Exception as ex:
+            msg = 'Connection to OPeNDAP failed: {}'.format(ex)
+            LOGGER.exception(msg)
+            raise Exception(traceback.format_exc())
diff --git a/flyingpigeon/processes/wps_subset_continents.py b/flyingpigeon/processes/wps_subset_continents.py
index 39d0aab2..c65786b1 100644
--- a/flyingpigeon/processes/wps_subset_continents.py
+++ b/flyingpigeon/processes/wps_subset_continents.py
@@ -2,6 +2,7 @@
 
 from pywps import ComplexInput, ComplexOutput, Format, LiteralInput, Process
 from pywps.app.Common import Metadata
+from tempfile import mkstemp
 
 from flyingpigeon.subset import _CONTINENTS_
 from flyingpigeon.subset import clipping
@@ -59,11 +60,11 @@ def __init__(self):
                           supported_formats=[Format('application/x-netcdf')]
                           ),
 
-            ComplexOutput('output_log', 'Logging information',
-                          abstract="Collected logs during process run.",
-                          as_reference=True,
-                          supported_formats=[Format('text/plain')]
-                          )
+            # ComplexOutput('output_log', 'Logging information',
+            #               abstract="Collected logs during process run.",
+            #               as_reference=True,
+            #               supported_formats=[Format('text/plain')]
+            #               )
         ]
 
         super(SubsetcontinentProcess, self).__init__(
@@ -83,14 +84,14 @@ def __init__(self):
         )
 
     def _handler(self, request, response):
-        # init_process_logger('log.txt')
-        response.outputs['output_log'].file = 'log.txt'
+        # init_process_logger('log.txt') mkstemp(suffix='.log', prefix='tmp', dir=self.workdir, text=True)
+        # response.outputs['output_log'].file = mkstemp(suffix='.log', prefix='tmp', dir=self.workdir, text=True)
 
         # input files
         LOGGER.debug("url={}, mime_type={}".format(request.inputs['resource'][0].url,
                                                    request.inputs['resource'][0].data_format.mime_type))
         ncs = extract_archive(
-            resource=rename_complexinputs(request.inputs['resource']))
+            resources=rename_complexinputs(request.inputs['resource']))
         #     mime_type=request.inputs['resource'][0].data_format.mime_type)
 
         # mosaic option
         # TODO: fix 
defaults in pywps 4.x @@ -115,7 +116,7 @@ def _handler(self, request, response): mosaic=mosaic, spatial_wrapping='wrap', # variable=variable, - # dir_output=os.path.abspath(os.curdir), + dir_output=self.workdir, # dimension_map=dimension_map, ) LOGGER.info('results %s' % results) diff --git a/flyingpigeon/processes/wps_subset_countries.py b/flyingpigeon/processes/wps_subset_countries.py index c54de23f..3828c6fd 100644 --- a/flyingpigeon/processes/wps_subset_countries.py +++ b/flyingpigeon/processes/wps_subset_countries.py @@ -63,11 +63,11 @@ def __init__(self): supported_formats=[Format('application/x-netcdf')] ), - ComplexOutput('output_log', 'Logging information', - abstract="Collected logs during process run.", - as_reference=True, - supported_formats=[Format('text/plain')] - ) + # ComplexOutput('output_log', 'Logging information', + # abstract="Collected logs during process run.", + # as_reference=True, + # supported_formats=[Format('text/plain')] + # ) ] super(SubsetcountryProcess, self).__init__( @@ -89,14 +89,14 @@ def __init__(self): @staticmethod def _handler(request, response): # init_process_logger('log.txt') - response.outputs['output_log'].file = 'log.txt' + # response.outputs['output_log'].file = 'log.txt' # input files LOGGER.debug('url={}, mime_type={}'.format( request.inputs['resource'][0].url, request.inputs['resource'][0].data_format.mime_type)) ncs = extract_archive( - resource=rename_complexinputs(request.inputs['resource'])) + resources=rename_complexinputs(request.inputs['resource'])) # mime_type=request.inputs['resource'][0].data_format.mime_type) # mosaic option # TODO: fix defaults in pywps 4.x diff --git a/flyingpigeon/subset.py b/flyingpigeon/subset.py index e7a7782c..53f1114e 100644 --- a/flyingpigeon/subset.py +++ b/flyingpigeon/subset.py @@ -1,21 +1,19 @@ from tempfile import mkstemp import os -from eggshell.nc.ocg_utils import call -from eggshell.nc.nc_utils import sort_by_filename, guess_main_variables - -import logging -LOGGER = logging.getLogger("PYWPS") - -_PATH = os.path.abspath(os.path.dirname(__file__)) +from eggshell.nc.ocg_utils import call, get_variable +from eggshell.nc.nc_utils import sort_by_filename +from ocgis import env, ShpCabinetIterator, ShpCabinet -def shapefiles_path(): - return os.path.join(data_path(), 'shapefiles') +from eggshell.config import Paths +import flyingpigeon as fp +import logging +LOGGER = logging.getLogger("PYWPS") +paths = Paths(fp) -def data_path(): - return os.path.join(_PATH, 'data') +env.DIR_SHPCABINET = paths.shapefiles def countries(): @@ -130,7 +128,7 @@ def clipping(resource=[], variable=None, dimension_map=None, calc=None, output_f for i, key in enumerate(ncs.keys()): try: # if variable is None: - variable = guess_main_variables(ncs[key]) + variable = get_variable(ncs[key]) LOGGER.info('variable %s detected in resource' % (variable)) if prefix is None: name = key + nameadd @@ -155,7 +153,7 @@ def clipping(resource=[], variable=None, dimension_map=None, calc=None, output_f for key in ncs.keys(): try: # if variable is None: - variable = guess_main_variables(ncs[key]) + variable = get_variable(ncs[key]) LOGGER.info('variable %s detected in resource' % (variable)) if prefix is None: name = key + '_' + polygon.replace(' ', '') @@ -237,10 +235,6 @@ def get_shp_column_values(geom, columnname): returns list: column names """ - from ocgis import env, ShpCabinetIterator - # import ocgis - - env.DIR_SHPCABINET = shapefiles_path() sci = ShpCabinetIterator(geom) vals = [] @@ -258,8 +252,6 @@ def get_ugid(polygons=None, 
geom=None): :returns list: ugids used by ocgis """ - from ocgis import env, ShpCabinetIterator - # from ocgis import env if polygons is None: result = None @@ -267,7 +259,7 @@ def get_ugid(polygons=None, geom=None): if type(polygons) != list: polygons = list([polygons]) - env.DIR_SHPCABINET = shapefiles_path() + # env.DIR_SHPCABINET = shapefiles_path() sc_iter = ShpCabinetIterator(geom) result = [] @@ -289,8 +281,7 @@ def get_ugid(polygons=None, geom=None): if row['properties']['CONTINENT'] == polygon: result.append(row['properties']['UGID']) else: - from ocgis import ShpCabinet - sc = ShpCabinet(shapefiles_path()) + sc = ShpCabinet(paths.shapefiles) LOGGER.debug('geom: %s not found in shape cabinet. Available geoms are: %s ', geom, sc) return result diff --git a/tests/common.py b/tests/common.py index 4aa035d9..7eb7de70 100644 --- a/tests/common.py +++ b/tests/common.py @@ -1,11 +1,63 @@ from pywps import get_ElementMakerForVersion from pywps.app.basic import get_xpath_ns from pywps.tests import WpsClient, WpsTestResponse +import os VERSION = "1.0.0" WPS, OWS = get_ElementMakerForVersion(VERSION) xpath_ns = get_xpath_ns(VERSION) +TESTS_HOME = os.path.abspath(os.path.dirname(__file__)) +# CFG_FILE = os.path.join(TESTS_HOME, 'test.cfg') + +TESTDATA = { + 'cmip5_tasmax_2006_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cmip5', + 'tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200601-200612.nc')), + 'cmip5_tasmax_2007_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cmip5', + 'tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200701-200712.nc')), + 'cmip3_tas_sresb1_da_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cmip3', + 'tas.sresb1.giss_model_e_r.run1.atm.da.nc')), + 'cmip3_tas_sresa2_da_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cmip3', + 'tas.sresa2.miub_echo_g.run1.atm.da.nc')), + 'cmip3_tasmin_sresa2_da_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cmip3', + 'tasmin.sresa2.miub_echo_g.run1.atm.da.nc')), + 'cmip3_tasmax_sresa2_da_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cmip3', + 'tasmax.sresa2.miub_echo_g.run1.atm.da.nc')), + 'cmip3_pr_sresa2_da_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cmip3', + 'pr.sresa2.miub_echo_g.run1.atm.da.nc')), + 'cordex_tasmax_2006_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cordex', + 'tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200602-200612.nc')), + 'cordex_tasmax_2007_nc': "file://{0}".format(os.path.join( + TESTS_HOME, + 'testdata', + 'cordex', + 'tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200701-200712.nc')), +} + class WpsTestClient(WpsClient): diff --git a/tests/test_wps_caps.py b/tests/test_wps_caps.py index f4dd2998..0e6fa159 100644 --- a/tests/test_wps_caps.py +++ b/tests/test_wps_caps.py @@ -13,5 +13,9 @@ def test_wps_caps(): '/wps:Process' '/ows:Identifier') assert sorted(names.split()) == [ - 'subset', 'subset_bbox', 'subset_continents', 'subset_countries' - ] + 'pointinspection', + 'subset', + 'subset_WFS', + 'subset_bbox', + 'subset_continents', + 'subset_countries'] diff --git a/tests/test_wps_fetch.py b/tests/test_wps_fetch.py new file mode 100644 index 00000000..df67a885 --- /dev/null +++ b/tests/test_wps_fetch.py @@ -0,0 +1,20 @@ +import pytest + +from pywps import Service +from pywps.tests import assert_response_success + +from .common import TESTDATA, client_for # , CFG_FILE +try: + from flyingpigeon.processes 
import FetchProcess +except Exception: + pytestmark = pytest.mark.skip + + +def test_wps_fetch(): + client = client_for(Service(processes=[FetchProcess()])) + datainputs = "resource=files@xlink:href={0}".format(TESTDATA['cmip5_tasmax_2006_nc']) + resp = client.get( + service='WPS', request='Execute', version='1.0.0', + identifier='fetch_resources', + datainputs=datainputs) + assert_response_success(resp) diff --git a/tests/test_wps_pointinspection.py b/tests/test_wps_pointinspection.py new file mode 100644 index 00000000..44afdad3 --- /dev/null +++ b/tests/test_wps_pointinspection.py @@ -0,0 +1,25 @@ +from pywps import Service +from pywps.tests import assert_response_success + +from flyingpigeon.processes import PointinspectionProcess +from .common import TESTDATA, client_for # ,CFG_FILE +import os + + +datainputs_fmt = ( + "resource=files@xlink:href={0};" + "coords={1};" +) + + +def test_wps_pointinspection(): + client = client_for( + Service(processes=[PointinspectionProcess()])) # ,cfgfiles=CFG_FILE + datainputs = datainputs_fmt.format( + TESTDATA['cmip5_tasmax_2006_nc'], + "2.356138, 48.846450",) + resp = client.get( + service='wps', request='execute', version='1.0.0', + identifier='pointinspection', + datainputs=datainputs) + assert_response_success(resp) diff --git a/tests/test_wps_subset.py b/tests/test_wps_subset.py new file mode 100644 index 00000000..d22b0ed2 --- /dev/null +++ b/tests/test_wps_subset.py @@ -0,0 +1,282 @@ +import os +import sys +import unittest +import configparser +import json + +from pywps import Service +from pywps.tests import client_for +import numpy.ma as ma +import netCDF4 + +try: + from tests import test_wps_utils +except ImportError: + import test_wps_utils + + +class TestSubset(unittest.TestCase): + + def setUp(self): + self.config = configparser.RawConfigParser() + if os.path.isfile('configtests.cfg'): + self.config.read('configtests.cfg') + else: + self.config.read('flyingpigeon/tests/configtests.cfg') + sys.path.append('/'.join(os.getcwd().split('/')[:-1])) + from flyingpigeon.processes import SubsetProcess + self.client = client_for(Service(processes=[SubsetProcess()])) + + def test_getcapabilities(self): + config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + test_wps_utils.get_capabilities(config_dict['wps_host'], self.client) + + def test_getcapabilities_repeat(self): + config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + + for i in range(10): + test_wps_utils.get_capabilities(config_dict['wps_host'], + self.client) + + def test_process_exists(self): + config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + + wps = test_wps_utils.get_capabilities(config_dict['wps_host'], + self.client) + self.assertIn('subset', [x.identifier for x in wps.processes]) + + def test_describeprocess(self): + config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + + process = test_wps_utils.describe_process( + 'subset', config_dict['wps_host'], self.client) + self.assertIn('resource', + [x.identifier for x in process.dataInputs]) + self.assertIn('initial_datetime', + [x.identifier for x in process.dataInputs]) + self.assertIn('output', + [x.identifier for x in process.processOutputs]) + + def test_subset_wfs_opendap_01(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 
'fileserver_path', 'geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... + resource = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + + execution = test_wps_utils.execute( + 'subset', inputs=[ + ('resource', resource), + ('typename', 'testgeom:montreal_circles'), + ('featureids', 'montreal_circles.43'), + ('geoserver', config_dict['geoserver'])], + wps_host=config_dict['wps_host'], wps_client=self.client) + + while execution.status == 'ProcessAccepted': + execution.checkStatus(sleepSecs=1) + output_json = execution.processOutputs[0].reference + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + def test_subset_wfs_opendap_01_default_geoserver(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 'fileserver_path', 'test_default_geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... 
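+        # Same round trip as the previous test, but without the optional
+        # 'geoserver' input, so the process must fall back to its default
+        # GeoServer.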
+ resource = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + + execution = test_wps_utils.execute( + 'subset', inputs=[ + ('resource', resource), + ('typename', 'testgeom:montreal_circles'), + ('featureids', 'montreal_circles.43')], + wps_host=config_dict['wps_host'], wps_client=self.client) + + while execution.status == 'ProcessAccepted': + execution.checkStatus(sleepSecs=1) + output_json = execution.processOutputs[0].reference + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + def test_subset_wfs_fileserver_01(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 'fileserver_path', 'geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... 
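+        # Same subset, but the input file is read from the plain HTTP
+        # fileserver path instead of OPeNDAP.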
+ resource = os.path.join( + config_dict['fileserver_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + + execution = test_wps_utils.execute( + 'subset', inputs=[ + ('resource', resource), + ('typename', 'testgeom:montreal_circles'), + ('featureids', 'montreal_circles.43'), + ('geoserver', config_dict['geoserver'])], + wps_host=config_dict['wps_host'], wps_client=self.client) + + while execution.status == 'ProcessAccepted': + execution.checkStatus(sleepSecs=1) + output_json = execution.processOutputs[0].reference + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + def test_subset_wfs_opendap_multi_inputs_01(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 'fileserver_path', 'geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... 
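+        # Two resources combined with two feature ids: the JSON output is
+        # expected to reference 2 x 2 = 4 NetCDF files (asserted below).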
+ resource1 = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + resource2 = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_bounds_ref180.nc') + + execution = test_wps_utils.execute( + 'subset', inputs=[ + ('resource', resource1), + ('resource', resource2), + ('typename', 'testgeom:montreal_circles'), + ('featureids', 'montreal_circles.43'), + ('featureids', 'montreal_circles.45'), + ('geoserver', config_dict['geoserver'])], + wps_host=config_dict['wps_host'], wps_client=self.client) + + while execution.status == 'ProcessAccepted': + execution.checkStatus(sleepSecs=1) + output_json = execution.processOutputs[0].reference + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + self.assertEqual(len(json_data), 4) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + +suite = unittest.TestLoader().loadTestsFromTestCase(TestSubset) + +if __name__ == '__main__': + run_result = unittest.TextTestRunner(verbosity=2).run(suite) + sys.exit(not run_result.wasSuccessful()) diff --git a/tests/test_wps_subset_WFS.py b/tests/test_wps_subset_WFS.py new file mode 100644 index 00000000..2a155548 --- /dev/null +++ b/tests/test_wps_subset_WFS.py @@ -0,0 +1,309 @@ +import os +import sys +import unittest +import configparser +import json + +from pywps import Service +from pywps.tests import client_for +import numpy.ma as ma +import netCDF4 + +try: + from tests import test_wps_utils +except ImportError: + import test_wps_utils + + +class TestSubsetWFS(unittest.TestCase): + + def setUp(self): + self.config = configparser.RawConfigParser() + if os.path.isfile('configtests.cfg'): + self.config.read('configtests.cfg') + else: + self.config.read('flyingpigeon/tests/configtests.cfg') + sys.path.append('/'.join(os.getcwd().split('/')[:-1])) + from flyingpigeon.processes import SubsetWFSProcess + self.client = client_for(Service(processes=[SubsetWFSProcess()])) + + def test_getcapabilities(self): + config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], + '?service=WPS&request=GetCapabilities&version=1.0.0', + self.client) + self.assertTrue(html_response) + + def test_getcapabilities_repeat(self): + config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + + for i in range(10): + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], + '?service=WPS&request=GetCapabilities&version=1.0.0', + self.client) + self.assertTrue(html_response) + + def test_process_exists(self): + 
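+        # The GetCapabilities document must advertise 'subset_WFS' among
+        # the process identifiers.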
config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], + '?service=WPS&request=GetCapabilities&version=1.0.0', + self.client) + processes = test_wps_utils.parse_getcapabilities(html_response) + self.assertTrue('subset_WFS' in processes) + + def test_describeprocess(self): + config_dict = test_wps_utils.config_is_available( + 'subsetwfs', [], self.config, set_wps_host=True) + + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], + ('?service=WPS&request=DescribeProcess&version=1.0.0&' + 'identifier=subset_WFS'), + self.client) + describe_process = test_wps_utils.parse_describeprocess(html_response) + self.assertTrue('mosaic' in describe_process[0]['inputs']) + self.assertTrue('output' in describe_process[0]['outputs']) + + def test_subset_wfs_opendap_01(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 'fileserver_path', 'geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... + resource = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + + wps_request = ( + '?service=WPS&request=execute&version=1.0.0&' + 'identifier=subset_WFS&DataInputs=resource={0};' + 'typename={1};featureids={2};geoserver={3}').format( + resource, + 'testgeom:montreal_circles', + 'montreal_circles.43', + config_dict['geoserver']) + + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], wps_request, self.client) + outputs = test_wps_utils.parse_execute_response(html_response) + if outputs['status'] == 'ProcessFailed': + raise RuntimeError(wps_request) + output_json = outputs['outputs']['output'] + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + def test_subset_wfs_opendap_01_default_geoserver(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 'fileserver_path', 'geoserver', + 'test_default_geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... 
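+        # Raw-request variant of the default-geoserver case; note that the
+        # request below still passes the 'geoserver' input explicitly.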
+ resource = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + + wps_request = ( + '?service=WPS&request=execute&version=1.0.0&' + 'identifier=subset_WFS&DataInputs=resource={0};' + 'typename={1};featureids={2};geoserver={3}').format( + resource, + 'testgeom:montreal_circles', + 'montreal_circles.43', + config_dict['geoserver']) + + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], wps_request, self.client) + outputs = test_wps_utils.parse_execute_response(html_response) + if outputs['status'] == 'ProcessFailed': + raise RuntimeError(wps_request) + output_json = outputs['outputs']['output'] + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + def test_subset_wfs_fileserver_01(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 'fileserver_path', 'geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... 
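+        # As above, but the resource is read from the plain HTTP
+        # fileserver path instead of OPeNDAP.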
+ resource = os.path.join( + config_dict['fileserver_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + + wps_request = ( + '?service=WPS&request=execute&version=1.0.0&' + 'identifier=subset_WFS&DataInputs=resource={0};' + 'typename={1};featureids={2};geoserver={3}').format( + resource, + 'testgeom:montreal_circles', + 'montreal_circles.43', + config_dict['geoserver']) + + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], wps_request, self.client) + outputs = test_wps_utils.parse_execute_response(html_response) + if outputs['status'] == 'ProcessFailed': + raise RuntimeError(wps_request) + output_json = outputs['outputs']['output'] + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + def test_subset_wfs_opendap_multi_inputs_01(self): + # pairing0.1.1-montreal_circles0.1.1 + config_dict = test_wps_utils.config_is_available( + 'pairing0.1.1-montreal_circles0.1.1', + ['opendap_path', 'fileserver_path', 'geoserver'], + self.config, set_wps_host=True) + + # let's start with one example... 
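+        # Two resources and two feature ids in one raw request; the JSON
+        # output should reference four NetCDF files (asserted below).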
+ resource1 = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_nobounds_ref180.nc') + resource2 = os.path.join( + config_dict['opendap_path'], + 'pairing_day_global-reg-grid_360_720_bounds_ref180.nc') + + wps_request = ( + '?service=WPS&request=execute&version=1.0.0&' + 'identifier=subset_WFS&DataInputs=resource={0};resource={1};' + 'typename={2};featureids={3};featureids={4};geoserver={5}').format( + resource1, resource2, + 'testgeom:montreal_circles', + 'montreal_circles.43', 'montreal_circles.45', + config_dict['geoserver']) + + html_response = test_wps_utils.wps_response( + config_dict['wps_host'], wps_request, self.client) + outputs = test_wps_utils.parse_execute_response(html_response) + if outputs['status'] == 'ProcessFailed': + raise RuntimeError(wps_request) + output_json = outputs['outputs']['output'] + if output_json[:7] == 'file://': + output_json = output_json[7:] + f1 = open(output_json, 'r') + json_data = json.loads(f1.read()) + f1.close() + else: + json_data = json.loads(test_wps_utils.get_wps_xlink(output_json)) + self.assertEqual(len(json_data), 4) + output_netcdf = json_data[0] + if output_netcdf[:7] == 'file://': + tmp_output_netcdf = output_netcdf[7:] + else: + tmp_output_netcdf = '/tmp/testtmp.nc' + f1 = open(tmp_output_netcdf, 'w') + f1.write(test_wps_utils.get_wps_xlink(output_netcdf)) + f1.close() + + nc = netCDF4.Dataset(tmp_output_netcdf, 'r') + nclon = nc.variables['lon'] + nclat = nc.variables['lat'] + ncvar = nc.variables['pairing'] + self.assertEqual(nclon.shape, (21,)) + self.assertEqual(nclat.shape, (21,)) + self.assertEqual(ncvar.shape, (365, 21, 21)) + self.assertTrue(ncvar[0, 0, 0] is ma.masked) + self.assertEqual(ncvar[0, 10, 10], 271213.0) + self.assertEqual(nc.subset_typename, 'testgeom:montreal_circles') + self.assertEqual(nc.subset_featureid, 'montreal_circles.43') + nc.close() + + +suite = unittest.TestLoader().loadTestsFromTestCase(TestSubsetWFS) + +if __name__ == '__main__': + run_result = unittest.TextTestRunner(verbosity=2).run(suite) + sys.exit(not run_result.wasSuccessful()) diff --git a/tests/test_wps_subset_continents.py b/tests/test_wps_subset_continents.py new file mode 100644 index 00000000..e6103565 --- /dev/null +++ b/tests/test_wps_subset_continents.py @@ -0,0 +1,37 @@ +from pywps import Service +from pywps.tests import assert_response_success + +from flyingpigeon.processes import SubsetcontinentProcess +from tests.common import TESTDATA, client_for, get_output +import os + + +datainputs_fmt = ( + "resource=files@xlink:href={0};" + "region={1};" + "mosaic={2};" +) + + +def test_wps_subset_continents(): + client = client_for( + Service(processes=[SubsetcontinentProcess()])) # , cfgfiles=CFG_FILE + + datainputs = datainputs_fmt.format( + TESTDATA['cmip5_tasmax_2006_nc'], + 'Africa', + "True",) + + resp = client.get( + service='wps', request='execute', version='1.0.0', + identifier='subset_continents', + datainputs=datainputs) + + assert_response_success(resp) + + # Check output file size is smaller than input. 
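+    # Only the presence of the 'output' reference is asserted; the size
+    # comparison itself is kept below, commented out.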
+ out = get_output(resp.xml) + assert 'output' in out.keys() + # ins = os.path.getsize(TESTDATA['cmip5_tasmax_2006_nc'][6:]) + # outs = os.path.getsize(out['ncout'][6:]) + # assert (outs < ins) diff --git a/tests/test_wps_subset_countries.py b/tests/test_wps_subset_countries.py new file mode 100644 index 00000000..6e7b57f3 --- /dev/null +++ b/tests/test_wps_subset_countries.py @@ -0,0 +1,37 @@ +from pywps import Service +from pywps.tests import assert_response_success + +from flyingpigeon.processes import SubsetcountryProcess +from tests.common import TESTDATA, client_for, get_output # ,cfgfiles=CFG_FILE +import os + + +datainputs_fmt = ( + "resource=files@xlink:href={0};" + "region={1};" + "mosaic={2};" +) + + +def test_wps_subset_countries(): + client = client_for( + Service(processes=[SubsetcountryProcess()])) # , cfgfiles=CFG_FILE + + datainputs = datainputs_fmt.format( + TESTDATA['cmip5_tasmax_2006_nc'], + 'CAN', + "True",) + + resp = client.get( + service='wps', request='execute', version='1.0.0', + identifier='subset_countries', + datainputs=datainputs) + + assert_response_success(resp) + + # Check output file size is smaller than input. + out = get_output(resp.xml) + assert 'output' in out.keys() + # ins = os.path.getsize(TESTDATA['cmip5_tasmax_2006_nc'][6:]) + # outs = os.path.getsize(out['ncout'][6:]) + # assert (outs < ins) diff --git a/tests/test_wps_utils.py b/tests/test_wps_utils.py new file mode 100644 index 00000000..a68b608a --- /dev/null +++ b/tests/test_wps_utils.py @@ -0,0 +1,405 @@ +""" +=============== +WPS Tests Utils +=============== + +Functions: + + * :func:`xml_children_as_dict` - create dictionary from xml element children. + * :func:`xml_attrib_nsmap` - replace nsmap values with their key. + * :func:`parse_getcapabilities` - parse wps GetCapabilities response. + * :func:`parse_describeprocess` - parse wps DescribeProcess response. + * :func:`parse_execute_response` - parse wps execute response. + * :func:`wps_response` - get xml document response from a WPS. + * :func:`get_wps_xlink` - read a document from an url. + * :func:`config_is_available` - skip tests when config is unavailable. 
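+ * :func:`get_capabilities` - get WPS capabilities via owslib or a test client.
+ * :func:`describe_process` - describe a WPS process via owslib or a test client.
+ * :func:`execute` - execute a WPS process via owslib or a test client.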
+ +""" + +import unittest +from lxml import etree +try: + from urllib.request import urlopen, Request +except ImportError: + from urllib2 import urlopen, Request + +from owslib.wps import WebProcessingService, WPSReader, WPSExecution + + +def get_capabilities(wps_host=None, wps_client=None, version='1.0.0'): + if wps_host: + return WebProcessingService(wps_host, version) + else: + response = wps_client.get( + '?service=WPS&request=GetCapabilities&version={0}'.format(version)) + wps_reader = WPSReader() + element = wps_reader.readFromString(response.get_data()) + wps = WebProcessingService(None, version, skip_caps=True) + wps._parseCapabilitiesMetadata(element) + return wps + + +def describe_process(identifier, wps_host=None, wps_client=None, + version='1.0.0'): + if wps_host: + wps = WebProcessingService(wps_host, version) + return wps.describeprocess(identifier) + else: + response = wps_client.get( + ('?service=WPS&request=DescribeProcess&version={0}&' + 'identifier={1}').format(version, identifier)) + wps_reader = WPSReader() + element = wps_reader.readFromString(response.get_data()) + wps = WebProcessingService(None, version, skip_caps=True) + return wps._parseProcessMetadata(element) + + +def execute(identifier, inputs=[], wps_host=None, wps_client=None, + version='1.0.0'): + if wps_host: + wps = WebProcessingService(wps_host, version) + return wps.execute(identifier, inputs=inputs) + else: + y = '' + for data_input in inputs: + y += '{0}={1};'.format(data_input[0], data_input[1]) + y = y[:-1] + response = wps_client.get( + ('?service=WPS&request=execute&version={0}&' + 'identifier={1}&DataInputs={2}').format(version, identifier, y)) + wps_reader = WPSReader() + element = wps_reader.readFromString(response.get_data()) + execution = WPSExecution() + execution._parseExecuteResponse(element) + return execution + +# Most of these xml parsing functions should be obsolete with the use of +# owslib above... + + +def xml_children_as_dict(element): + """Create dictionary from xml element children. + + Parameters + ---------- + element : lxml.etree._Element + + Returns + ------- + out : dict + + Notes + ----- + An additional replacement of the nsmap values with their key is done + on the tag of each child. + + """ + + d = {} + for child in element: + child_tag = child.tag + for key, nsvalue in child.nsmap.items(): + child_tag = child_tag.replace('{' + nsvalue + '}', key + ':') + if child_tag in d: + d[child_tag].append(child) + else: + d[child_tag] = [child] + return d + + +def xml_attrib_nsmap(element): + """Replace nsmap values with their key in element attributes. + + Parameters + ---------- + element : lxml.etree._Element + + Returns + ------- + out : dict + element.attrib with the replaced nsmap. + + """ + + d = {} + for key, value in element.attrib.items(): + new_key = key + for nskey, nsvalue in element.nsmap.items(): + new_key = new_key.replace('{' + nsvalue + '}', nskey + ':') + d[new_key] = value + return d + + +def parse_getcapabilities(html_response): + """Parse WPS GetCapabilities response. + + Parameters + ---------- + html_response : string + xml document from a GetCapabilities WPS request. + + Returns + ------- + out : list of string + wps:ProcessOfferings -> wps:Process -> ows:Identifier + + """ + + # XML structure: + # wps:Capabilities + # ows:ServiceIdentification + # [...] + # ows:ServiceProvider + # [...] + # ows:OperationsMetadata + # [...] + # wps:ProcessOfferings + # wps:Process (list) + # ows:Identifier (text) + # ows:Title (text) + # wps:Languages + # [...] 
+    processes = []
+    capabilities = xml_children_as_dict(etree.fromstring(html_response))
+    process_offerings = xml_children_as_dict(
+        capabilities['wps:ProcessOfferings'][0])
+    for process_element in process_offerings['wps:Process']:
+        process = xml_children_as_dict(process_element)
+        processes.append(process['ows:Identifier'][0].text)
+    return sorted(processes)
+
+
+def parse_describeprocess(html_response):
+    """Parse WPS DescribeProcess response.
+
+    Parameters
+    ----------
+    html_response : string
+        xml document from a DescribeProcess WPS request.
+
+    Returns
+    -------
+    out : list of dict
+        'identifier' : ProcessDescription -> ows:Identifier
+        'inputs' : ProcessDescription -> DataInputs -> Input ->
+            ows:Identifier
+        'outputs' : ProcessDescription -> ProcessOutputs -> Output ->
+            ows:Identifier
+
+    """
+
+    # XML structure:
+    # wps:ProcessDescriptions
+    #     ProcessDescription (list)
+    #         ows:Identifier (text)
+    #         ows:Title (text)
+    #         DataInputs (optional)
+    #             Input (list)
+    #                 ows:Identifier (text)
+    #                 ows:Title (text)
+    #                 LiteralData (xor)
+    #                     ows:DataType
+    #                 [...]
+    #         ProcessOutputs
+    #             Output (list)
+    #                 ows:Identifier (text)
+    #                 ows:Title (text)
+    #                 LiteralOutput (xor)
+    #                     ows:DataType
+    #                 [...]
+    processes = []
+    process_descriptions = xml_children_as_dict(
+        etree.fromstring(html_response))
+    for process_description_el in process_descriptions['ProcessDescription']:
+        d = {'inputs': [], 'outputs': []}
+        process_description = xml_children_as_dict(process_description_el)
+        d['identifier'] = process_description['ows:Identifier'][0].text
+        if 'DataInputs' in process_description:
+            data_inputs = xml_children_as_dict(
+                process_description['DataInputs'][0])
+            for input_element in data_inputs['Input']:
+                input1 = xml_children_as_dict(input_element)
+                d['inputs'].append(input1['ows:Identifier'][0].text)
+        process_outputs = xml_children_as_dict(
+            process_description['ProcessOutputs'][0])
+        for output_element in process_outputs['Output']:
+            output1 = xml_children_as_dict(output_element)
+            d['outputs'].append(output1['ows:Identifier'][0].text)
+        processes.append(d)
+    return processes
+
+
+def parse_execute_response(html_response):
+    """Parse WPS execute response.
+
+    Parameters
+    ----------
+    html_response : string
+        xml document from an execute WPS request.
+
+    Returns
+    -------
+    out : dict
+        'identifier' : wps:Process -> ows:Identifier
+        'status' : wps:Status -> {tag of the status, without nsmap}
+        'outputs' : wps:ProcessOutputs -> wps:Output ->
+            {ows:Identifier : wps:Data}
+
+    """
+
+    # XML structure:
+    # wps:ExecuteResponse
+    #     wps:Process
+    #         ows:Identifier (text)
+    #         ows:Title (text)
+    #     wps:Status
+    #         creationTime (attrib)
+    #         wps:ProcessSucceeded (xor, text)
+    #         wps:ProcessFailed (xor, text)
+    #         wps:ProcessAccepted (xor, text)
+    #         wps:ProcessStarted (xor, text)
+    #             percentCompleted (attrib)
+    #     wps:ProcessOutputs
+    #         wps:Output (list)
+    #             ows:Identifier (text)
+    #             ows:Title (text)
+    #             wps:Data (xor)
+    #                 wps:LiteralData (xor)
+    #                 [...]
+    #             wps:Reference (xor)
+    #                 xlink:href (attrib)
+    #                 mimeType (attrib)
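+    #
+    # A successful execute response thus parses to something like (all
+    # values illustrative):
+    #     {'identifier': 'subset_countries',
+    #      'creationTime': '2018-01-01T00:00:00Z',
+    #      'status': 'ProcessSucceeded',
+    #      'outputs': {'output': 'http://.../output.nc'}}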
+    d = {'outputs': {}}
+    execute_response_element = etree.fromstring(html_response)
+    execute_response_attrib = xml_attrib_nsmap(execute_response_element)
+    if 'statusLocation' in execute_response_attrib:
+        d['statusLocation'] = execute_response_attrib['statusLocation']
+    execute_response = xml_children_as_dict(execute_response_element)
+
+    process = xml_children_as_dict(execute_response['wps:Process'][0])
+    d['identifier'] = process['ows:Identifier'][0].text
+
+    status_element = execute_response['wps:Status'][0]
+    status_attrib = xml_attrib_nsmap(status_element)
+    status = xml_children_as_dict(status_element)
+    d['creationTime'] = status_attrib['creationTime']
+    if 'wps:ProcessSucceeded' in status:
+        d['status'] = 'ProcessSucceeded'
+    elif 'wps:ProcessFailed' in status:
+        d['status'] = 'ProcessFailed'
+        return d
+    elif 'wps:ProcessAccepted' in status:
+        d['status'] = 'ProcessAccepted'
+        return d
+    elif 'wps:ProcessStarted' in status:
+        process_started_element = status['wps:ProcessStarted'][0]
+        process_started_attrib = xml_attrib_nsmap(process_started_element)
+        d['status'] = 'ProcessStarted'
+        d['percentCompleted'] = \
+            float(process_started_attrib['percentCompleted'])
+        return d
+    else:
+        raise NotImplementedError()
+
+    process_outputs = xml_children_as_dict(
+        execute_response['wps:ProcessOutputs'][0])
+    for output_element in process_outputs['wps:Output']:
+        output1 = xml_children_as_dict(output_element)
+        identifier = output1['ows:Identifier'][0].text
+        if 'wps:Data' in output1:
+            data1 = xml_children_as_dict(output1['wps:Data'][0])
+            if 'wps:LiteralData' in data1:
+                d['outputs'][identifier] = data1['wps:LiteralData'][0].text
+            else:
+                raise NotImplementedError()
+        elif 'wps:Reference' in output1:
+            reference_element = output1['wps:Reference'][0]
+            reference_attrib = xml_attrib_nsmap(reference_element)
+            d['outputs'][identifier] = reference_attrib['xlink:href']
+        else:
+            raise NotImplementedError()
+    return d
+
+
+def wps_response(wps_host, pywps_request, wps_client=None):
+    """Get xml document response from a WPS.
+
+    Parameters
+    ----------
+    wps_host : string or None
+        url of the WPS, with or without the http:// prefix
+        (e.g. 'localhost:8009/pywps'). If set to None, the given
+        wps_client is used instead (required in that case).
+    pywps_request : string
+        wps request starting with '?service=WPS&request=[...]'
+    wps_client : pywps.tests.WpsClient or None
+        If wps_host is None this will be used to listen to wps requests.
+
+    Returns
+    -------
+    out : string
+        response from the server, which is an xml document if the request
+        is valid and the server is correctly set up.
+
+    """
+
+    if wps_host:
+        wps_host = wps_host.replace('http://', '')
+        url_request = Request(
+            url='http://{0}{1}'.format(wps_host, pywps_request))
+        url_response = urlopen(url_request)
+        return url_response.read()
+    else:
+        resp = wps_client.get(pywps_request)
+        return resp.get_data()
+
+
+def get_wps_xlink(xlink):
+    """Read a document from a url."""
+    url_request = Request(url=xlink)
+    url_response = urlopen(url_request)
+    return url_response.read()
+
+
+def config_is_available(config_section, config_names, config_read,
+                        set_wps_host=False):
+    """Check if a config section & parameters are available for tests.
+
+    Parameters
+    ----------
+    config_section : string
+        section of a cfg file.
+    config_names : list of string
+        names of the parameters to check.
+    config_read : ConfigParser.RawConfigParser
+        parser instance on which the read method has already been called.
+    set_wps_host : bool
+        whether to set a default wps_host in the output config dictionary;
+        if there is already one, it is not overwritten.
+
+    Returns
+    -------
+    out : dict
+        dictionary of parameter:value for all parameters of the given
+        section.
+
+    """
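+
+    # Typical use at the start of an online test (a sketch; the section
+    # name and cfg file are hypothetical):
+    #     config = RawConfigParser()
+    #     config.read('test.cfg')
+    #     section_d = config_is_available('flyingpigeon', ['wps_host'],
+    #                                     config, set_wps_host=True)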
+
+    # A bare string must not be iterated character by character, so wrap
+    # a single name in a list (works on both python 2 and 3, unlike a
+    # hasattr(config_names, '__iter__') check).
+    if isinstance(config_names, str):
+        config_names = [config_names]
+    if config_section not in config_read.sections():
+        raise unittest.SkipTest(
+            "{0} section not defined in config.".format(config_section))
+    section = config_read.items(config_section)
+    section_d = {}
+    for item in section:
+        section_d[item[0]] = item[1]
+    for config_name in config_names:
+        if (config_name not in section_d) or (not section_d[config_name]):
+            raise unittest.SkipTest(
+                "{0} not defined in config.".format(config_name))
+
+    if set_wps_host:
+        if 'wps_host' in section_d:
+            # wps_host might be set, but empty. If that's the case, set
+            # it to None.
+            if not section_d['wps_host']:
+                section_d['wps_host'] = None
+        else:
+            section_d['wps_host'] = None
+
+    return section_d
diff --git a/tests/testdata/cmip3/pr.sresa2.miub_echo_g.run1.atm.da.nc b/tests/testdata/cmip3/pr.sresa2.miub_echo_g.run1.atm.da.nc
new file mode 100644
index 00000000..d66d5673
Binary files /dev/null and b/tests/testdata/cmip3/pr.sresa2.miub_echo_g.run1.atm.da.nc differ
diff --git a/tests/testdata/cmip3/tas.sresa2.miub_echo_g.run1.atm.da.nc b/tests/testdata/cmip3/tas.sresa2.miub_echo_g.run1.atm.da.nc
new file mode 100644
index 00000000..6565e011
Binary files /dev/null and b/tests/testdata/cmip3/tas.sresa2.miub_echo_g.run1.atm.da.nc differ
diff --git a/tests/testdata/cmip3/tas.sresb1.giss_model_e_r.run1.atm.da.nc b/tests/testdata/cmip3/tas.sresb1.giss_model_e_r.run1.atm.da.nc
new file mode 100644
index 00000000..49c990aa
Binary files /dev/null and b/tests/testdata/cmip3/tas.sresb1.giss_model_e_r.run1.atm.da.nc differ
diff --git a/tests/testdata/cmip3/tasmax.sresa2.miub_echo_g.run1.atm.da.nc b/tests/testdata/cmip3/tasmax.sresa2.miub_echo_g.run1.atm.da.nc
new file mode 100644
index 00000000..9b1a983a
Binary files /dev/null and b/tests/testdata/cmip3/tasmax.sresa2.miub_echo_g.run1.atm.da.nc differ
diff --git a/tests/testdata/cmip3/tasmin.sresa2.miub_echo_g.run1.atm.da.nc b/tests/testdata/cmip3/tasmin.sresa2.miub_echo_g.run1.atm.da.nc
new file mode 100644
index 00000000..437bcf3a
Binary files /dev/null and b/tests/testdata/cmip3/tasmin.sresa2.miub_echo_g.run1.atm.da.nc differ
diff --git a/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200601-200612.nc b/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200601-200612.nc
new file mode 100644
index 00000000..cbf2c126
Binary files /dev/null and b/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200601-200612.nc differ
diff --git a/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200701-200712.nc b/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200701-200712.nc
new file mode 100644
index 00000000..441337cb
Binary files /dev/null and b/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r1i1p1_200701-200712.nc differ
diff --git a/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r2i1p1_200601-200612.nc b/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r2i1p1_200601-200612.nc
new file mode 100644
index 00000000..aae2f072
Binary files /dev/null and b/tests/testdata/cmip5/tasmax_Amon_MPI-ESM-MR_rcp45_r2i1p1_200601-200612.nc differ
diff --git a/tests/testdata/cordex/tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200602-200612.nc b/tests/testdata/cordex/tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200602-200612.nc
new file mode 100644
index 00000000..c1f937c4
Binary files /dev/null and b/tests/testdata/cordex/tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200602-200612.nc differ
diff --git a/tests/testdata/cordex/tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200701-200712.nc b/tests/testdata/cordex/tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200701-200712.nc
new file mode 100644
index 00000000..c30d7bb4
Binary files /dev/null and b/tests/testdata/cordex/tasmax_EUR-44_MPI-M-MPI-ESM-LR_rcp45_r1i1p1_MPI-CSC-REMO2009_v1_mon_200701-200712.nc differ