Commit
Merge branch 'develop' into feature/tc_model_kwargs
Thomas Vogt committed Feb 20, 2024
2 parents 027d3ee + db36b43 commit ffc3540
Showing 24 changed files with 3,605 additions and 1,063 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -26,7 +26,7 @@ jobs:
steps:
-
name: Checkout Repo
uses: actions/checkout@v3
uses: actions/checkout@v4
-
# Store the current date to use it as cache key for the environment
name: Get current date
@@ -64,7 +64,7 @@ jobs:
-
name: Upload Coverage Reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: coverage-report-unittests-py${{ matrix.python-version }}
path: coverage/
1 change: 1 addition & 0 deletions AUTHORS.md
@@ -31,3 +31,4 @@
* Leonie Villiger
* Kam Lam Yeung
* Sarah Hülsen
* Timo Schmid
52 changes: 48 additions & 4 deletions CHANGELOG.md
@@ -12,8 +12,53 @@ Code freeze date: YYYY-MM-DD

### Added

### Changed

### Fixed

- Fix `util.coordinates.latlon_bounds` for cases where the specified buffer is very large so that the bounds cover more than the full longitudinal range `[-180, 180]` [#839](https://github.com/CLIMADA-project/climada_python/pull/839)
- Fix `climada.hazard.trop_cyclone` for TC tracks crossing the antimeridian [#839](https://github.com/CLIMADA-project/climada_python/pull/839)

### Deprecated

### Removed

## 4.1.0

Release date: 2024-02-14

### Dependency Changes

Added:

- `pyproj` >=3.5
- `numexpr` >=2.9

Updated:

- `contextily` >=1.3 → >=1.5
- `dask` >=2023 → >=2024
- `numba` >=0.57 → >=0.59
- `pandas` >=2.1 → >=2.1,<2.2
- `pint` >=0.22 → >=0.23
- `scikit-learn` >=1.3 → >=1.4
- `scipy` >=1.11 → >=1.12
- `sparse` >=0.14 → >=0.15
- `xarray` >=2023.8 → >=2024.1
- `overpy` =0.6 → =0.7
- `peewee` =3.16.3 → =3.17.1

Removed:

- `proj` (in favor of `pyproj`)

### Added

- Convenience method `api_client.Client.get_dataset_file`, combining `get_dataset_info` and `download_dataset`, returning a single file object. [#821](https://github.com/CLIMADA-project/climada_python/pull/821)
- Read and write methods to and from CSV files for the `DiscRates` class. [#818](https://github.com/CLIMADA-project/climada_python/pull/818)
- Add `CalcDeltaClimate` to the unsequa module to allow uncertainty and sensitivity analysis of impact change calculations. [#844](https://github.com/CLIMADA-project/climada_python/pull/844)
- Add function `safe_divide` in util which handles division by zero and NaN values in the numerator or denominator. [#844](https://github.com/CLIMADA-project/climada_python/pull/844)
- Add `reset_frequency` option for the `Impact.select()` function. [#847](https://github.com/CLIMADA-project/climada_python/pull/847)
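The `safe_divide` utility mentioned above can be illustrated with a minimal sketch. This is a hypothetical re-implementation for illustration only, not CLIMADA's actual code; the real function lives in `climada.util` and its exact signature may differ:

```python
import numpy as np

def safe_divide(numerator, denominator, replace_with=np.nan):
    """Elementwise division that replaces division by zero and
    NaN operands with `replace_with` instead of propagating warnings."""
    numerator = np.asarray(numerator, dtype=float)
    denominator = np.asarray(denominator, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        result = numerator / denominator
    # Catches x/0 (inf), 0/0 (nan), and NaN in either operand
    invalid = ~np.isfinite(result)
    return np.where(invalid, replace_with, result)
```

For example, `safe_divide([1.0, 2.0], [0.0, 4.0])` yields `[nan, 0.5]` rather than raising a runtime warning.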

### Changed

@@ -24,6 +69,7 @@ Code freeze date: YYYY-MM-DD
- Recommend using Mamba instead of Conda for installing CLIMADA [#809](https://github.com/CLIMADA-project/climada_python/pull/809)
- `Hazard.from_xarray_raster` now allows arbitrary values as 'event' coordinates [#837](https://github.com/CLIMADA-project/climada_python/pull/837)
- `climada.test.get_test_file` now compares the version of the requested test dataset with the version of climada itself and selects the most appropriate dataset. This way, a test file can be updated without changing the unit test code. [#822](https://github.com/CLIMADA-project/climada_python/pull/822)
- Explicitly require `pyproj` instead of `proj` (the latter is now implicitly required) [#845](https://github.com/CLIMADA-project/climada_python/pull/845)

### Fixed

@@ -32,10 +78,8 @@ Code freeze date: YYYY-MM-DD
- `climada.util.yearsets.sample_from_poisson`: fix a bug ([#819](https://github.com/CLIMADA-project/climada_python/issues/819)) and an inconsistency that occurs when the number of events per year (`lam`) is set to 1 [#823](https://github.com/CLIMADA-project/climada_python/pull/823)
- In the `TropCyclone` class, a double-counting of the translational velocity in the Holland 2008 and 2010 model implementations is removed [#833](https://github.com/CLIMADA-project/climada_python/pull/833)
- `climada.util.test.test_finance` and `climada.test.test_engine` updated to recent input data from the World Bank [#841](https://github.com/CLIMADA-project/climada_python/pull/841)

### Deprecated

### Removed
- Set `nodefaults` in Conda environment specs because `defaults` are not compatible with conda-forge [#845](https://github.com/CLIMADA-project/climada_python/pull/845)
- Avoid redundant calls to `np.unique` in `Impact.impact_at_reg` [#848](https://github.com/CLIMADA-project/climada_python/pull/848)

## 4.0.1

Expand Down
2 changes: 1 addition & 1 deletion climada/_version.py
@@ -1 +1 @@
__version__ = '4.0.2-dev'
__version__ = '4.1.1-dev'
31 changes: 26 additions & 5 deletions climada/engine/impact.py
@@ -451,12 +451,12 @@ def impact_at_reg(self, agg_regions=None):
at_reg_event = np.hstack(
[
self.imp_mat[:, np.where(agg_regions == reg)[0]].sum(1)
for reg in np.unique(agg_reg_unique)
for reg in agg_reg_unique
]
)

at_reg_event = pd.DataFrame(
at_reg_event, columns=np.unique(agg_reg_unique), index=self.event_id
at_reg_event, columns=agg_reg_unique, index=self.event_id
)

return at_reg_event
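The redundancy removed here is that `agg_reg_unique` already holds unique region labels, so wrapping it in `np.unique` a second time did no work. A standalone sketch of the aggregation pattern with toy data (not CLIMADA's actual matrices):

```python
import numpy as np
from scipy import sparse

# Toy impact matrix: 2 events x 4 exposure points
imp_mat = sparse.csr_matrix([[1., 2., 3., 4.],
                             [5., 6., 7., 8.]])
agg_regions = np.array(["A", "B", "A", "B"])
agg_reg_unique = np.unique(agg_regions)  # already unique: computed once

# Sum impact columns per region; no second np.unique inside the loop
at_reg_event = np.hstack([
    imp_mat[:, np.where(agg_regions == reg)[0]].sum(1)
    for reg in agg_reg_unique
])
```

With this input, region "A" sums columns 0 and 2 and region "B" columns 1 and 3, giving a 2×2 per-event, per-region impact table.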
@@ -1475,9 +1475,14 @@ def _cen_return_imp(imp, freq, imp_th, return_periods):

return imp_fit

def select(self,
event_ids=None, event_names=None, dates=None,
coord_exp=None):
def select(
self,
event_ids=None,
event_names=None,
dates=None,
coord_exp=None,
reset_frequency=False
):
"""
Select a subset of events and/or exposure points from the impact.
If multiple input variables are not None, it returns all the impacts
@@ -1509,6 +1514,9 @@ def select(self,
coord_exp : np.array, optional
Selection of exposures coordinates [lat, lon] (in degrees)
The default is None.
reset_frequency : bool, optional
Change frequency of events proportional to difference between first and last
year (old and new). Assumes annual frequency values. Default: False.
Raises
------
@@ -1580,6 +1588,19 @@ def select(self,
LOGGER.info("The total value cannot be re-computed for a "
"subset of exposures and is set to None.")

# reset frequency if date span has changed (optional):
if reset_frequency:
if self.frequency_unit not in ['1/year', 'annual', '1/y', '1/a']:
LOGGER.warning("Resetting the frequency is based on the calendar year of given"
" dates but the frequency unit here is %s. Consider setting the frequency"
" manually for the selection or changing the frequency unit to %s.",
self.frequency_unit, DEF_FREQ_UNIT)
year_span_old = np.abs(dt.datetime.fromordinal(self.date.max()).year -
dt.datetime.fromordinal(self.date.min()).year) + 1
year_span_new = np.abs(dt.datetime.fromordinal(imp.date.max()).year -
dt.datetime.fromordinal(imp.date.min()).year) + 1
imp.frequency = imp.frequency * year_span_old / year_span_new

# cast frequency vector into 2d array for sparse matrix multiplication
freq_mat = imp.frequency.reshape(len(imp.frequency), 1)
# .A1 reduce 1d matrix to 1d array
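The frequency rescaling introduced by `reset_frequency` can be sketched in isolation. This is a simplified reconstruction of the logic above, assuming annual frequency units and ordinal dates as in the diff:

```python
import datetime as dt
import numpy as np

def rescale_annual_frequency(frequency, dates_old, dates_new):
    """Rescale per-event frequencies after selecting a date subset,
    proportionally to the change in spanned calendar years."""
    def year_span(ordinals):
        years = [dt.datetime.fromordinal(int(d)).year for d in ordinals]
        return abs(max(years) - min(years)) + 1
    return np.asarray(frequency) * year_span(dates_old) / year_span(dates_new)

# Six annual events (2010-2015); select the first four (2010-2013):
dates = np.array([dt.date(y, 1, 1).toordinal() for y in range(2010, 2016)])
freq_new = rescale_annual_frequency(np.full(4, 1 / 6), dates, dates[:4])
# the spanned period shrinks from 6 to 4 years, so each frequency grows by 6/4
```

This mirrors why the warning about non-annual `frequency_unit` values is emitted: the span is counted in calendar years, so the proportionality only holds for per-year frequencies.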
35 changes: 35 additions & 0 deletions climada/engine/test/test_impact.py
@@ -27,6 +27,7 @@
import h5py
from pyproj import CRS
from rasterio.crs import CRS as rCRS
import datetime as dt

from climada.entity.entity_def import Entity
from climada.hazard.base import Hazard
@@ -67,6 +68,24 @@ def dummy_impact():
haz_type="TC",
)

def dummy_impact_yearly():
"""Return an impact containing events in multiple years"""
imp = dummy_impact()

years = np.arange(2010,2010+len(imp.date))

# Edit the date and frequency
imp.date = np.array([dt.date(year,1,1).toordinal() for year in years])
imp.frequency_unit = "1/year"
imp.frequency = np.ones(len(years))/len(years)

# Calculate the correct expected annual impact
freq_mat = imp.frequency.reshape(len(imp.frequency), 1)
imp.eai_exp = imp.imp_mat.multiply(freq_mat).sum(axis=0).A1
imp.aai_agg = imp.eai_exp.sum()

return imp


class TestImpact(unittest.TestCase):
""""Test initialization and more"""
@@ -868,6 +887,22 @@ def test_select_imp_map_fail(self):
with self.assertRaises(ValueError):
imp.select(event_ids=[0], event_names=[1, 'two'], dates=(0, 2))

def test_select_reset_frequency(self):
"""Test that reset_frequency option works correctly"""

imp = dummy_impact_yearly() # 6 events, 1 per year

# select first 4 events
n_yr = 4
sel_imp = imp.select(dates=(imp.date[0],imp.date[n_yr-1]), reset_frequency=True)

# check frequency-related attributes
np.testing.assert_array_equal(sel_imp.frequency, [1/n_yr]*n_yr)
self.assertEqual(sel_imp.aai_agg,imp.at_event[0:n_yr].sum()/n_yr)
np.testing.assert_array_equal(sel_imp.eai_exp,
imp.imp_mat[0:n_yr,:].todense().sum(axis=0).A1/n_yr)


class TestConvertExp(unittest.TestCase):
def test__build_exp(self):
"""Test that an impact set can be converted to an exposure"""
1 change: 1 addition & 0 deletions climada/engine/unsequa/__init__.py
@@ -22,3 +22,4 @@
from .calc_base import *
from .calc_impact import *
from .calc_cost_benefit import *
from .calc_delta_climate import *
20 changes: 11 additions & 9 deletions climada/engine/unsequa/calc_base.py
@@ -85,7 +85,6 @@ def __init__(self):
"""
Empty constructor to be overwritten by subclasses
"""
pass

def check_distr(self):
"""
@@ -118,7 +117,6 @@ def check_distr(self):
distr_dict[input_param_name] = input_param_func
return True


@property
def input_vars(self):
"""
@@ -179,7 +177,6 @@ def est_comp_time(self, n_samples, time_one_run, processes=None):
"been assigned to exp before defining input_var, ..."
"\n If computation cannot be reduced, consider using"
" a surrogate model https://www.uqlab.com/", time_one_run)

total_time = n_samples * time_one_run / processes
LOGGER.info("\n\nEstimated computaion time: %s\n",
dt.timedelta(seconds=total_time))
@@ -323,21 +320,23 @@ def sensitivity(self, unc_output, sensitivity_method = 'sobol',
Parameters
----------
unc_output : climada.engine.uncertainty.unc_output.UncOutput
unc_output : climada.engine.unsequa.UncOutput
Uncertainty data object in which to store the sensitivity indices
sensitivity_method : str, optional
Sensitivity analysis method from SALib.analyse. Possible choices: 'fast', 'rbd_fact',
'morris', 'sobol', 'delta', 'ff'. Note that in Salib, sampling methods and sensitivity
analysis methods should be used in specific pairs:
sensitivity analysis method from SALib.analyse
Possible choices:
'fast', 'rbd_fact', 'morris', 'sobol', 'delta', 'ff'
The default is 'sobol'.
Note that in Salib, sampling methods and sensitivity analysis
methods should be used in specific pairs.
https://salib.readthedocs.io/en/latest/api.html
Default: 'sobol'
sensitivity_kwargs: dict, optional
Keyword arguments of the chosen SALib analyse method.
The default is to use SALib's default arguments.
Returns
-------
sens_output : climada.engine.uncertainty.unc_output.UncOutput()
sens_output : climada.engine.unsequa.UncOutput
Uncertainty data object with all the sensitivity indices,
and all the uncertainty data copied over from unc_output.
@@ -379,6 +378,7 @@ def sensitivity(self, unc_output, sensitivity_method = 'sobol',

return sens_output


def _multiprocess_chunksize(samples_df, processes):
"""Divides the samples into chunks for multiprocesses computing
@@ -405,6 +405,7 @@ def _multiprocess_chunksize(samples_df, processes):
samples_df.shape[0] / processes
).astype(int)


def _transpose_chunked_data(metrics):
"""Transposes the output metrics lists from one list per
chunk of samples to one list per output metric
@@ -434,6 +435,7 @@ def _transpose_chunked_data(metrics):
for x in zip(*metrics)
]


def _sample_parallel_iterator(samples, chunksize, **kwargs):
"""
Make iterator over chunks of samples
4 changes: 3 additions & 1 deletion climada/engine/unsequa/calc_cost_benefit.py
@@ -68,7 +68,9 @@ class CalcCostBenefit(Calc):
('haz_input_var', 'ent_input_var', 'haz_fut_input_var', 'ent_fut_input_var')
_metric_names : tuple(str)
Names of the cost benefit output metrics
('tot_climate_risk', 'benefit', 'cost_ben_ratio', 'imp_meas_present', 'imp_meas_future')
('tot_climate_risk', 'benefit', 'cost_ben_ratio',
'imp_meas_present', 'imp_meas_future')
"""

_input_var_names = (