This repository has been archived by the owner on Dec 20, 2024. It is now read-only.

Commit

Merge branch 'develop' into feature/positional-embedding-hidden-grid
sahahner committed Dec 16, 2024
2 parents 444b704 + 0deb66b commit 713a13d
Showing 3 changed files with 24 additions and 10 deletions.
16 changes: 11 additions & 5 deletions CHANGELOG.md
```diff
@@ -8,13 +8,20 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 Please add your functional changes to the appropriate section in the PR.
 Keep it human-readable, your future self will thank you!

-## [Unreleased](https://github.com/ecmwf/anemoi-models/compare/0.3.0...HEAD)
-
-- Add synchronisation workflow
+## [Unreleased](https://github.com/ecmwf/anemoi-models/compare/0.4.0...HEAD)

 ### Added

 - New AnemoiModelEncProcDecHierarchical class available in models [#37](https://github.com/ecmwf/anemoi-models/pull/37)
+- Mask NaN values in training loss function [#56](https://github.com/ecmwf/anemoi-models/pull/56)
+- Added dynamic NaN masking for the imputer class with two new classes: DynamicInputImputer, DynamicConstantImputer [#89](https://github.com/ecmwf/anemoi-models/pull/89)
+- Reduced memory usage when using chunking in the mapper [#84](https://github.com/ecmwf/anemoi-models/pull/84)
+
+## [0.4.0](https://github.com/ecmwf/anemoi-models/compare/0.3.0...0.4.0) - Improvements to Model Design
+
+### Added
+
+- Add synchronisation workflow [#60](https://github.com/ecmwf/anemoi-models/pull/60)
 - Add anemoi-transform link to documentation
 - Codeowners file
 - Pygrep precommit hooks
@@ -23,10 +30,9 @@ Keep it human-readable, your future self will thank you!
 - Configurability of the dropout probability in the MultiHeadSelfAttention module
 - Variable Bounding as configurable model layers [#13](https://github.com/ecmwf/anemoi-models/issues/13)
 - GraphTransformerMapperBlock chunking to reduce memory usage during inference [#46](https://github.com/ecmwf/anemoi-models/pull/46)
-- Mask NaN values in training loss function [#271](https://github.com/ecmwf-lab/aifs-mono/issues/271)
-- Added dynamic NaN masking for the imputer class with two new classes: DynamicInputImputer, DynamicConstantImputer [#89](https://github.com/ecmwf/anemoi-models/pull/89)
 - New `NamedNodesAttributes` class to handle node attributes in a more flexible way [#64](https://github.com/ecmwf/anemoi-models/pull/64)
 - Contributors file [#69](https://github.com/ecmwf/anemoi-models/pull/69)
+- Added `supporting_arrays` argument, which contains arrays to store in checkpoints. [#97](https://github.com/ecmwf/anemoi-models/pull/97)

 ### Changed
 - Bugfixes for CI
```
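The "Mask NaN values in training loss function" entry above ([#56], previously tracked as aifs-mono [#271]) refers to ignoring missing target values when computing the loss. Below is a minimal sketch of the general technique, assuming a PyTorch MSE-style loss; the function name and signature are illustrative, not the repository's API:

```python
import torch

def nan_masked_mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Valid entries are those where the target is not NaN.
    mask = ~torch.isnan(target)
    # Replace NaNs with 0 so they cannot poison the subtraction, zero out
    # their contribution via the mask, then average over valid entries only.
    sq_err = (pred - torch.nan_to_num(target)) ** 2
    return (sq_err * mask).sum() / mask.sum().clamp(min=1)

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.5, float("nan"), 2.0])
loss = nan_masked_mse(pred, target)  # averages over the two valid entries
```

Normalising by the count of valid entries, rather than the full tensor size, keeps the loss scale comparable across samples with different amounts of missing data.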
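Similarly, "Variable Bounding as configurable model layers" ([#13]) constrains selected output variables to physically valid ranges. A hypothetical sketch of such a layer, assuming bounds are applied along the last (variable) dimension; the class name and indices are invented for illustration:

```python
import torch

class ReluBoundingSketch(torch.nn.Module):
    """Clamp the given output variables to be non-negative (e.g. precipitation)."""

    def __init__(self, variable_indices: list[int]) -> None:
        super().__init__()
        self.variable_indices = variable_indices

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.clone()  # avoid mutating the caller's tensor in place
        x[..., self.variable_indices] = torch.relu(x[..., self.variable_indices])
        return x

bound = ReluBoundingSketch(variable_indices=[0])
out = bound(torch.tensor([[-0.3, 1.2], [0.5, -2.0]]))  # column 0 clamped to >= 0
```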
12 changes: 11 additions & 1 deletion src/anemoi/models/interface/__init__.py
```diff
@@ -37,6 +37,8 @@ class AnemoiModelInterface(torch.nn.Module):
         Statistics for the data.
     metadata : dict
         Metadata for the model.
+    supporting_arrays : dict
+        Numpy arrays to store in the checkpoint.
     data_indices : dict
         Indices for the data.
     pre_processors : Processors
@@ -48,7 +50,14 @@ class AnemoiModelInterface(torch.nn.Module):
     """

     def __init__(
-        self, *, config: DotDict, graph_data: HeteroData, statistics: dict, data_indices: dict, metadata: dict
+        self,
+        *,
+        config: DotDict,
+        graph_data: HeteroData,
+        statistics: dict,
+        data_indices: dict,
+        metadata: dict,
+        supporting_arrays: dict = None,
     ) -> None:
         super().__init__()
         self.config = config
@@ -57,6 +66,7 @@ def __init__(
         self.graph_data = graph_data
         self.statistics = statistics
         self.metadata = metadata
+        self.supporting_arrays = supporting_arrays if supporting_arrays is not None else {}
         self.data_indices = data_indices
         self._build_model()

```
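A usage sketch for the new `supporting_arrays` argument: arrays passed here are kept on the interface so they can be stored alongside the checkpoint. The key names and the surrounding objects (`config`, `graph_data`, `statistics`, `data_indices`, `metadata`) are placeholders assumed to be in scope:

```python
import numpy as np

# Illustrative arrays; the key names are not prescribed by the interface.
supporting_arrays = {
    "latitudes": np.linspace(-90.0, 90.0, 181),
    "longitudes": np.linspace(0.0, 359.0, 360),
}

model = AnemoiModelInterface(
    config=config,
    graph_data=graph_data,
    statistics=statistics,
    data_indices=data_indices,
    metadata=metadata,
    supporting_arrays=supporting_arrays,
)
assert model.supporting_arrays["latitudes"].shape == (181,)
```

Defaulting to `None` and substituting `{}` inside `__init__`, rather than writing `supporting_arrays: dict = {}` in the signature, avoids the classic mutable-default pitfall where a single dict would be shared across all instances.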
6 changes: 2 additions & 4 deletions src/anemoi/models/layers/block.py
```diff
@@ -524,18 +524,16 @@ def forward(
             edge_attr_list, edge_index_list = sort_edges_1hop_chunks(
                 num_nodes=size, edge_attr=edges, edge_index=edge_index, num_chunks=num_chunks
             )
+            out = torch.zeros((x[1].shape[0], self.num_heads, self.out_channels_conv), device=x[1].device)
             for i in range(num_chunks):
-                out1 = self.conv(
+                out += self.conv(
                     query=query,
                     key=key,
                     value=value,
                     edge_attr=edge_attr_list[i],
                     edge_index=edge_index_list[i],
                     size=size,
                 )
-                if i == 0:
-                    out = torch.zeros_like(out1, device=out1.device)
-                out = out + out1
         else:
             out = self.conv(query=query, key=key, value=value, edge_attr=edges, edge_index=edge_index, size=size)
```

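The change above implements the "Reduced memory usage when using chunking in the mapper" changelog entry: instead of materialising each chunk's result in `out1` and adding it to a running copy, the output is allocated once and each chunk is accumulated in place. A standalone sketch of the pattern (the chunk function is a stand-in, not the mapper's convolution):

```python
import torch

def process_chunk(chunk: torch.Tensor) -> torch.Tensor:
    # Stand-in for self.conv(...) applied to one chunk of edges.
    return 2.0 * chunk

def chunked_accumulate(chunks: list[torch.Tensor], out_shape: tuple[int, ...]) -> torch.Tensor:
    out = torch.zeros(out_shape)  # allocated once, before the loop
    for chunk in chunks:
        out += process_chunk(chunk)  # in-place add: no second full-size buffer kept alive
    return out

chunks = [torch.ones(4, 8) for _ in range(3)]
assert torch.allclose(chunked_accumulate(chunks, (4, 8)), torch.full((4, 8), 6.0))
```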
