- Fix bug in `gatr.baselines.transformer` where the `normalized_shape` parameter incorrectly normalized over all dimensions except the first (see the sketch after this list). Thanks to @spinjo for identifying and providing the fix
- Replace legacy call to deprecated `torch.LongTensor` with `torch.tensor`. Thanks to @Ruibin-Liu for providing a fix
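
Both fixes are easiest to see in plain PyTorch. The snippet below is an illustration, not GATr's actual code: `F.layer_norm` normalizes over the trailing `normalized_shape` dimensions, so passing everything except the batch dimension mixes statistics across tokens, and `torch.tensor` is the modern replacement for the legacy `torch.LongTensor` constructor.

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 10, 16)  # (batch, tokens, channels)

# Buggy pattern: normalized_shape = x.shape[1:] normalizes over all
# dimensions except the first, mixing statistics across tokens.
wrong = F.layer_norm(x, x.shape[1:])

# Intended pattern: normalize over the channel dimension only.
right = F.layer_norm(x, (x.shape[-1],))

# Legacy constructor vs. the recommended replacement.
idx_old = torch.LongTensor([0, 2, 5])
idx_new = torch.tensor([0, 2, 5], dtype=torch.long)
assert torch.equal(idx_old, idx_new)
```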
- Equivariance-breaking functions have been replaced with equivariant counterparts, breaking backwards compatibility with `ArteryGATrWrapper` models trained with old GATr versions
- New argument `checkpoint` in `GATr` constructor for more fine-grained control over checkpointing behaviour
- Add embeddings for rays in Plücker coordinates (see the first sketch after this list)
- Add `nominal_flops_per_token` property to linear layers that counts FLOPs (see the second sketch after this list)
- Equivariance-breaking functions now raise warnings
- Argument `checkpoint_blocks` in `GATr` constructor is deprecated in favour of `checkpoint`
- Fix bug in `compile_equi_linear()` that made autodiff through compiled linear layers incorrect
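
For background on the ray embedding: a ray through a point p with direction d can be written in Plücker coordinates as the pair (d, p × d), and the moment p × d does not depend on which point of the ray is used. The helper below is a hypothetical illustration of that representation, not GATr's actual embedding (which maps rays into multivector components):

```python
import torch

def embed_ray_plucker(point: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper (not GATr's API): ray -> 6D Plücker coordinates.

    A ray through `point` with unit direction d is encoded as (d, m) with
    moment m = point x d; m is unchanged if `point` slides along the ray,
    since (p + t d) x d = p x d.
    """
    direction = direction / direction.norm(dim=-1, keepdim=True)  # unit direction
    moment = torch.cross(point, direction, dim=-1)                # m = p x d
    return torch.cat([direction, moment], dim=-1)

ray_a = embed_ray_plucker(torch.tensor([1.0, 0.0, 0.0]), torch.tensor([0.0, 2.0, 0.0]))
ray_b = embed_ray_plucker(torch.tensor([1.0, 1.0, 0.0]), torch.tensor([0.0, 1.0, 0.0]))
assert torch.allclose(ray_a, ray_b)  # same ray, different base points
```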
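The FLOP count itself follows the usual convention of one multiply and one add per weight entry. The subclass below sketches such a property on a plain `torch.nn.Linear`; GATr attaches it to its own equivariant linear layers, whose exact implementation may differ:

```python
import torch.nn as nn

class LinearWithFlops(nn.Linear):
    """Illustrative sketch of a FLOP-counting property on a linear layer.

    The nominal cost of y = x @ W.T + b is one multiply and one add per
    weight entry, i.e. 2 * in_features * out_features FLOPs per token.
    """

    @property
    def nominal_flops_per_token(self) -> int:
        return 2 * self.in_features * self.out_features

layer = LinearWithFlops(16, 32)
print(layer.nominal_flops_per_token)  # 1024
```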
- Experimental support for `torch.compile` (see the sketch below)
Minor cleanup.
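
Usage follows the standard `torch.compile` entry point from PyTorch 2; the sketch below uses a plain module rather than a GATr model:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.GELU(), torch.nn.Linear(16, 8)
)
compiled_model = torch.compile(model)  # requires PyTorch >= 2.0

out = compiled_model(torch.randn(4, 8))  # first call triggers compilation
```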
- Autocast normalization layers and attention features to fp32 in mixed-precision training (see the sketch after this list)
- Add `CrossAttention` layer
- Fix bug in attention layers that led to crashes when using an xformers backend without multi-query attention
- Expose `join_reference` kwarg in `GATr.forward()` and `AxialGATr.forward()`
- Add utility function `gatr.utils.compile_linear.compile_equi_linear_submodules()`
- Remove option to not provide a position in `embed_oriented_plane()` and `embed_reflection()`
Minor cleanup.
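
The autocast item above follows a common mixed-precision pattern: normalization statistics are numerically sensitive in fp16/bf16, so they are computed in fp32 and the result is cast back. A generic sketch of the idea (not GATr's actual layer):

```python
import torch

class FP32LayerNorm(torch.nn.LayerNorm):
    """Illustrative only: run LayerNorm in fp32 regardless of input dtype.

    Normalization statistics are numerically sensitive in fp16/bf16, so a
    common mixed-precision pattern is to upcast to fp32 and cast back.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = super().forward(x.float())  # compute mean/variance in fp32
        return out.to(x.dtype)            # cast back to the input dtype
```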
- Change dimension collapse behaviour in `AxialGATr` and `BaselineAxialTransformer`
- Add hooks functionality and `register_hook()` method to `BaseExperiment` (see the sketch after this list)
- Fix critical bug in `embed_3d_object_two_vec()` that led to wrong results when any tensor dimension was 3
- Fix various minor issues in `BaseExperiment`
- Improve logging in `BaseExperiment`
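
As an illustration of what such a hook mechanism typically looks like (names and signatures here are assumptions; consult `BaseExperiment` for the real interface):

```python
from typing import Any, Callable, Dict, List

class ExperimentWithHooks:
    """Illustrative stand-in for BaseExperiment's hook mechanism."""

    def __init__(self) -> None:
        self._hooks: Dict[str, List[Callable[..., Any]]] = {}

    def register_hook(self, event: str, fn: Callable[..., Any]) -> None:
        """Register a callback to run whenever `event` is reached."""
        self._hooks.setdefault(event, []).append(fn)

    def _run_hooks(self, event: str, **kwargs: Any) -> None:
        """Invoke all callbacks registered for `event`, in order."""
        for fn in self._hooks.get(event, []):
            fn(**kwargs)

exp = ExperimentWithHooks()
exp.register_hook("after_step", lambda step, loss: print(f"step {step}: loss {loss:.3f}"))
exp._run_hooks("after_step", step=1, loss=0.25)
```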
First release.