New functionalities:
ResHedNet model for advanced edge detection. This model is based on the holistically-nested edge detection paper. We improved on the original model by replacing the vanilla convolutional layers in each segment with ResNet-like blocks and by reducing the number of max-pooling operations to 2 (we found that 3 different scales are sufficient for learning the relevant features in typical microscopy images).
SegResNet model for general semantic segmentation as an alternative to the default UNet model. It has ResNet-like connections within each segment in addition to UNet-like skip connections between the encoding and decoding paths.
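The "ResNet-like block" idea shared by both models can be illustrated with a minimal sketch. This is not the AtomAI implementation (which uses convolutional layers); it is the generic residual pattern with dense layers to keep it short, and the names `ResidualBlock`, `w1`, `w2` are hypothetical:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ResidualBlock:
    """Minimal dense residual block: output = relu(f(x) + x).

    ResHedNet/SegResNet apply the same idea with convolutional layers;
    the key point is the skip connection that adds the block's input
    back to its output, so the block learns a residual correction.
    """
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w1 = rng.normal(scale=0.1, size=(dim, dim))
        self.w2 = rng.normal(scale=0.1, size=(dim, dim))

    def __call__(self, x):
        out = relu(x @ self.w1)
        out = out @ self.w2
        # residual (skip) connection: add the block input back
        return relu(out + x)

x = np.ones((4, 8))      # batch of 4 feature vectors
block = ResidualBlock(8)
y = block(x)
print(y.shape)           # -> (4, 8): residual blocks preserve shape
```

Because the input is added back, the block can fall back to (near-)identity behavior, which is what makes stacking many such blocks trainable.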
Bug fixes:
Fix a bug that prevented custom models from being saved/loaded
Fix a bug where a zoom-in operation was applied during data augmentation even when the option was set to False
Fix a bug in output_shape in BasePredictor, which required the output shape to be identical to the input shape
Improvements:
Add option to pass a custom loss function to trainers for semantic segmentation and im2spec/spec2im
Add option to keep all training data on the CPU when its size exceeds a specified limit (default limit is 4 GB). In this case, only the individual batches are moved to a GPU device during the training/test steps.
Make computation of coordinates optional for SegPredictor
Automatically save VAE models after each training cycle ("epoch") and not just at the end of training
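Three of the improvements above (custom loss functions, CPU-resident data with per-batch device transfer, and per-epoch checkpointing) follow a common training-loop pattern. A minimal framework-free sketch of that pattern, where `to_device`, `train`, and the constant-scaling "model" are hypothetical stand-ins and not AtomAI's actual API:

```python
import numpy as np

def to_device(batch):
    """Placeholder for moving a batch to the GPU (e.g. tensor.to('cuda')).
    Here it simply returns the batch unchanged."""
    return batch

def mse_loss(pred, target):
    """Default loss; any callable with the same signature can be passed."""
    return float(np.mean((pred - target) ** 2))

def train(X, y, loss_fn=mse_loss, batch_size=32, epochs=2):
    """Keep the full dataset on the CPU; transfer one batch at a time."""
    checkpoints = []
    n = len(X)
    for epoch in range(epochs):
        losses = []
        for i in range(0, n, batch_size):
            # only this slice is moved to the device, not the whole dataset
            xb = to_device(X[i:i + batch_size])
            yb = to_device(y[i:i + batch_size])
            pred = xb * 0.5              # stand-in for a model forward pass
            losses.append(loss_fn(pred, yb))
        # save a checkpoint after every epoch, not just at the end
        checkpoints.append({"epoch": epoch, "loss": float(np.mean(losses))})
    return checkpoints

X = np.ones((100, 4))
y = np.ones((100, 4))
ckpts = train(X, y, loss_fn=mse_loss, batch_size=32, epochs=3)
print(len(ckpts))   # -> 3: one checkpoint per epoch
```

Keeping the dataset in host memory bounds GPU usage to one batch at a time, at the cost of a per-step host-to-device copy.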
New examples:
New notebook on constructing and using (training+predicting) a custom image denoiser with AtomAI
New notebook on applications of rotationally invariant VAE (rVAE) and class-conditioned rVAE to arbitrarily rotated handwritten digits