Improve U-Net and Transformer implementations #8
Labels: enhancement, help wanted, question
As the documentation states, the current implementations are neither the most effective nor the most efficient. The U-Net implementation was adapted from The Annotated Diffusion Model, and the Transformer implementation was adapted from Peebles & Xie (2022) (the adaptive layer norm block). Although these produce good enough results, ideally the library would provide the best available implementations for general use.
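For context, the adaptive layer norm idea from Peebles & Xie is to replace the learned affine of a standard LayerNorm with a per-sample shift and scale regressed from the conditioning signal (e.g. a timestep embedding). Below is a minimal NumPy sketch of just that modulation step; the function names, shapes, and zero-initialized weights are illustrative assumptions, not the library's actual API:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the feature axis with no learned affine:
    # in adaLN, the shift and scale come from the conditioning instead.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def adaln_modulate(x, cond, w, b):
    # x: (batch, tokens, dim) activations; cond: (batch, cond_dim).
    # One linear layer maps cond to a per-channel shift and scale.
    shift_scale = cond @ w + b                      # (batch, 2 * dim)
    shift, scale = np.split(shift_scale, 2, axis=-1)
    # Broadcast over the token axis and modulate the normalized tokens.
    return layer_norm(x) * (1 + scale[:, None, :]) + shift[:, None, :]

# Toy shapes: 2 samples, 4 tokens, 8 channels, 16-dim conditioning.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 8))
cond = rng.normal(size=(2, 16))
w = np.zeros((16, 16))   # zero init => shift = scale = 0 at start
b = np.zeros(16)
out = adaln_modulate(x, cond, w, b)
print(out.shape)  # (2, 4, 8)
```

With the modulation weights zero-initialized, the block reduces to a plain LayerNorm at the start of training, which is the same motivation behind the adaLN-Zero initialization described in the paper.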
From what I've read, a good choice for the U-Net implementation would be the one used in Imagen's text-to-image model, but there may well be more recent architectures that would be a better fit. For the Transformer I'm really not sure right now. Any input on this would be greatly appreciated.