aniketjivani/variational-autoencoders


Variational Autoencoders

Reconstruction of MNIST digits during VAE training using a convolution/deconvolution neural network, 50 latent dimensions, 2 epochs shown

About

A library using Julia's Flux library to implement variational autoencoders

  • main.jl - runs the model on the MNIST dataset; this will be dropped later
  • Model.jl - the basic model; for now it is just a plain VAE
  • Dataset.jl - the dataset interface
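To make the structure of Model.jl concrete, here is a minimal sketch of a plain VAE in Flux. This is illustrative only: the repository's actual model uses conv/deconv layers, and all names here (`VAE`, `μ_layer`, `logσ_layer`) are hypothetical, not taken from the repo.

```julia
using Flux

# Minimal dense VAE sketch (illustrative; the repo's Model.jl uses
# convolution/deconvolution layers and may differ in structure).
struct VAE
    encoder      # input -> hidden features
    μ_layer      # hidden -> mean of q(z|x)
    logσ_layer   # hidden -> log-std of q(z|x)
    decoder      # latent z -> reconstruction logits
end
Flux.@functor VAE

function VAE(input_dim::Int, hidden_dim::Int, latent_dim::Int)
    VAE(Dense(input_dim => hidden_dim, relu),
        Dense(hidden_dim => latent_dim),
        Dense(hidden_dim => latent_dim),
        Chain(Dense(latent_dim => hidden_dim, relu),
              Dense(hidden_dim => input_dim)))
end

# Forward pass with the reparameterization trick: z = μ + σ ⊙ ε, ε ~ N(0, I),
# which keeps the sampling step differentiable with respect to μ and logσ.
function (m::VAE)(x)
    h = m.encoder(x)
    μ, logσ = m.μ_layer(h), m.logσ_layer(h)
    z = μ .+ exp.(logσ) .* randn(Float32, size(μ))
    return m.decoder(z), μ, logσ
end
```

For MNIST, `input_dim` would be 784 (flattened 28×28 images) and `latent_dim` something small like 10–50, matching the figures below.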

Open Questions

  • Can the KL divergence and reconstruction error be better balanced?
  • Can a VAE be used as a pure clustering method? How else would the latent-space representation be useful?
  • Is it possible (in Julia) to reconstruct the reverse transformation (decoder) for a given encoder?
  • Can a VAE be used for columnar data with missing inputs?
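On the first question, one common approach is a β-VAE-style loss that weights the KL term. The sketch below assumes a model returning reconstruction logits plus `μ` and `logσ` (as in a standard Gaussian-posterior VAE); the function names are hypothetical, not the repo's.

```julia
using Flux: logitbinarycrossentropy

# KL(q(z|x) ‖ N(0, I)) for a diagonal Gaussian with mean μ and log-std logσ.
kl_gaussian(μ, logσ) =
    -0.5f0 * sum(1f0 .+ 2f0 .* logσ .- μ .^ 2 .- exp.(2f0 .* logσ))

# β-weighted negative ELBO: β > 1 emphasizes the KL term (more regular
# latent space), β < 1 emphasizes reconstruction fidelity.
function vae_loss(x̂_logits, x, μ, logσ; β = 1f0)
    recon = logitbinarycrossentropy(x̂_logits, x; agg = sum)
    return (recon + β * kl_gaussian(μ, logσ)) / size(x, 2)  # mean over batch
end
```

Annealing β from 0 up to its final value over the first epochs is another common trick to keep the KL term from collapsing the latent code early in training.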

References

  • Tutorial on VAE
  • Tensorflow VAE
  • Flux.jl
  • Flux VAE
  • Auto-Encoding Variational Bayes
  • Stochastic Backpropagation and Approximate Inference in Deep Generative Models

Figures

Conv/deconv VAE during 4 epochs of MNIST training, 10 latent dimensions. MNIST digits chosen at random from the test set.
