<img src="assets/logo.svg" width="200px" />
<h1> Just Relax It </h1>
Discrete variables relaxation
## 📬 Assets

## 💡 Motivation

For many mathematical problems we need the ability to sample discrete random variables. The problem is that, due to the continuous nature of deep learning optimization, using truly discrete random variables is infeasible. Thus we use different relaxation methods. One of them, the Concrete distribution, or Gumbel-softmax (this is one distribution proposed in parallel by two research groups), is implemented in different DL packages. In this project we implement different alternatives to it.
<img src="assets/overview.png"/>
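To make the idea concrete, here is a minimal sketch of sampling from the Gumbel-softmax (Concrete) relaxation using the stock PyTorch implementation; the names below come from `torch.distributions`, not from this library:

```python
import torch
from torch.distributions import RelaxedOneHotCategorical

probs = torch.tensor([0.2, 0.3, 0.5])

# Low temperature -> samples close to one-hot; high -> close to uniform.
dist = RelaxedOneHotCategorical(temperature=torch.tensor(0.5), probs=probs)

sample = dist.rsample()  # differentiable (reparametrized) sample
print(sample)            # lies on the probability simplex: entries sum to 1
```

Because `rsample()` is reparametrized, gradients flow through the sample back to `probs`, which is exactly what a truly discrete sample would not allow.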
## 🚀 Algorithms to implement (from simplest to hardest)
- [x] Straight-Through Bernoulli distribution (don't mix with the Relaxed distribution from pyro)
- [ ] Invertible Gaussian reparametrization with KL implemented
- [x] Hard concrete
- [ ] REINFORCE (not a distribution actually; think about how to integrate it with the other distributions)
- [ ] Logit-normal distribution with KL implemented and Laplace-form approximation of Dirichlet
## 🛠 Recommended stack

Some of the alternatives to GS were implemented in pyro, so it might be useful to play with them as well.
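Since pyro's relaxed distributions wrap the ones in `torch.distributions`, the torch versions are the easiest place to start experimenting. For example, lowering the temperature pushes `RelaxedBernoulli` samples toward hard {0, 1} values (a sketch, not tied to this project's code):

```python
import torch
from torch.distributions import RelaxedBernoulli

probs = torch.full((10_000,), 0.5)

for temp in (5.0, 0.1):
    dist = RelaxedBernoulli(temperature=torch.tensor(temp), probs=probs)
    samples = dist.rsample()
    # Mean distance from the nearest hard value (0 or 1):
    # large at high temperature, near zero at low temperature.
    softness = torch.minimum(samples, 1 - samples).mean()
    print(f"temperature={temp}: mean distance from hard values = {softness:.3f}")
```

This trade-off (low temperature: nearly discrete samples but high-variance gradients; high temperature: smooth but badly biased samples) is the central tuning knob shared by all the relaxations above.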
## 🧩 Problem details

To make the library consistent, we integrate imports of distributions from pyro and pytorch into the library, so that all the categorical distributions can be imported from one entrypoint.
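The single-entrypoint idea boils down to re-exporting everything from one module. A hypothetical layout (the module name and the set of re-exports are illustrative, not the actual package):

```python
# distributions.py -- hypothetical single entrypoint.
# Re-export relaxations from torch (and, in the real project, from pyro
# and the library's own implementations) under one namespace.
from torch.distributions import (
    RelaxedBernoulli,
    RelaxedOneHotCategorical,
)

__all__ = ["RelaxedBernoulli", "RelaxedOneHotCategorical"]
```

Users then write `from distributions import RelaxedBernoulli` and never need to know which backend a given distribution came from.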
## 👥 Contributors
- Daniil Dorin (Basic code writing, Final demo, Algorithms)
- Igor Ignashin (Project wrapping, Documentation writing, Algorithms)
- Nikita Kiselev (Project planning, Blog post, Algorithms)
- Andrey Veprikov (Tests writing, Documentation writing, Algorithms)
## 🔗 Useful links
- KL divergence between Dirichlet and Logistic-Normal implemented in R
- About score function (SF) and pathwise derivative (PD) estimators, VAE and REINFORCE