This repository contains code to reproduce and expand on the results of Shwartz-Ziv and Tishby and of Saxe et al. It is used to investigate what role compression plays in deep neural network learning, and it provides:
- plotting of learning dynamics in the information plane
- plotting of activation histograms and single-neuron activations
- different datasets and mutual information estimators (a binning-based sketch is shown below)
- logging of experiments with Sacred (a minimal example is shown below)
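
For reference, the binning estimator popularized by this line of work discretizes a layer's activations and computes discrete mutual information from joint counts. The following is a minimal sketch, assuming each input sample (or its label, for I(T;Y)) is already available as an integer symbol; names such as `bin_activations` and the bin count of 30 are illustrative, not this repository's API.

```python
# Sketch of a binning-based mutual information estimate; names and defaults
# are illustrative, not this repository's API.
import numpy as np

def bin_activations(T, n_bins=30):
    """Discretize activations T (samples x units) into equal-width bins."""
    edges = np.linspace(T.min(), T.max(), n_bins + 1)
    return np.digitize(T, edges[1:-1])

def mutual_information(x_symbols, T_binned):
    """Estimate I(X; T) in bits from integer input symbols and binned activations."""
    # treat each binned activation vector as one discrete symbol
    _, t_symbols = np.unique(T_binned, axis=0, return_inverse=True)
    t_symbols = t_symbols.ravel()
    joint = np.zeros((x_symbols.max() + 1, t_symbols.max() + 1))
    np.add.at(joint, (x_symbols, t_symbols), 1)  # joint counts
    joint /= joint.sum()                         # joint distribution p(x, t)
    px = joint.sum(axis=1, keepdims=True)        # marginal p(x)
    pt = joint.sum(axis=0, keepdims=True)        # marginal p(t)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ pt)[nz])))
```

For an information-plane plot, such estimates of I(X;T) and I(T;Y) are computed per layer after each epoch and plotted against each other.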
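
Sacred experiments are declared with a config function and a main function and can log scalar metrics per step. The sketch below is a hypothetical example of that pattern; the experiment name, config values, and metric keys are illustrative and do not refer to this repository's actual experiment files.

```python
# Hypothetical Sacred experiment; names and values are illustrative only.
from sacred import Experiment

ex = Experiment('infoplane_demo')

@ex.config
def config():
    n_epochs = 10   # number of training epochs
    n_bins = 30     # bins used for the MI estimate

@ex.automain
def run(n_epochs, n_bins):
    for epoch in range(n_epochs):
        # ... train one epoch, then estimate I(X;T) and I(T;Y) per layer ...
        i_xt, i_ty = 1.0, 0.5  # placeholder values
        ex.log_scalar('I_XT/layer_0', i_xt, epoch)
        ex.log_scalar('I_TY/layer_0', i_ty, epoch)
```

Such a script would be run as `python demo.py with n_epochs=20`, with Sacred capturing the config and logged metrics for each run.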
Extensive documentation, including the theoretical background and an API reference, can be found on Read the Docs.