- GNU/Linux (we recommend Ubuntu 16.04).
- Python 3.6.
- libhdf5.
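You can quickly verify the Python and libhdf5 requirements above with, for example (the exact library name reported may vary by distribution):
$ python3 --version
$ ldconfig -p | grep libhdf5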
Additionally, we recommend the use of CUDA on an NVIDIA GPU to speed up TensorFlow experiments. Installation instructions are available on the NVIDIA website.
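Once TensorFlow has been installed (see the build steps below), you can confirm that it sees your GPU with, for example, the following check (this assumes the TensorFlow 1.x API):
$ python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
This prints True if a usable CUDA GPU was found.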
Automatic install:
For your convenience, we provide a script ./bootstrap.sh to automate the installation of these dependencies. Supported Linux distributions are Ubuntu, CentOS, and Arch Linux. The script will print installation commands for any missing requirements. These can either be typed in by hand or executed automatically using:
$ ./bootstrap.sh | bash
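If you would rather review the commands before executing them, you can capture the script's output to a file first (the file name here is arbitrary):
$ ./bootstrap.sh > deps.sh
$ cat deps.sh  # review the commands
$ bash deps.sh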
Installing system-wide dependencies requires sudo privileges, so you may be prompted for your password. Please note that CUDA must be installed manually. Once the dependencies are in place, build the code:
$ ./configure
... # answer yes/no prompts
$ make
The configure script determines which Python packages to install, based on the availability of CUDA. Installation does not require sudo privileges. The only directory modified outside of this repository is ~/.ipython/kernels.
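As a rough indication of which case will apply on your machine, you can check whether the CUDA compiler is on your PATH; this is only a heuristic, and the configure script's actual checks may differ:
$ which nvcc  # prints a path if the CUDA toolkit is installed and on PATH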
The executable code is in the form of Jupyter notebooks. Launch the Jupyter server using:
$ make run
Note: If you wish to run the Jupyter server on a remote machine (for example, you are working on a server over SSH), you will need to configure the Jupyter server for public access. See the official documentation for instructions.
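Alternatively, if you would rather not expose the server publicly, an SSH tunnel works well. Assuming Jupyter's default port 8888 (the actual port is printed when the server starts) and a placeholder hostname:
$ ssh -L 8888:localhost:8888 user@remote-server
You can then browse to http://localhost:8888 on your local machine.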
The following notebooks are available:
- Language Model.ipynb - demonstrates how OpenCL source code is transformed and encoded for machine learning.
- Case Study A.ipynb - code for the Heterogeneous Mapping experiments in the paper.
- Case Study B.ipynb - code for the OpenCL Thread Coarsening experiments in the paper.
Many of the experiments are long running and computationally expensive. Run times can range from hours to days, depending on hardware. To amortize these costs, expensive experimental data is cached for re-use once it has been produced. If you would like to remove any cached data, run:
$ make -C ../data refresh
...
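If you would like to keep a copy of previously produced results before refreshing, you can simply copy the data directory first (the destination name is arbitrary):
$ cp -r ../data ../data.backup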
Long-running experiments can be a hassle in Jupyter notebooks, as any loss of connection to the notebook may halt execution. Because of this, we provide a headless execution mode, which converts the Jupyter notebooks into standalone Python scripts. These scripts produce the cached data, which can then be viewed from the notebooks. To use this headless execution mode, run:
$ make run-batch
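Since the point of headless mode is to survive the loss of an interactive session, it is worth launching it under nohup (or inside tmux/screen) when working over SSH; the log file name here is arbitrary:
$ nohup make run-batch > run-batch.log 2>&1 &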
In addition to the code necessary to reproduce our experiments, we also supply the actual experimental results used in the paper, as obtained on our hardware. To unpack our cached data, run:
$ make -C ../data all
...
Note that this replaces any cached data you may have produced.
To clean up the build, run:
$ make clean
This does not require sudo privileges. The only directory modified outside of this repository is ~/.ipython/kernels. System-wide requirements are not removed.