This project recreates pictures of birds in the style of Francis Bacon's paintings (1930s-1980s).
A few paintings and baconbirds-created images (20 training epochs):
The code here is a lightly adapted version of the TensorFlow implementation of CycleGAN written by Harry Yang and Nathan Silberman.
Their Original Code | CycleGAN Project | Zhu et al. 2017
CycleGAN is commonly used for style transfer, including patterns (e.g. horses to zebras) and textures (e.g. landscapes as Van Goghs). However, it is limited with respect to geometric changes: dogs to cats or apples to oranges can look a little lopsided, and this notable failure case similarly demonstrates the limits of semantic segmentation + style transfer.
While more abstracted examples of style transfer can be found (Boshi et al. 2017, this Northwestern project), they still tend to work better when an overall textural style (Hokusai or Mondrian) is applied rather than an object-distorting style (Picasso).
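For intuition, the constraint behind that limitation is CycleGAN's cycle-consistency loss, which penalizes any translation that can't be undone. A minimal numpy sketch (simplified from the full TensorFlow objective; `lam=10.0` is the weight used in the original paper):

```python
import numpy as np

def cycle_consistency_loss(real_a, cycled_a, real_b, cycled_b, lam=10.0):
    """L1 penalty pulling G_BA(G_AB(a)) back toward a, and vice versa.

    Because every pixel must survive the round trip, textures transfer
    easily while large geometric changes are discouraged.
    """
    return lam * (np.mean(np.abs(real_a - cycled_a)) +
                  np.mean(np.abs(real_b - cycled_b)))

# Identical round trips incur zero loss:
a = np.zeros((4, 4, 3))
b = np.ones((4, 4, 3))
print(cycle_consistency_loss(a, a, b, b))  # 0.0
```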
This application of CycleGAN builds on the idea of adapting abstract impressionism: we transfer the style of Francis Bacon onto birds from the NABirds dataset. We use 133 Bacon paintings (1930s-1980s; we omit the very cubist works from the late 1920s). We're aiming to see whether any of the distortive/horror elements of Bacon's style persist in the generated images.
Sample training data:
We are reproducing here, for reference, instructions from Yang and Silberman's implementation, and adding a few notes.
1. Set up an anaconda virtual environment (this works best in one).

- For the UCLA hoffman2 cluster (GPU):

      qrsh -l gpu,P4
      module load python/anaconda3
      . "/u/local/apps/anaconda3/etc/profile.d/conda.sh"
      . $CONDA_DIR/etc/profile.d/conda.sh

- On Mac OS X:

      conda create -n fortf python=3.5 anaconda
      conda activate fortf
2. Set up your training/testing data.

- I downloaded the horse2zebra dataset (for testing) and then left all the directory names the same (sorry to my brother, who will hate this). My baconbirds data are included here in the horse2zebra folder, but if you're making your own, you'll need jpgs or pngs. The directory structure is:

      CycleGAN_TensorFlow
      |- input folder (horse2zebra)
         |- trainA
         |- trainB
         |- testA
         |- testB
3. Create the csvs for loading/processing data.

- Edit cyclegan_datasets.py with:
  - the number of training/testing images for your larger dataset
  - jpg or png as your file format
  - the paths where your training and testing index files will go, something like /path/to/CycleGAN_TensorFlow/input/horse2zebra_train.csv
- Run create_cyclegan_dataset.py for your training AND testing data:

      python -m CycleGAN_TensorFlow.create_cyclegan_dataset --image_path_a=/path/to/trainA --image_path_b=/path/to/trainB --dataset_name="horse2zebra_train" --do_shuffle=0
      python -m CycleGAN_TensorFlow.create_cyclegan_dataset --image_path_a=/path/to/testA --image_path_b=/path/to/testB --dataset_name="horse2zebra_test" --do_shuffle=0
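For reference, in my copy of cyclegan_datasets.py those edits land in three dicts along these lines (dict names and the example counts are assumptions from my setup -- verify them against your copy of the file):

```python
# cyclegan_datasets.py (sketch; names and numbers are illustrative --
# check them against your copy of the file)
DATASET_TO_SIZES = {
    'horse2zebra_train': 1334,  # size of the larger training set
    'horse2zebra_test': 140,    # size of the larger testing set
}
DATASET_TO_IMAGETYPE = {
    'horse2zebra_train': '.jpg',
    'horse2zebra_test': '.jpg',
}
PATH_TO_CSV = {
    'horse2zebra_train': '/path/to/CycleGAN_TensorFlow/input/horse2zebra_train.csv',
    'horse2zebra_test': '/path/to/CycleGAN_TensorFlow/input/horse2zebra_test.csv',
}
```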
4. Train the model.

- Create or edit the config file (/CycleGAN_TensorFlow/configs/exp_01.json is the official CycleGAN base setup).
- Run the main module (change the config/output paths if necessary):

      python -m CycleGAN_TensorFlow.main \
        --to_train=1 \
        --log_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01 \
        --config_filename=CycleGAN_TensorFlow/configs/exp_01.json
5. Keep training from a stoppage/checkpoint.

- If you stop, you can pick back up where you left off -- helpful for checking progress or adding more epochs:

      python -m CycleGAN_TensorFlow.main \
        --to_train=2 \
        --log_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01 \
        --config_filename=CycleGAN_TensorFlow/configs/exp_01.json \
        --checkpoint_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01/#timestamp#
6. Test the model.

- Make sure you assembled your testing dataset and created the index csv (step 3).
- This runs on the test data and saves results to CycleGAN_TensorFlow/output/cyclegan/exp_01/#timestamp#:

      python -m CycleGAN_TensorFlow.main \
        --to_train=0 \
        --log_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01 \
        --config_filename=CycleGAN_TensorFlow/configs/exp_01_test.json \
        --checkpoint_dir=CycleGAN_TensorFlow/output/cyclegan/exp_01/#old_timestamp#
- Right now in main.py, images are saved with matplotlib.pyplot.imsave. You could also use imageio.imsave, if preferred. If you're taking the code directly from Yang and Silberman, you'll need to change it from scipy.misc.imsave, which is deprecated.
- If I don't run this in a conda environment, TensorFlow gets pretty buggy (v1/v2 conflicts, deprecated calls, etc.). It's also a bit of a mess in Colab. Fair warning. Also, as of this writing, TensorFlow can't be used with Python 3.8; it worked well with 3.5.
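A tiny guard encoding those version notes (bounds are assumptions from the report above: known-good on Python 3.5, and TF 1.x has no Python 3.8 wheels):

```python
import sys

def tf1_friendly(version=None):
    """True if the interpreter version is in the range this repo is
    reported to work with (>= 3.5, < 3.8)."""
    major, minor = version or sys.version_info[:2]
    return (3, 5) <= (major, minor) < (3, 8)

print(tf1_friendly((3, 5)))  # True
print(tf1_friendly((3, 8)))  # False
```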
- I don't own and didn't make any of this; it's just for fun. That said, let me know if you have thoughts or find bugs!
- After 20 epochs
- After 15 epochs
- After 10 epochs
- After 6 epochs