PyTorch Implementation of the Coupled GAN algorithm for Unsupervised Image-to-Image Translation

License

Copyright (C) 2017 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-ND 4.0 license (https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode).

Paper

Ming-Yu Liu, Thomas Breuel, Jan Kautz, "Unsupervised Image-to-Image Translation Networks", NIPS 2017.

Please cite our paper if this software is used in your publications.

Dependency

pytorch, yaml, opencv, and tensorboard (from https://github.com/dmlc/tensorboard).

If you use Anaconda2, the following commands install all of the dependencies.

conda install pytorch torchvision cuda80 -c soumith
conda install -c anaconda yaml=0.1.6
conda install -c menpo opencv=2.4.11
pip install tensorboard
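
As a quick sanity check after installation, a small stdlib-only sketch (not part of the repository; it assumes the usual import names torch, yaml, cv2, and tensorboard) can report which dependencies are still missing:

```python
import importlib.util

def missing_deps(modules=("torch", "yaml", "cv2", "tensorboard")):
    """Return the subset of modules that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    missing = missing_deps()
    print("all dependencies found" if not missing else "missing: %s" % missing)
```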

Example Usage

Testing

Cat to Tiger Translation
  1. Download the pretrained model from the Pretrained models Google Drive folder to <outputs/unit/cat2tiger>

  2. Go to the source folder and run the following commands to translate the first and second cat images into tigers

    python cocogan_translate_one_image.py --config ../exps/unit/cat2tiger.yaml --a2b 1 --weights ../outputs/unit/cat2tiger/cat2tiger_gen_00500000.pkl --image_name ../images/cat001.jpg --output_image_name ../results/cat2tiger_cat001.jpg
    
    python cocogan_translate_one_image.py --config ../exps/unit/cat2tiger.yaml --a2b 1 --weights ../outputs/unit/cat2tiger/cat2tiger_gen_00500000.pkl --image_name ../images/cat002.jpg --output_image_name ../results/cat2tiger_cat002.jpg
    
  3. Check out the results in <results>. Left: input. Right: output

Corgi to Husky Translation
  1. Download the pretrained model from the Pretrained models Google Drive folder to <outputs/unit/corgi2husky>

  2. Go to the source folder and run the following commands to translate a corgi to a husky and a husky to a corgi

    python cocogan_translate_one_image.py --config ../exps/unit/corgi2husky.yaml --a2b 1 --weights ../outputs/unit/corgi2husky/corgi2husky_gen_00500000.pkl --image_name ../images/corgi001.jpg --output_image_name ../results/corgi2husky_corgi001.jpg
    
    python cocogan_translate_one_image.py --config ../exps/unit/corgi2husky.yaml --a2b 0 --weights ../outputs/unit/corgi2husky/corgi2husky_gen_00500000.pkl --image_name ../images/husky001.jpg --output_image_name ../results/husky2corgi_husky001.jpg
    
  3. Check out the results in <results>. Left: input. Right: output
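
The two translation commands above differ only in the --a2b direction flag and the file paths, so a small helper (hypothetical, not part of the repository) can assemble the same command lines programmatically; defaults below mirror the corgi2husky example:

```python
# Hypothetical helper that assembles the cocogan_translate_one_image.py
# command line shown above; all paths mirror the corgi2husky example.
def build_translate_cmd(image, output, a2b,
                        config="../exps/unit/corgi2husky.yaml",
                        weights="../outputs/unit/corgi2husky/"
                                "corgi2husky_gen_00500000.pkl"):
    """a2b=1 translates domain A to B (corgi -> husky); a2b=0 the reverse."""
    return ["python", "cocogan_translate_one_image.py",
            "--config", config,
            "--a2b", str(a2b),
            "--weights", weights,
            "--image_name", image,
            "--output_image_name", output]

if __name__ == "__main__":
    cmd = build_translate_cmd("../images/corgi001.jpg",
                              "../results/corgi2husky_corgi001.jpg", a2b=1)
    print(" ".join(cmd))
```

Pass the resulting list to subprocess.run(cmd, check=True) from the source folder to execute it.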

Training

  1. Download the aligned and cropped version of the CelebA dataset to <datasets/celeba>.

  2. Go to <datasets/celeba>, crop the middle region of each face image, and resize it to 128x128

    python crop_and_resize.py
    
  3. Set up the yaml file; see <exps/unit/blondhair.yaml> for an example

  4. Go to the source folder and start training

    python cocogan_train.py --config ../exps/unit/blondhair.yaml --log ../logs
    
  5. To resume training, go to the source folder and run

    python cocogan_train.py --config ../exps/unit/blondhair.yaml --log ../logs --resume 1
    
  6. Intermediate image outputs and model binary files are written to <outputs/unit/blondhair>
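
The preprocessing in step 2 amounts to a center crop followed by a resize. The snippet below is a minimal pure-Python illustration of that idea; the actual crop_and_resize.py likely uses OpenCV, and its exact crop region and interpolation may differ:

```python
# Minimal sketch of center-crop + resize on a 2-D pixel grid
# (nested lists stand in for an image array).
def center_crop(img, size):
    """Crop a size x size square from the middle of the grid."""
    h, w = len(img), len(img[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]

def resize_nearest(img, out):
    """Nearest-neighbour resize of a square grid to out x out."""
    n = len(img)
    return [[img[r * n // out][c * n // out] for c in range(out)]
            for r in range(out)]

if __name__ == "__main__":
    img = [[r * 10 + c for c in range(6)] for r in range(6)]
    print(resize_nearest(center_crop(img, 4), 2))
```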

For more pretrained models, please check out the Google Drive folder Pretrained models.

SVHN2MNIST Adaptation

  1. Go to the source folder and execute

    python cocogan_train_domain_adaptation.py --config ../exps/unit/svhn2mnist.yaml --log ../logs