Copyright (C) 2017 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-ND 4.0 license (https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode).
Ming-Yu Liu, Thomas Breuel, Jan Kautz, "Unsupervised Image-to-Image Translation Networks," NIPS 2017
Please cite our paper if this software is used in your publications.
The code requires pytorch, yaml, opencv, and tensorboard (from https://github.com/dmlc/tensorboard).
If you use Anaconda2, then the following commands can be used to install all the dependencies.
conda install pytorch torchvision cuda80 -c soumith
conda install -c anaconda yaml=0.1.6
conda install -c menpo opencv=2.4.11
pip install tensorboard
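To verify the installation, a quick check like the following can be used. This is a minimal sketch; it only confirms that the packages import and that a CUDA device is visible, and it assumes the Python yaml bindings (PyYAML) are importable as yaml.

# check_deps.py -- minimal sanity check for the dependencies listed above
from __future__ import print_function
import torch
import yaml
import cv2

print("pytorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("opencv:", cv2.__version__)
print("pyyaml:", yaml.__version__)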
- Download the pretrained model from the provided link to <outputs/unit/cat2tiger>
- Go to the directory containing the scripts and run the following commands to translate the two example cat images to tigers (a batch-translation sketch follows this example)
python cocogan_translate_one_image.py --config ../exps/unit/cat2tiger.yaml --a2b 1 --weights ../outputs/unit/cat2tiger/cat2tiger_gen_00500000.pkl --image_name ../images/cat001.jpg --output_image_name ../results/cat2tiger_cat001.jpg
python cocogan_translate_one_image.py --config ../exps/unit/cat2tiger.yaml --a2b 1 --weights ../outputs/unit/cat2tiger/cat2tiger_gen_00500000.pkl --image_name ../images/cat002.jpg --output_image_name ../results/cat2tiger_cat002.jpg
- Check out the results in <results>. Left: Input. Right: Output
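To translate several images in one go, the single-image script can be driven from a small loop. This is a minimal sketch, assuming it is run from the same directory as cocogan_translate_one_image.py and that the flags match the commands above; the input file list here is only an example.

# batch_translate.py -- loop the single-image translator over several inputs (sketch)
import os
import subprocess

config  = "../exps/unit/cat2tiger.yaml"
weights = "../outputs/unit/cat2tiger/cat2tiger_gen_00500000.pkl"
inputs  = ["../images/cat001.jpg", "../images/cat002.jpg"]  # example file list

for image in inputs:
    out = "../results/cat2tiger_" + os.path.basename(image)
    subprocess.check_call([
        "python", "cocogan_translate_one_image.py",
        "--config", config, "--a2b", "1",
        "--weights", weights,
        "--image_name", image,
        "--output_image_name", out,
    ])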
- Download the pretrained model from the provided link to <outputs/unit/corgi2husky>
- Go to the directory containing the scripts and run the following commands to translate the example corgi image to a husky and the example husky image to a corgi
python cocogan_translate_one_image.py --config ../exps/unit/corgi2husky.yaml --a2b 1 --weights ../outputs/unit/corgi2husky/corgi2husky_gen_00500000.pkl --image_name ../images/corgi001.jpg --output_image_name ../results/corgi2husky_corgi001.jpg
python cocogan_translate_one_image.py --config ../exps/unit/corgi2husky.yaml --a2b 0 --weights ../outputs/unit/corgi2husky/corgi2husky_gen_00500000.pkl --image_name ../images/husky001.jpg --output_image_name ../results/husky2corgi_husky001.jpg
- Check out the results in <results>. Left: Input. Right: Output
- Download the aligned and cropped version of the CelebA dataset to <datasets/celeba>
- Go to <datasets/celeba>, crop the middle region of the face images, and resize them to 128x128
python crop_and_resize.py
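The preprocessing step amounts to a centered crop followed by a resize to 128x128. Below is a minimal OpenCV sketch of that operation; the exact crop window used by crop_and_resize.py may differ, and the file names are only examples.

# center_crop_resize.py -- center-crop a face image and resize it to 128x128 (sketch)
import cv2

def center_crop_resize(in_path, out_path, size=128):
    img = cv2.imread(in_path)
    h, w = img.shape[:2]
    side = min(h, w)                       # largest centered square
    top  = (h - side) // 2
    left = (w - side) // 2
    crop = img[top:top + side, left:left + side]
    cv2.imwrite(out_path, cv2.resize(crop, (size, size)))

# example usage with hypothetical file names
center_crop_resize("000001.jpg", "000001_128.jpg")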
- Set up the yaml file. Check out <exps/unit/blondhair.yaml> for an example
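The training and translation scripts read their hyperparameters from this yaml file. To inspect a config before launching a job, a small sketch like the following can be used; it only prints whatever top-level entries the file defines and assumes no particular keys.

# show_config.py -- print the top-level entries of a UNIT yaml config (sketch)
import yaml

with open("../exps/unit/blondhair.yaml") as f:
    config = yaml.load(f)   # old PyYAML versions accept load() without a Loader argument

for key, value in config.items():
    print("{}: {}".format(key, value))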
- Go to the directory containing the scripts and start training
python cocogan_train.py --config ../exps/unit/blondhair.yaml --log ../logs
- To resume training, run
python cocogan_train.py --config ../exps/unit/blondhair.yaml --log ../logs --resume 1
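Training progress written under ../logs can typically be visualized with the tensorboard package installed above, assuming its standard command-line interface:
tensorboard --logdir ../logs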
- Intermediate image outputs and model binary files are saved in <outputs/unit/blondhair>
For more pretrained models, please check out the Google Drive folder Pretrained models.
- Go to the directory containing the scripts and execute
python cocogan_train_domain_adaptation.py --config ../exps/unit/svhn2mnist.yaml --log ../logs