


# BicycleGAN implementation in Tensorflow

As part of the implementation series of Joseph Lim's group at USC, our motivation is to accelerate (or sometimes delay) research in the AI community by promoting open-source projects. To this end, we implement state-of-the-art research papers and share them publicly with concise reports. Please visit our group's GitHub site for other projects.

This project was implemented by Youngwoon Lee, and the code was reviewed by Yuan-Hong Liao before being published.

## Description

This repo is a Tensorflow implementation of BicycleGAN (Toward Multimodal Image-to-Image Translation) on the Pix2Pix datasets.

This paper presents a framework for the image-to-image translation task, where we are interested in converting an image from one domain (e.g., sketch) to another domain (e.g., photo). While the previous method (pix2pix) cannot generate diverse outputs, this paper proposes a method by which one input image (e.g., a sketch of shoes) can be translated into a set of output images (e.g., shoes with different colors and textures).

The proposed method encourages diverse results by generating output images from noise and then reconstructing that noise from the output images. The framework consists of two cycles: B -> z' -> B' and noise z -> output B' -> noise z'.

The first part is the conditional Variational Auto-Encoder GAN (cVAE-GAN), whose architecture is similar to the pix2pix network with added noise. In cVAE-GAN, a generator G takes an input image A (sketch) and a noise vector z, and outputs its counterpart in domain B (photo) with variations. However, it was reported that the generator G ends up ignoring the added noise.
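In cVAE-GAN, the latent code is drawn from the encoder's predicted distribution using the reparameterization trick, which keeps the sampling step differentiable. Below is a minimal numpy sketch of that step only, not this repo's actual TensorFlow code; the encoder outputs `mu` and `log_sigma` here are placeholder values:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_sigma, rng):
    # z = mu + sigma * eps with eps ~ N(0, I); written this way, the
    # sample stays differentiable w.r.t. mu and log_sigma when
    # implemented in an autodiff framework.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

# Placeholder encoder outputs for a batch of 2 images with z_dim = 8.
mu = np.zeros((2, 8))
log_sigma = np.zeros((2, 8))  # sigma = exp(0) = 1
z = reparameterize(mu, log_sigma, rng)
print(z.shape)  # (2, 8)
```

The sampled `z` is then concatenated with the input image A and fed to the generator G.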

The second part, the conditional Latent Regressor GAN (cLR-GAN), forces the generator to make use of the noise z. An encoder E maps the visual features (color and texture) of a generated image B' to a latent vector z' that should be close to the original noise z. To minimize |z - z'|, images generated from different noise vectors must themselves be different; therefore, cLR-GAN alleviates the mode-collapse issue. Moreover, a KL-divergence loss KL(p(z) || N(0, I)) encourages the latent vectors to follow a Gaussian distribution, so Gaussian noise can be used at test time.
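Both regularizers have simple closed forms. As a minimal numpy sketch (not this repo's TensorFlow code, where names may differ), the latent regression term |z - z'| and the KL term for a diagonal-Gaussian encoder output can be written as:

```python
import numpy as np

def latent_regression_l1(z, z_recon):
    # cLR-GAN term: mean absolute error between the sampled noise z
    # and the noise z' recovered by the encoder E from the output B'.
    return np.mean(np.abs(z - z_recon))

def kl_to_standard_normal(mu, log_sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ),
    # summed over latent dimensions and averaged over the batch.
    return 0.5 * np.mean(
        np.sum(mu**2 + np.exp(2.0 * log_sigma) - 1.0 - 2.0 * log_sigma,
               axis=1))

# Sanity check: the KL of a standard normal against itself is zero.
mu = np.zeros((4, 8))
log_sigma = np.zeros((4, 8))
print(kl_to_standard_normal(mu, log_sigma))  # 0.0
```

Minimizing the KL term is what lets testing replace the encoder entirely: latent codes can simply be drawn from N(0, I).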

Therefore, the total loss for BicycleGAN is:

<img src="assets/Bi-Cycle-GAN-loss.png" width=500>
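As a sketch, the terms combine into a single weighted sum. The lambda defaults below follow the values reported in the BicycleGAN paper and are not necessarily the ones this repo uses:

```python
def bicyclegan_total_loss(gan_vae, img_l1, gan_lr, latent_l1, kl,
                          lam_img=10.0, lam_latent=0.5, lam_kl=0.01):
    # Weighted sum of: cVAE-GAN adversarial loss, image reconstruction
    # |B - B'|, cLR-GAN adversarial loss, latent reconstruction |z - z'|,
    # and the KL regularizer. Lambda defaults follow the paper's
    # reported settings, not necessarily this repo's.
    return (gan_vae + lam_img * img_l1
            + gan_lr + lam_latent * latent_l1
            + lam_kl * kl)

# Example with hypothetical per-term values:
print(bicyclegan_total_loss(1.0, 0.1, 1.0, 0.2, 0.5))
```

G and E are trained to minimize this objective while the discriminators maximize the adversarial terms.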

## Dependencies

## Usage

* Execute the following command to download the specified dataset and train a model:

  ```shell
  $ python bicycle-gan.py --task edges2shoes --image_size 256
  ```

* To generate 256x256 images, set `--image_size` to 256; otherwise images are resized to and generated at 128x128. Once training ends, the test images are translated to the target domain and the results are saved to `./results/edges2shoes_2017-07-07_07-07-07/`.
* Available datasets: edges2shoes, edges2handbags, maps, cityscapes, facades

* Check the training status on Tensorboard:

  ```shell
  $ tensorboard --logdir=./logs
  ```

## Results

### edges2shoes

| Linearly sampled noise | Randomly sampled noise |
| --- | --- |
| edges2shoes1_linear | edges2shoes2_random |
| edges2shoes2_linear | edges2shoes2_random |

Training progress (training-edges2shoes.jpg)

### day2night

In progress.

## References

* Toward Multimodal Image-to-Image Translation, Zhu et al., NIPS 2017