This is a TensorFlow (2.x) implementation of RetinexNet:

Deep Retinex Decomposition for Low-Light Enhancement. In BMVC'18 (Oral Presentation).
Chen Wei*, Wenjing Wang*, Wenhan Yang, Jiaying Liu. (* indicates equal contributions)
### Testing Usage ###

To quickly test your own images with our model, run:

```shell
# --use_gpu:  use the GPU (1) or CPU (0)
# --gpu_mem:  fraction of GPU memory to use
# --decom:    0 = save only the enhanced results, 1 = also save the decomposition results
python main.py \
    --use_gpu=1 \
    --gpu_idx=0 \
    --gpu_mem=0.5 \
    --phase=test \
    --test_dir=/path/to/your/test/dir/ \
    --save_dir=/path/to/save/results/ \
    --decom=0
```

Or you can just run the demo cases with

```shell
python main.py --phase=test
```

and the results will be saved under `./test_results/`.
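The test phase reads the images in `--test_dir` and writes the enhanced versions to `--save_dir`. As a rough illustration of the expected input format (RGB images, typically normalized to [0, 1] floats before being fed to the network), here is a minimal loading sketch; the function name and exact preprocessing are assumptions, not the repo's actual `utils.py` API:

```python
import numpy as np
from PIL import Image

def load_test_image(path):
    """Illustrative loader: RGB image -> float32 array in [0, 1] with a batch axis."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return img[np.newaxis, ...]  # shape (1, H, W, 3)
```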
### Training Usage ###

First, download the training data from our project page. Save the training pairs of our LOL dataset under `./data/our485/`, and the synthetic pairs under `./data/syn/`.
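It can help to sanity-check the data layout before training. The sketch below assumes the LOL-style pairing of `low/` and `high/` subfolders with matching PNG filenames under each of `./data/our485/` and `./data/syn/`; adjust the paths and extension if your copy of the data is organized differently:

```python
import glob
import os

# Illustrative check: each subset should contain the same number of low-light
# and normal-light images (paired by filename). Paths/extension are assumptions.
for subset in ("our485", "syn"):
    low = sorted(glob.glob(os.path.join("data", subset, "low", "*.png")))
    high = sorted(glob.glob(os.path.join("data", subset, "high", "*.png")))
    assert low and len(low) == len(high), f"unpaired or empty subset: {subset}"
    print(f"{subset}: {len(low)} low/high pairs")
```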
Then, just run:

```shell
# --use_gpu:          use the GPU (1) or CPU (0)
# --gpu_mem:          fraction of GPU memory to use
# --epoch:            number of training epochs
# --patch_size:       size of the training patches
# --start_lr:         initial learning rate for Adam
# --eval_every_epoch: evaluate and save a checkpoint every this many epochs
# --checkpoint_dir:   created automatically if it does not exist
# --sample_dir:       directory for saving evaluation results during training
python main.py \
    --use_gpu=1 \
    --gpu_idx=0 \
    --gpu_mem=0.5 \
    --phase=train \
    --epoch=100 \
    --batch_size=16 \
    --patch_size=48 \
    --start_lr=0.001 \
    --eval_every_epoch=20 \
    --checkpoint_dir=./checkpoint \
    --sample_dir=./sample
```
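For orientation, the heart of the training objective is the Retinex reconstruction constraint from the paper: each input should be reproduced by its reflectance multiplied by its illumination, and, because reflectance is assumed to be shared across a low-/normal-light pair, also by the other image's reflectance. Below is a minimal TensorFlow sketch of that reconstruction term; the tensor names and the 0.001 cross-term weight are illustrative and may differ from the repo's model.py:

```python
import tensorflow as tf

def decom_recon_loss(r_low, i_low, r_high, i_high, s_low, s_high):
    """Reconstruction part of the decomposition loss (illustrative sketch).

    r_*: reflectance maps, shape (N, H, W, 3); i_*: illumination maps,
    shape (N, H, W, 1); s_*: the low-/normal-light input images.
    """
    # Broadcast the single-channel illumination maps to 3 channels.
    i_low_3 = tf.concat([i_low] * 3, axis=-1)
    i_high_3 = tf.concat([i_high] * 3, axis=-1)

    # Each image should be reconstructed by its own reflectance * illumination.
    recon_low = tf.reduce_mean(tf.abs(r_low * i_low_3 - s_low))
    recon_high = tf.reduce_mean(tf.abs(r_high * i_high_3 - s_high))

    # Cross terms: the *other* image's reflectance, combined with this image's
    # illumination, should also reconstruct it (shared-reflectance assumption).
    recon_mutual_low = tf.reduce_mean(tf.abs(r_high * i_low_3 - s_low))
    recon_mutual_high = tf.reduce_mean(tf.abs(r_low * i_high_3 - s_high))

    return (recon_low + recon_high
            + 0.001 * recon_mutual_low + 0.001 * recon_mutual_high)
```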
Tips:
### Citation ###
```
@inproceedings{Chen2018Retinex,
  title={Deep Retinex Decomposition for Low-Light Enhancement},
  author={Chen Wei and Wenjing Wang and Wenhan Yang and Jiaying Liu},
  booktitle={British Machine Vision Conference},
  year={2018},
  organization={British Machine Vision Association}
}
```