This repository is an implementation of "Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections".
To reduce computational cost, the network uses stride 2 in the first convolution layer and the last transposed convolution layer.
Credits: https://github.com/yjn870/REDNet-pytorch

Requirements:
- PyTorch
- tqdm
- NumPy
- Pillow
Results on a sample image (the example images are omitted here): Input | JPEG (Quality 10) | AR-CNN | RED-Net 10 | RED-Net 20 | RED-Net 30
- In dataset.py, vary the train/val set sizes as needed. By default, the first 50 tfrecords are used for training and the next 20 (50-70) as the validation set. (Keep it as is for now.)
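The split described above can be sketched as follows. `split_tfrecords` is a hypothetical helper (the actual logic lives in dataset.py), assuming the tfrecord folders sort lexicographically:

```python
import os

def split_tfrecords(images_dir, n_train=50, n_val=20):
    """Split sorted tfrecord folders into train/val lists.

    By default the first 50 tfrecords are used for training and the
    next 20 (indices 50-69) for validation, matching dataset.py.
    """
    records = sorted(d for d in os.listdir(images_dir)
                     if d.startswith("tfrecord"))
    train = records[:n_train]
    val = records[n_train:n_train + n_val]
    return train, val
```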
- dataset.py also currently takes the 5Mbps images for the Waymo dataset (line #15) and the 1Mbps images for the BDD dataset (line #128).
- Expected directory structure for raw_images_dir and comp_images_dir:

```
raw_images_dir
|
------- tfrecord_xxxx
        |
        ------- *.png or *.jpg

comp_images_dir
|
------- tfrecord_xxxx
        |
        ------- val_cbr_xMbps_xMbuf
                |
                ------- val
                        |
                        ------- *.png
```
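A path-mapping helper can illustrate this layout. `comp_path_for` is hypothetical, and the default `val_cbr_5Mbps_5Mbuf` folder name is an assumption extrapolated from the `val_cbr_xMbps_xMbuf` pattern above:

```python
import os

def comp_path_for(raw_path, raw_root, comp_root,
                  bitrate="5Mbps", buf="5Mbuf"):
    """Map a raw image path to its compressed counterpart.

    Assumes the layout above: the compressed copy of
    raw_root/tfrecord_xxxx/frame.png lives at
    comp_root/tfrecord_xxxx/val_cbr_<bitrate>_<buf>/val/frame.png.
    The bitrate/buffer folder name is an assumption, not taken
    from the repository code.
    """
    rel = os.path.relpath(raw_path, raw_root)   # tfrecord_xxxx/frame.png
    record, fname = os.path.split(rel)
    return os.path.join(comp_root, record,
                        f"val_cbr_{bitrate}_{buf}", "val", fname)
```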
- When training begins, the model weights will be saved every epoch.
```
# --arch: one of REDNet10, REDNet20, REDNet30
# --outputs_dir: where the weights are stored
python train.py --arch "REDNet30" \
                --raw_images_dir "" \
                --comp_images_dir "" \
                --outputs_dir "" \
                --patch_size 50 \
                --batch_size 2 \
                --num_epochs 20 \
                --lr 1e-4 \
                --threads 8 \
                --seed 123
```
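The per-epoch checkpointing mentioned above can be sketched roughly as below; `train_loop` is a simplified stand-in for what train.py does, not its actual code:

```python
import os
import torch

def train_loop(model, optimizer, loader, criterion,
               num_epochs, outputs_dir, device="cpu"):
    """Minimal training sketch: one optimizer step per batch, and the
    model weights saved after every epoch (as the README notes)."""
    model.to(device)
    for epoch in range(num_epochs):
        model.train()
        for comp, raw in loader:
            comp, raw = comp.to(device), raw.to(device)
            optimizer.zero_grad()
            loss = criterion(model(comp), raw)
            loss.backward()
            optimizer.step()
        # checkpoint after each epoch; the filename pattern here is
        # illustrative, not necessarily the one train.py uses
        torch.save(model.state_dict(),
                   os.path.join(outputs_dir, f"epoch_{epoch}.pth"))
```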
- Feed the compressed images to the model to get the enhanced image.
```
# --arch: one of REDNet10, REDNet20, REDNet30
# --image_path: folder containing the compressed images
# --outputs_dir: folder to store the enhanced images
python inference.py --arch "REDNet30" \
                    --weights_path "" \
                    --image_path "" \
                    --outputs_dir ""
```
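At its core, the enhancement step is a single forward pass per image. A minimal sketch, assuming the model takes and returns float tensors in [0, 1] with shape (N, 3, H, W):

```python
import torch

@torch.no_grad()
def enhance(model, image_tensor):
    """Run one compressed image through the model.

    image_tensor: float tensor in [0, 1], shape (3, H, W).
    The clamp keeps the output a valid image before saving.
    """
    model.eval()
    out = model(image_tensor.unsqueeze(0)).squeeze(0)
    return out.clamp(0.0, 1.0)
```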
- The expected --image_path is a folder containing all 200 tfrecords (H.264-compressed images).
- In lines #52 and #53 we enhance only the images after tfrecord 50.
- In line #56 we enhance only the 5Mbps images.
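The two filters in the notes above (tfrecords from 50 onward, 5Mbps images only) could be expressed as a small selection helper. `records_to_enhance` is hypothetical, and the folder-name parsing and the >= 50 cutoff are assumptions:

```python
def records_to_enhance(record_names, start_idx=50, bitrate="5Mbps"):
    """Select which tfrecord folders inference should process.

    Keeps only tfrecords with index >= start_idx (the validation
    range) and pairs each with the compressed folder for the given
    bitrate. The 5Mbuf suffix is an assumed default.
    """
    selected = []
    for name in sorted(record_names):
        idx = int(name.split("_")[-1])   # "tfrecord_0050" -> 50
        if idx >= start_idx:
            selected.append((name, f"val_cbr_{bitrate}_5Mbuf"))
    return selected
```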