Yeeesir/DVS_RDNet


Restoring Real-World Degraded Events Improves Deblurring Quality (ACMMM2024)


Abstract

Owing to its high speed and low latency, the dynamic vision sensor (DVS) is frequently employed in motion deblurring. Ideally, high-quality events would capture intricate motion information; in practice, however, real-world events are generally degraded, which introduces significant artifacts into the deblurred results. To address this challenge, we model the degradation of events and propose RDNet to improve image deblurring quality. Specifically, we first analyze the mechanisms underlying degradation and simulate paired events based on this analysis. These paired events are then fed into the first stage of RDNet to train the restoration model, and the events restored in this stage guide the second-stage deblurring process. To better assess the deblurring performance of different methods on real-world degraded events, we present a new real-world dataset named DavisMCR. It contains events with diverse degradation levels, collected by manipulating environmental brightness and target-object contrast. Our experiments are conducted on a synthetic dataset (GOPRO), a real-world dataset (REBlur), and the proposed dataset (DavisMCR). The results demonstrate that RDNet outperforms classical event-denoising methods in event restoration and achieves better deblurring performance than state-of-the-art methods.

Method

The event degradation process and the pipeline of RDNet. The red region (1) illustrates the event degradation process for constructing paired data of undegraded events Eu and degraded events Ed: (a) shows how threshold bias introduces differences in events, (b) shows how limited bandwidth leads to event loss, and (c) visualizes simulated circuit noise. The yellow region (2) is the first-stage event restoration: the degraded events Ed and the blurry image Ib are fed into dual-branch encoders, and a single-branch event decoder generates the restored events Er; the ground truth is the undegraded events Eu, and the loss is Ler. The green region (3) is the second-stage event-based deblurring: the restored events Er and the blurry image Ib are fed into dual-branch encoders, and a single-branch image decoder generates the deblurred image Id; the ground truth is the sharp image Is, and the loss is Ld.
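The three degradation factors in region (1) can be mimicked on a synthetic event stream. The sketch below is an illustrative approximation only, not the paper's actual simulator: the drop probability, bandwidth budget, and noise rate are invented parameters chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_events(events, h=260, w=346,
                   drop_prob=0.1,     # (a) threshold bias: fraction of missed events (assumed)
                   max_rate=5000,     # (b) bandwidth limit in events/s (assumed)
                   noise_rate=2000):  # (c) circuit noise in spurious events/s (assumed)
    """Toy degradation of an (N, 4) event array of (t, x, y, p), t in seconds.

    Illustrative only -- the thresholds and rates are made-up parameters,
    not those of RDNet's data pipeline.
    """
    t, x, y, p = (events[:, 0], events[:, 1].astype(int),
                  events[:, 2].astype(int), events[:, 3])

    # (a) Threshold bias: per-pixel contrast thresholds deviate from the
    # nominal value, so a random fraction of true events never fires.
    keep = rng.random(t.size) > drop_prob
    t, x, y, p = t[keep], x[keep], y[keep], p[keep]

    # (b) Limited bandwidth: within each 1 ms bin, the readout can only
    # transmit a fixed budget of events; the rest are lost.
    bins = (t / 1e-3).astype(int)
    budget = int(max_rate * 1e-3)
    keep = np.zeros(t.size, dtype=bool)
    for b in np.unique(bins):
        idx = np.flatnonzero(bins == b)
        keep[idx[:budget]] = True
    t, x, y, p = t[keep], x[keep], y[keep], p[keep]

    # (c) Circuit noise: inject uniformly distributed spurious events.
    duration = max(float(t.max()) if t.size else 0.0, 1e-3)
    n_noise = int(noise_rate * duration)
    tn = rng.uniform(0, duration, n_noise)
    xn = rng.integers(0, w, n_noise)
    yn = rng.integers(0, h, n_noise)
    pn = rng.choice([-1, 1], n_noise)

    out = np.concatenate([np.stack([t, x, y, p], axis=1),
                          np.stack([tn, xn, yn, pn], axis=1)])
    return out[np.argsort(out[:, 0])]  # keep the stream time-ordered

# Clean synthetic stream: 1000 events over 10 ms on a 346x260 sensor.
n = 1000
clean = np.stack([np.sort(rng.uniform(0, 0.01, n)),
                  rng.integers(0, 346, n),
                  rng.integers(0, 260, n),
                  rng.choice([-1, 1], n)], axis=1)
degraded = degrade_events(clean)
```

With these settings the degraded stream is much sparser than the clean one (bandwidth capping dominates) while also containing injected noise events, mirroring the qualitative effect shown in regions (a)-(c).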

DavisMCR


The innovation of the DavisMCR dataset. (a) is the control group: a normal-contrast text motion scene captured under 800 lux illumination, where the events exhibit clear textures with minimal noise. (b) is a low-contrast text motion scene, where the events are relatively weak and the edges less defined. (c) is a text motion scene captured in a high-lux environment, displaying events with clear edges and minimal noise. (d) is a text motion scene with a dark background, showing events with severe background noise. (e) is a natural scene whose events contain diverse forms and various intensity levels.


The columns display images and events captured under different ambient brightness conditions; distinct brightness levels are typically associated with different signal-to-noise ratios. APS1 and APS2 show bright and dark backgrounds, respectively, and DVS2, captured against the dark background, exhibits more noise than DVS1. Objects in different rows of each image have different contrasts; events in high-contrast areas are dense and clear.

Download

Preprocessed subset dataset link: Baidu(h0fr) | GoogleDrive

  • [root]
    • [lux300_40ms] (Lux=300, exposure time=40ms)
      • [scene1]
        • [img1.png]
        • [events1.bin] (Events corresponding to img1.png)
        • [event_vis1.png] (Visualization of events1.bin)

Complete dataset raw file link: Baidu(h79b) | GoogleDrive

  • [root]
    • [lux300_40ms] (Lux=300, exposure time=40ms)
      • dvSave-2023_11_08_15_37_18.aedat4

Note: To parse the raw (.aedat4) files, download the data to the ./data directory and then use the provided tools (1_parse_aedat.py, 2_gen_event.py, 3_visualize_event.py) for data parsing and event visualization.
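The repository's scripts are not reproduced here, but the two steps they cover, reading .aedat4 recordings and rendering events as a polarity image, can be sketched as below. This is a hedged approximation: the DAVIS346 sensor size (346x260), the use of iniVation's `dv` package, and the `events_to_frame` helper are all assumptions, not the scripts' actual implementation.

```python
import numpy as np

def events_to_frame(events, h=260, w=346):
    """Accumulate events into an RGB visualization: ON events red, OFF blue.

    `events` is an (N, 4) array of (t, x, y, p) with p in {-1, +1}.
    The default sensor size is the DAVIS346 (346x260) -- an assumption,
    since the README does not state the camera resolution.
    """
    frame = np.full((h, w, 3), 255, dtype=np.uint8)  # white background
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    frame[y[p > 0], x[p > 0]] = (255, 0, 0)   # ON  -> red
    frame[y[p < 0], x[p < 0]] = (0, 0, 255)   # OFF -> blue
    return frame

# Loading a real recording would look roughly like this (requires
# `pip install dv` and the downloaded data; untested sketch):
#
#   from dv import AedatFile
#   with AedatFile("data/lux300_40ms/dvSave-2023_11_08_15_37_18.aedat4") as f:
#       ev = np.hstack([packet for packet in f['events'].numpy()])
#       events = np.stack([ev['timestamp'], ev['x'], ev['y'],
#                          np.where(ev['polarity'], 1, -1)], axis=1)

# Demo on synthetic events instead, so the snippet runs without the dataset:
rng = np.random.default_rng(1)
events = np.stack([np.sort(rng.uniform(0, 0.04, 500)),
                   rng.integers(0, 346, 500),
                   rng.integers(0, 260, 500),
                   rng.choice([-1, 1], 500)], axis=1)
frame = events_to_frame(events)
```

The red/blue polarity rendering matches the common DVS visualization convention; whether 3_visualize_event.py uses the same color scheme is an assumption.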

Citation

You can use the following BibTeX entry to cite our paper:

@inproceedings{shen2024restoring,
  title={Restoring Real-World Degraded Events Improves Deblurring Quality},
  author={Yeqing Shen and Shang Li and Kun Song},
  booktitle={ACM Multimedia 2024},
  year={2024},
  url={https://openreview.net/forum?id=TvsocONzcC}
}
