Learning Parallax for Stereo Event-based Motion Deblurring

Mingyuan Lin¹, Chi Zhang¹, Chu He¹, Lei Yu¹
¹School of Electronic Information, Wuhan University
Corresponding authors.
[Teaser figure: blurry image, events, deblurred image, and predicted disparity]


Abstract

Due to their extremely low latency, events have recently been exploited to supplement the information lost in motion-blurred images. However, existing approaches assume that intensity images and the corresponding events are accurately aligned at the pixel level, which limits their performance on the misaligned event-intensity camera setups found in the real world. To tackle this problem, we propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event-intensity cameras (St-EDNet), to recover high-quality images directly from the misaligned inputs, consisting of a single blurry image and the concurrent event streams. Specifically, a coarse spatial alignment of the blurry image and the event streams is first performed by a cross-modal stereo matching module without the need for ground-truth depths. A dual-feature embedding architecture then gradually builds a fine bidirectional association between the coarsely aligned data and reconstructs the sequence of latent sharp images. Furthermore, we build a new dataset with STereo Event and Intensity Cameras (St-EIC), containing real-world events, intensity images, and dense disparity maps. Experiments on real-world datasets demonstrate the superiority of the proposed network over state-of-the-art methods.



Method

Overview of the proposed St-EDNet
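
Below is a minimal PyTorch-style sketch of the coarse-to-fine data flow described in the abstract: a cross-modal stereo matching module first predicts a disparity map to coarsely align the event stream with the blurry intensity image, and a dual-feature embedding network then fuses the aligned inputs to reconstruct the latent sharp sequence. The module internals, names, and tensor shapes are illustrative placeholders, not the released St-EDNet implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def warp_by_disparity(x: torch.Tensor, disparity: torch.Tensor) -> torch.Tensor:
    """Horizontally warp x toward the intensity view using the predicted disparity."""
    b, _, h, w = x.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=x.device, dtype=x.dtype),
        torch.arange(w, device=x.device, dtype=x.dtype),
        indexing="ij",
    )
    # Shift the horizontal sampling locations by the disparity
    # (the sign depends on the camera arrangement).
    xs = xs.unsqueeze(0) - disparity.squeeze(1)
    ys = ys.unsqueeze(0).expand_as(xs)
    # Build the normalized sampling grid expected by grid_sample.
    grid = torch.stack((2.0 * xs / (w - 1) - 1.0, 2.0 * ys / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(x, grid, align_corners=True)

class CoarseToFineDeblur(nn.Module):
    """Two-stage pipeline: coarse cross-modal alignment, then fine fusion and reconstruction."""

    def __init__(self, stereo_matcher: nn.Module, dual_feature_net: nn.Module):
        super().__init__()
        self.stereo_matcher = stereo_matcher      # cross-modal stereo matching (coarse alignment)
        self.dual_feature_net = dual_feature_net  # dual-feature embedding (fine association + reconstruction)

    def forward(self, blurry_img: torch.Tensor, event_voxel: torch.Tensor):
        # Stage 1: estimate the disparity between the intensity and event views,
        # then warp the event representation toward the intensity view.
        disparity = self.stereo_matcher(blurry_img, event_voxel)        # (B, 1, H, W)
        events_aligned = warp_by_disparity(event_voxel, disparity)

        # Stage 2: fuse the coarsely aligned events with the blurry image and
        # reconstruct a sequence of latent sharp frames.
        sharp_seq = self.dual_feature_net(blurry_img, events_aligned)   # (B, T, 3, H, W)
        return sharp_seq, disparity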

More Qualitative Results in Real-World Scenarios

Results of Motion Deblurring

We compare our St-EDNet with two conventional intensity-based methods, i.e., LEVS and Motion-ETR, one intensity-based stereo deblurring method, i.e., DAVANet, and five event-based methods, i.e., EDI, eSL-Net, LEDVDI, RED-Net, and E-CIR.

[Qualitative deblurring comparison: Inputs, DAVANet, LEVS, Motion-ETR, EDI, eSL-Net, LEDVDI, RED-Net, E-CIR, and St-EDNet (Ours)]

Results of Stereo Matching

With the stereo event-intensity camera setup, we evaluate the performance of St-EDNet on the disparity estimation task, combining motion deblurring methods, i.e., Motion-ETR and the proposed St-EDNet, with two existing cross-modal stereo matching algorithms, i.e., HSM and SSIE.
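
As a rough illustration of this cascaded evaluation, the sketch below deblurs the intensity image, feeds the restored frame together with the event representation into a cross-modal stereo matcher, and scores the predicted disparity against ground truth. The deblur_net and stereo_net callables, the valid_mask argument, and the 1-pixel error threshold are illustrative assumptions rather than the exact protocol of the paper.

import torch

@torch.no_grad()
def evaluate_cascade(deblur_net, stereo_net, blurry_img, events, gt_disparity, valid_mask):
    """Score a deblurring + cross-modal stereo matching cascade on one sample."""
    sharp_img = deblur_net(blurry_img, events)       # e.g., Motion-ETR or St-EDNet;
                                                     # event-free baselines would ignore the events
    pred_disparity = stereo_net(sharp_img, events)   # e.g., HSM or SSIE

    err = (pred_disparity - gt_disparity).abs()[valid_mask]
    mae = err.mean().item()                          # mean absolute disparity error (pixels)
    bad1 = (err > 1.0).float().mean().item()         # fraction of valid pixels with error > 1 px
    return mae, bad1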

[Qualitative disparity comparison: Blurry image, Events, E2VID+AANet, E2VID+ACVNet, Motion-ETR + HSM, Motion-ETR + SSIE, Ours, and GT]


BibTeX


        @article{lin2023learning,
          title={Learning Parallax for Stereo Event-based Motion Deblurring},
          author={Lin, Mingyuan and Zhang, Chi and He, Chu and Yu, Lei},
          journal={arXiv preprint arXiv:2309.09513},
          year={2023}
        }