Temporal-Spatial Deep Neural Field for Rolling Shutter Correction with Provable Convergence

Camila Torres
Muhammad Arif Putra
Daniel Tadesse

Abstract

The rolling shutter effect introduces geometric distortions in images captured by CMOS sensors, which expose rows sequentially rather than all at once. We propose a temporal-spatial deep neural field that models pixelwise temporal offsets for effective rolling shutter correction. Our network integrates motion priors and learns the correction end-to-end, with guaranteed stability and explicit error bounds. We validate the method on a synthetic toy dataset and provide a convergence theorem supporting it.
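The core idea of a pixelwise temporal-offset field can be illustrated with a minimal sketch. This is not the authors' implementation: the two-layer coordinate MLP, the random (untrained) weights, and the constant image-plane velocity are all illustrative assumptions. The field maps a normalized pixel location to a scalar capture-time offset, and each pixel is then shifted back along the motion direction by that offset to undo the row-wise exposure delay.

```python
import numpy as np

# Hedged sketch of a temporal-offset neural field for rolling shutter
# correction. Assumptions (not from the paper): a 2-layer tanh MLP with
# random weights standing in for the trained field, and a constant
# image-plane velocity `v` standing in for learned motion priors.

rng = np.random.default_rng(0)

# Tiny coordinate MLP: (row, col) in [0, 1]^2 -> scalar temporal offset.
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def temporal_offset(coords):
    """coords: (N, 2) normalized pixel coordinates; returns (N, 1) offsets."""
    h = np.tanh(coords @ W1 + b1)
    return h @ W2 + b2

# Build a toy pixel grid and query the field at every pixel.
H, W = 8, 8
rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
coords = np.stack([rows, cols], axis=-1).reshape(-1, 2).astype(float)
coords_norm = coords / np.array([H - 1, W - 1])

t = temporal_offset(coords_norm)        # per-pixel capture-time offset
v = np.array([0.0, 3.0])                # assumed constant velocity (px/frame)

# Correction: shift each pixel back along the motion by its own offset,
# so pixels exposed later are displaced more than pixels exposed earlier.
corrected = coords - t * v              # (N, 2) corrected pixel positions
```

In the paper's setting, the MLP weights would be trained end-to-end so that the predicted offsets reproduce the sensor's row-sequential exposure timing, whereas this sketch only demonstrates the field-query-then-warp structure of the correction.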




Article Details

How to Cite
Torres, C., Putra, M. A., & Tadesse, D. (2025). Temporal-Spatial Deep Neural Field for Rolling Shutter Correction with Provable Convergence. Special Interest Group on Artificial Intelligence Research, 2(1). Retrieved from https://sigair.org/index.php/journal/article/view/26
Section
Articles
