Unified 3D Gaussian Splatting for Motion and Defocus Blur Reconstruction

CAD/Graphics 2025 & Visual Informatics paper.

Li Liu, Jing Duan, Xiaodong Fu, Wei Peng, Lijun Liu

Kunming University of Science and Technology

Related content and links are being updated.

Teaser figure

Given a set of multi-view blurry images containing both motion and defocus blur, our method can reconstruct a high-quality sharp 3D scene representation.

Abstract

This paper proposes a unified 3D Gaussian splatting framework consisting of three key components for motion and defocus blur reconstruction. First, a dual-blur perception module is designed to generate pixel-wise masks and predict the types of motion and defocus blur, guiding structural feature extraction. Second, blur-aware Gaussian splatting integrates blur-aware features into the splatting process for accurate modeling of the global and local scene structure. Third, an Unoptimized Gaussian Ratio (UGR)-opacity joint optimization strategy is proposed to refine under-optimized regions, improving reconstruction accuracy under complex blur conditions. Experiments on a newly constructed motion and defocus blur dataset demonstrate the effectiveness of the proposed method for novel view synthesis. Compared with state-of-the-art methods, our framework achieves improvements of 0.28 dB, 2.46%, and 39.88% in PSNR, SSIM, and LPIPS, respectively. For deblurring tasks, it achieves improvements of 0.36 dB, 3.24%, and 28.96% on the same metrics. These results highlight the robustness and effectiveness of the proposed approach.

Keywords

3D Gaussian Splatting, blur reconstruction, dual-blur perception, blur-aware feature, joint optimization

Method Overview

Our three-stage framework consists of dual-blur perception, blur-aware Gaussian splatting, and UGR-opacity joint optimization. Given a sequence of dual-blur images, the dual-blur perception module generates a pixel-wise blur mask for each image. Blur-aware Gaussian splatting applies these masks to both the input and rendered images to jointly optimize the 3D Gaussians and camera poses. The UGR-opacity optimization strategy then adaptively increases updates in under-optimized masked regions. The final output is a sharp 3D scene reconstruction.
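To make the mask-guided optimization concrete, the snippet below gives a minimal PyTorch-style sketch of two of the ideas above: a photometric loss weighted by the pixel-wise blur mask, and a UGR-driven opacity refresh for under-optimized Gaussians. All function names, thresholds, and tensor shapes here are illustrative assumptions for this page, not the released implementation.

```python
# Minimal sketch (PyTorch) of a mask-weighted photometric loss plus a
# UGR-style opacity refresh. Names, thresholds, and shapes are
# illustrative assumptions, not the authors' released code.
import torch

def masked_photometric_loss(rendered, target, blur_mask,
                            sharp_weight=1.0, blur_weight=0.3):
    """L1 loss where the pixel-wise blur mask down-weights blurry pixels.

    rendered, target: (3, H, W) images; blur_mask: (H, W) in [0, 1],
    with 1 = blurry pixel and 0 = sharp pixel (convention assumed here).
    """
    weight = sharp_weight * (1.0 - blur_mask) + blur_weight * blur_mask
    return (weight * (rendered - target).abs().mean(dim=0)).mean()

def unoptimized_gaussian_ratio(grad_accum, grad_thresh=1e-4):
    """Fraction of Gaussians whose accumulated positional gradient falls
    below a threshold, i.e. Gaussians that have barely been optimized."""
    under = grad_accum < grad_thresh
    return under.float().mean(), under

def ugr_opacity_boost(opacity, under_mask, boost=0.05, max_opacity=0.99):
    """Nudge the opacity of under-optimized Gaussians so they receive
    larger gradients (and thus more updates) in later iterations."""
    opacity = opacity.clone()
    opacity[under_mask] = (opacity[under_mask] + boost).clamp(max=max_opacity)
    return opacity

# Toy usage with random tensors standing in for a real scene.
H, W, N = 64, 64, 1000
rendered = torch.rand(3, H, W, requires_grad=True)
target = torch.rand(3, H, W)
blur_mask = torch.rand(H, W)        # predicted by the dual-blur perception module
grad_accum = torch.rand(N) * 1e-3   # accumulated per-Gaussian gradient magnitudes
opacity = torch.rand(N)

loss = masked_photometric_loss(rendered, target, blur_mask)
ugr, under_mask = unoptimized_gaussian_ratio(grad_accum)
if ugr > 0.1:                       # refine only when many Gaussians lag behind
    opacity = ugr_opacity_boost(opacity, under_mask)
loss.backward()
```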

Pipeline overview

Rendering Results

Left: video renderings produced by our method; right: input images.

Limitations

  • Performance drops in regions with severe local blur.
  • Linear camera motion modeling limits accuracy for complex trajectories.
  • Cannot handle mixed blur types within a single pixel (e.g., motion and defocus blur entangled at the same location).

References

[Placeholder for formatted references]