DS Journal of Digital Science and Technology (DS-DST)

Research Article | Open Access

Volume 4 | Issue 2 | Year 2025 | Article Id: DST-V4I2P101 DOI: https://doi.org/10.59232/DST-V4I2P101

Optimization of 3D Gaussian Splatting for Accurate Image Reconstruction

Thanh Dang, Tan Thanh Nguyen, Thanh Cao, Vuong Pham

Received: 07 Feb 2025 | Revised: 06 Mar 2025 | Accepted: 05 Apr 2025 | Published: 30 Apr 2025

Citation

Thanh Dang, Tan Thanh Nguyen, Thanh Cao, Vuong Pham. “Optimization of 3D Gaussian Splatting for Accurate Image Reconstruction.” DS Journal of Digital Science and Technology, vol. 4, no. 2, pp. 1-22, 2025.

Abstract

This study optimizes 3D Gaussian Splatting (3DGS) to improve reconstruction accuracy while retaining real-time rendering efficiency. Current 3DGS implementations suffer from redundant Gaussian representations, ineffective densification strategies, and suboptimal Spherical Harmonics (SH) modelling, which restrict both visual quality and computational efficiency. To address these difficulties, this work presents an adaptive Gaussian pruning method that dynamically removes unnecessary points while preserving image integrity. An enhanced densification technique, which injects controlled noise into the point-replication process, further reduces over-shrinking effects and ensures consistent spatial coverage. A multi-scale rendering strategy that progressively increases the resolution of the reconstructed scene is introduced to accelerate training convergence. Moreover, an improved SH representation increases lighting consistency and colour accuracy. Experimental results show that the authors' method achieves better reconstruction fidelity, with higher Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) than baseline 3DGS models, while maintaining a rendering speed of at least 25 frames per second at 1080p resolution. These advances make 3DGS more suitable for real-time applications, including virtual reality, digital twins, and high-fidelity scene reconstruction.
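The abstract describes two of the core ideas concretely enough to sketch: adaptive opacity-based pruning and noise-perturbed point replication during densification. The following minimal NumPy sketch illustrates those two operations only; the function names, the percentile-based threshold, and the noise scaling are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def adaptive_prune_mask(opacities, percentile=10.0):
    """Boolean mask keeping Gaussians whose opacity exceeds an adaptive,
    percentile-based threshold (a stand-in for the paper's dynamic
    pruning criterion; the real rule is not specified in the abstract)."""
    threshold = np.percentile(opacities, percentile)
    return opacities > threshold

def noisy_densify(positions, scales, noise_scale=0.1, rng=None):
    """Clone each point with controlled Gaussian noise proportional to its
    spatial scale, so clones spread out instead of collapsing onto the
    parent (the over-shrinking effect the abstract mentions)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(size=positions.shape) * (noise_scale * scales[:, None])
    return np.concatenate([positions, positions + noise], axis=0)

# Toy usage: three Gaussians, one nearly transparent.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 0.0, 1.0]])
opa = np.array([0.05, 0.80, 0.60])
scl = np.array([0.20, 0.10, 0.30])

keep = adaptive_prune_mask(opa, percentile=30.0)  # drops the 0.05 point
dense = noisy_densify(pos[keep], scl[keep])       # 2 survivors -> 4 points
```

In a full pipeline these steps would alternate with gradient-based optimization of the Gaussian parameters; here they are isolated only to make the pruning/densification logic visible.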

Keywords

3D Gaussian Splatting, Adaptive pruning, Gaussian pruning, Image reconstruction, Multi-scale training, Real-time rendering, Scene representation, Spherical harmonics optimization.

References

[1] Jonathan T. Barron et al., “Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5470-5479, 2022.

[Google Scholar] [Publisher Link]

[2] Anpei Chen et al., “TensoRF: Tensorial Radiance Fields,” Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, pp. 333-350, 2022.

[CrossRef] [Google Scholar] [Publisher Link]

[3] Bernhard Kerbl et al., “3D Gaussian Splatting for Real-Time Radiance Field Rendering,” ACM Transactions on Graphics, vol. 42, no. 4, pp. 1-14, 2023.

[CrossRef] [Google Scholar] [Publisher Link]

[4] Ben Mildenhall et al., “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis,” Communications of the ACM, vol. 65, no. 1, pp. 99-106, 2021.

[CrossRef] [Google Scholar] [Publisher Link]

[5] Thomas Müller et al., “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding,” ACM Transactions on Graphics, vol. 41, no. 4, pp. 1-15, 2022.

[CrossRef] [Google Scholar] [Publisher Link]

[6] Sara Fridovich-Keil et al., “Plenoxels: Radiance Fields without Neural Networks,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5501-5510, 2022.

[Google Scholar] [Publisher Link]

[7] Wong Jing Yuan, Tanks&Temples - M60 (COLMAP Preprocessed) datasets, Kaggle, 2024. [Online]. Available: https://www.kaggle.com/datasets/jinnywjy/tanks-and-temple-m60-colmap-preprocessed

[8] Minh-Anh Truong, Deep Blending datasets, Kaggle, 2021. [Online]. Available: https://www.kaggle.com/datasets/minhanhtruong/deep-blending-dataset

[9] Thành Đặng, Mip-NeRF 360 datasets, Kaggle, 2025. [Online]. Available: https://www.kaggle.com/datasets/thnhdg/testing

[10] Yoshua Bengio et al., “Curriculum Learning,” Proceedings of the 26th Annual International Conference on Machine Learning, New York, NY, United States, pp. 41-48, 2009.

[CrossRef] [Google Scholar] [Publisher Link]

[11] David B. Lindell et al., “AutoInt: Automatic Integration for Fast Neural Volume Rendering,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14556-14565, 2021.

[Google Scholar] [Publisher Link]

[12] Lee Alan Westover, “Splatting: A Parallel, Feed-Forward Volume Rendering Algorithm,” Ph.D. Dissertation, The University of North Carolina at Chapel Hill, ProQuest Dissertations and Theses, 1991.

[Google Scholar] [Publisher Link]

[13] Justin Johnson, Alexandre Alahi, and Li Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” Computer Vision – ECCV, pp. 694-711, 2016.

[CrossRef] [Google Scholar] [Publisher Link]

[14] Richard Zhang et al., “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-595, 2018.

[Google Scholar] [Publisher Link]

[15] Yiheng Xie et al., “Neural Fields in Visual Computing and Beyond,” Computer Graphics Forum, vol. 41, no. 2, pp. 641-676, 2022.

[CrossRef] [Google Scholar] [Publisher Link]

[16] Yangjiheng, 3DGS and Beyond Docs, 2025. [Online]. Available: https://github.com/yangjiheng/3DGS_and_Beyond_Docs

[17] Ricardo Martin-Brualla et al., “NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections,” Conference on Computer Vision and Pattern Recognition, pp. 7210-7219, 2021.

[Google Scholar] [Publisher Link]

[18] Johannes L. Schonberger, and Jan-Michael Frahm, “Structure-from-Motion Revisited,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 4104-4113, 2016.

[Google Scholar] [Publisher Link]

[19] FFmpeg. [Online]. Available: https://ffmpeg.org

[20] NVIDIA CUDA Toolkit, 2025. [Online]. Available: https://developer.nvidia.com/cuda-toolkit

[21] Graphdeco-Inria, 3D Gaussian Splatting Viewer, 2025. [Online]. Available: https://github.com/graphdeco-inria/gaussian-splatting

[22] PyTorch3D, 2024. [Online]. Available: https://pytorch3d.org

[23] Ricardo Martin-Brualla, David Gallup, and Steven M. Seitz, “3D Time-Lapse Reconstruction from Internet Photos,” Proceedings of the IEEE International Conference on Computer Vision, pp. 1398-1406, 2015.

[Google Scholar] [Publisher Link]

[24] Jim Canary, NeRF: Revolutionizing 3D Scene Reconstruction with Neural Radiance Fields, Medium, 2025. [Online]. Available: https://medium.com/@jimcanary/nerf-revolutionizing-3d-scene-reconstruction-with-neural-radiance-fields-1c28df282857

[25] Michael Broxton et al., “Immersive Light Field Video with A Layered Mesh Representation,” ACM Transactions on Graphics, vol. 39, no. 4, pp. 1-15, 2020.

[CrossRef] [Google Scholar] [Publisher Link]
