<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
    <front>
        <journal-meta>
            <journal-id>saucis</journal-id>
            <journal-title-group>
                <journal-title>Sakarya University Journal of Computer and Information Sciences</journal-title>
            </journal-title-group>
            <issn pub-type="epub">2636-8129</issn>
            <publisher>
                <publisher-name>Sakarya University</publisher-name>
            </publisher>
        </journal-meta>
        <article-meta>
            <article-id pub-id-type="doi">10.35377/saucis...1637290</article-id>
            <article-categories>
                <subj-group xml:lang="en">
                    <subject>Software Engineering (Other)</subject>
                </subj-group>
                <subj-group xml:lang="tr">
                    <subject>Yazılım Mühendisliği (Diğer)</subject>
                </subj-group>
            </article-categories>
            <title-group>
                <article-title>Feature Enhancement of TUM-RGBD Depth Images and Performance Evaluation of Gaussian Splatting-Based SplaTAM Method</article-title>
            </title-group>
            
            <contrib-group content-type="authors">
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0009-0000-8316-450X</contrib-id>
                    <name>
                        <surname>Zeyveli</surname>
                        <given-names>Cemil</given-names>
                    </name>
                    <aff>Karabuk University</aff>
                </contrib>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-4155-5956</contrib-id>
                    <name>
                        <surname>Kamanlı</surname>
                        <given-names>Ali Furkan</given-names>
                    </name>
                    <aff>Sakarya University of Applied Sciences</aff>
                </contrib>
            </contrib-group>
                        
            <pub-date pub-type="pub" iso-8601-date="2025-06-30">
                <day>30</day>
                <month>06</month>
                <year>2025</year>
            </pub-date>
                                        <volume>8</volume>
                                        <issue>2</issue>
                                        <fpage>260</fpage>
                                        <lpage>272</lpage>
                        
                        <history>
                <date date-type="received" iso-8601-date="2025-02-11">
                    <day>11</day>
                    <month>02</month>
                    <year>2025</year>
                </date>
                <date date-type="accepted" iso-8601-date="2025-06-05">
                    <day>05</day>
                    <month>06</month>
                    <year>2025</year>
                </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2018, Sakarya University Journal of Computer and Information Sciences</copyright-statement>
                    <copyright-year>2018</copyright-year>
                    <copyright-holder>Sakarya University Journal of Computer and Information Sciences</copyright-holder>
                </permissions>
            
            <abstract><p>Simultaneous Localization and Mapping (SLAM) methods enable autonomous systems to determine their location in unknown environments and to map those environments without external intervention. These methods are widely used in robotics and AR/VR applications. Gaussian Splatting SLAM is a visual SLAM approach that performs mapping and localization from RGB and depth images and represents the scene with Gaussian structures. Popular datasets such as TUM-RGBD, Replica, and ScanNet++ are used to evaluate and test visual SLAM methods. However, the depth images in the TUM-RGBD dataset are of lower quality than those in the other datasets; this degrades the accuracy of the depth data and reduces the quality of the mapping results. In this study, to improve the quality of the depth images, their features were corrected using the median filter, a depth-smoothing method, yielding a cleaner depth dataset. The new dataset was processed with the Gaussian Splatting-based SplaTAM method and produced better metric results (PSNR, SSIM, and LPIPS) than the original dataset. As a result, the feature-corrected dataset achieved an improvement of 8.08% in the first scene and 4.69% in the second scene in the metric values compared to the original dataset.</p></abstract>
                                                            
            
            <kwd-group>
                <kwd>SLAM</kwd>
                <kwd>Gaussian Splatting</kwd>
                <kwd>Median Filter</kwd>
            </kwd-group>
                            
                                                                                                                                                    </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">H. Durrant-Whyte, D. Rye, and E. Nebot, “Localization of Autonomous Guided Vehicles,” Robotics Research, pp. 613–625, 1996, doi: 10.1007/978-1-4471-1021-7_69.</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: Part I,” IEEE Robotics and Automation Magazine, vol. 13, no. 2, pp. 99–108, Jun. 2006, doi: 10.1109/MRA.2006.1638022.</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="journal">R. C. Smith and P. Cheeseman, “On the Representation and Estimation of Spatial Uncertainty,” The international journal of Robotics Research, vol. 5, no. 4, pp. 56–68, Dec. 1986, doi: 10.1177/027836498600500404.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">H. Taheri and Z. C. Xia, “SLAM; definition and evolution,” Engineering Applications of Artificial Intelligence, vol. 97, p. 104032, Jan. 2021, doi: 10.1016/J.ENGAPPAI.2020.104032.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="journal">T. J. Chong, X. J. Tang, C. H. Leng, M. Yogeswaran, O. E. Ng, and Y. Z. Chong, “Sensor Technologies and Simultaneous Localization and Mapping (SLAM),” Procedia Computer Science, vol. 76, pp. 174–179, Jan. 2015, doi: 10.1016/J.PROCS.2015.12.336.</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">W. Chen et al., “An Overview on Visual SLAM: From Tradition to Semantic,” Remote Sensing, vol. 14, no. 13, p. 3010, Jun. 2022, doi: 10.3390/RS14133010.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="journal">A. R. Sahili et al., “A Survey of Visual SLAM Methods,” IEEE Access, vol. 11, pp. 139643–139677, 2023, doi: 10.1109/ACCESS.2023.3341489.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="journal">A. Macario Barros, M. Michel, Y. Moline, G. Corre, and F. Carrel, “A Comprehensive Survey of Visual SLAM Algorithms,” Robotics, vol. 11, no. 1, p. 24, Feb. 2022, doi: 10.3390/ROBOTICS11010024.</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="journal">E. Sandström, Y. Li, L. van Gool, and M. R. Oswald, “Point-SLAM: Dense Neural Point Cloud-based SLAM,” Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 18433–18444, 2023.</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">N. Keetha et al., “SplaTAM: Splat, Track &amp; Map 3D Gaussians for Dense RGB-D SLAM,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21357–21366, 2024.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="journal">C. Yan et al., “GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19595–19604, 2024.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">B. Kerbl, G. Kopanas, T. Leimkuehler, and G. Drettakis, “3D Gaussian Splatting for Real-Time Radiance Field Rendering,” ACM Transactions on Graphics, vol. 42, no. 4, p. 14, Aug. 2023, doi: 10.1145/3592433.</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, “DTAM: Dense tracking and mapping in real-time,” Proceedings of the IEEE International Conference on Computer Vision, pp. 2320–2327, 2011, doi: 10.1109/ICCV.2011.6126513.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">R. A. Newcombe et al., “KinectFusion: Real-time dense surface mapping and tracking,” 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011, pp. 127–136, 2011, doi: 10.1109/ISMAR.2011.6092378.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="journal">T. Whelan, R. F. Salas-Moreno, B. Glocker, A. J. Davison, and S. Leutenegger, “ElasticFusion: Real-time dense SLAM and light source estimation,” The International Journal of Robotics Research, vol. 35, no. 14, pp. 1697–1716, Sep. 2016, doi: 10.1177/0278364916669237.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">T. Schops, T. Sattler, and M. Pollefeys, “BAD SLAM: Bundle Adjusted Direct RGB-D SLAM,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 134–144, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="journal">Z. Teed and J. Deng, “DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras,” Advances in Neural Information Processing Systems, vol. 34, pp. 16558–16569, Aug. 2021.</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="journal">E. Sucar, S. Liu, J. Ortiz, and A. J. Davison, “iMAP: Implicit Mapping and Positioning in Real-Time,” Proceedings of the IEEE International Conference on Computer Vision, pp. 6209–6218, 2021, doi: 10.1109/ICCV48922.2021.00617.</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="journal">Z. Zhu et al., “NICE-SLAM: Neural Implicit Scalable Encoding for SLAM,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2022-June, pp. 12776–12786, 2022, doi: 10.1109/CVPR52688.2022.01245.</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="journal">X. Yang, H. Li, H. Zhai, Y. Ming, Y. Liu, and G. Zhang, “Vox-Fusion: Dense Tracking and Mapping with Voxel-based Neural Implicit Representation,” Proceedings - 2022 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2022, pp. 499–507, Oct. 2022, doi: 10.1109/ISMAR55827.2022.00066.</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="journal">H. Wang, J. Wang, and L. Agapito, “Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2023-June, pp. 13293–13302, Apr. 2023, doi: 10.1109/CVPR52729.2023.01277.</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="journal">H. Matsuki, R. Murai, P. H. J. Kelly, and A. J. Davison, “Gaussian Splatting SLAM,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18039–18048, 2024.</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="journal">C. Yan et al., “GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19595–19604, 2024.</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">J. Straub et al., “The Replica Dataset: A Digital Replica of Indoor Spaces,” Jun. 2019, Accessed: Jan. 24, 2025. [Online]. Available: https://arxiv.org/abs/1906.05797v1</mixed-citation>
                    </ref>
                                    <ref id="ref25">
                        <label>25</label>
                        <mixed-citation publication-type="journal">A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Niessner, “ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes,” Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5828–5839, 2017.</mixed-citation>
                    </ref>
                                    <ref id="ref26">
                        <label>26</label>
                        <mixed-citation publication-type="journal">C. Yeshwanth, Y.-C. Liu, M. Nießner, and A. Dai, “ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes,” Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12–22, 2023.</mixed-citation>
                    </ref>
                                    <ref id="ref27">
                        <label>27</label>
                        <mixed-citation publication-type="journal">J. Sturm, W. Burgard, and D. Cremers, “Evaluating Egomotion and Structure-from-Motion Approaches Using the TUM RGB-D Benchmark,” Proc. of the Workshop on Color-Depth Camera Fusion in Robotics at the IEEE/RJS International Conference on Intelligent Robot Systems (IROS), vol. 13, 2012.</mixed-citation>
                    </ref>
                                    <ref id="ref28">
                        <label>28</label>
                        <mixed-citation publication-type="journal">Q. Yang, R. Yang, J. Davis, and D. Nistér, “Spatial-depth super resolution for range images,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007, doi: 10.1109/CVPR.2007.383211.</mixed-citation>
                    </ref>
                                    <ref id="ref29">
                        <label>29</label>
                        <mixed-citation publication-type="journal">G. Deng and L. W. Cahill, “Adaptive Gaussian filter for noise reduction and edge detection,” IEEE Nuclear Science Symposium &amp; Medical Imaging Conference, no. pt 3, pp. 1615–1619, 1994, doi: 10.1109/NSSMIC.1993.373563.</mixed-citation>
                    </ref>
                                    <ref id="ref30">
                        <label>30</label>
                        <mixed-citation publication-type="journal">I. Pitas and A. N. Venetsanopoulos, “Nonlinear Mean Filters in Image Processing,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 34, no. 3, pp. 573–584, Jun. 1986, doi: 10.1109/TASSP.1986.1164857.</mixed-citation>
                    </ref>
                                    <ref id="ref31">
                        <label>31</label>
                        <mixed-citation publication-type="journal">M. Kazubek, “Wavelet domain image denoising by thresholding and Wiener filtering,” IEEE Signal Processing Letters, vol. 10, no. 11, pp. 324–326, Nov. 2003, doi: 10.1109/LSP.2003.818225.</mixed-citation>
                    </ref>
                                    <ref id="ref32">
                        <label>32</label>
                        <mixed-citation publication-type="journal">P. Jain and V. Tyagi, “A survey of edge-preserving image denoising methods,” Information Systems Frontiers, vol. 18, no. 1, pp. 159–170, Feb. 2016, doi: 10.1007/S10796-014-9527-0/TABLES/1.</mixed-citation>
                    </ref>
                                    <ref id="ref33">
                        <label>33</label>
                        <mixed-citation publication-type="journal">T. S. Huang, G. J. Yang, and G. Y. Tang, “A Fast Two-Dimensional Median Filtering Algorithm,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 27, no. 1, pp. 13–18, 1979, doi: 10.1109/TASSP.1979.1163188.</mixed-citation>
                    </ref>
                                    <ref id="ref34">
                        <label>34</label>
                        <mixed-citation publication-type="journal">A. Ravishankar, S. Anusha, H. K. Akshatha, A. Raj, S. Jahnavi, and J. Madhura, “A survey on noise reduction techniques in medical images,” Proceedings of the International Conference on Electronics, Communication and Aerospace Technology, ICECA 2017, vol. 2017-January, pp. 385–389, 2017, doi: 10.1109/ICECA.2017.8203711.</mixed-citation>
                    </ref>
                                    <ref id="ref35">
                        <label>35</label>
                        <mixed-citation publication-type="journal">F. Artuğer and F. Özkaynak, “Görüntü Sıkıştırma Algoritmalarının Performans Analizi İçin Değerlendirme Rehberi,” International Journal of Pure and Applied Sciences, vol. 8, no. 1, pp. 102–110, Jun. 2022, doi: 10.29132/IJPAS.1012013.</mixed-citation>
                    </ref>
                                    <ref id="ref36">
                        <label>36</label>
                        <mixed-citation publication-type="journal">S. Ghazanfari, S. Garg, P. Krishnamurthy, F. Khorrami, and A. Araujo, “R-LPIPS: An Adversarially Robust Perceptual Similarity Metric,” Jul. 2023.</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
