<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
            <front>

                <journal-meta>
                    <journal-id>saucis</journal-id>
                    <journal-title-group>
                        <journal-title>Sakarya University Journal of Computer and Information Sciences</journal-title>
                    </journal-title-group>
                    <issn pub-type="epub">2636-8129</issn>
                    <publisher>
                        <publisher-name>Sakarya University</publisher-name>
                    </publisher>
                </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.35377/saucis...1073355</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Computer Software</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>Bilgisayar Yazılımı</subject>
                                                    </subj-group>
                                    </article-categories>
                                                                                                                                                        <title-group>
                                                                                                                                                            <article-title>An Implementation of Traffic Signs and Road Objects Detection Using Faster R-CNN</article-title>
                                                                                                    </title-group>
            
                                                    <contrib-group content-type="authors">
                        <contrib contrib-type="author">
                            <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-0098-9018</contrib-id>
                            <name>
                                <surname>Güney</surname>
                                <given-names>Emin</given-names>
                            </name>
                            <aff>Sakarya University of Applied Sciences</aff>
                        </contrib>
                        <contrib contrib-type="author">
                            <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-1058-7100</contrib-id>
                            <name>
                                <surname>Bayılmış</surname>
                                <given-names>Cüneyt</given-names>
                            </name>
                            <aff>Sakarya University, Faculty of Computer and Information Sciences, Department of Computer Engineering</aff>
                        </contrib>
                                                                                </contrib-group>
                        
                <pub-date pub-type="pub" iso-8601-date="20220831">
                    <day>31</day>
                    <month>08</month>
                    <year>2022</year>
                </pub-date>
                                        <volume>5</volume>
                                        <issue>2</issue>
                                        <fpage>216</fpage>
                                        <lpage>224</lpage>
                        
                        <history>
                    <date date-type="received" iso-8601-date="20220214">
                        <day>14</day>
                        <month>02</month>
                        <year>2022</year>
                    </date>
                    <date date-type="accepted" iso-8601-date="20220722">
                        <day>22</day>
                        <month>07</month>
                        <year>2022</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2018, Sakarya University Journal of Computer and Information Sciences</copyright-statement>
                    <copyright-year>2018</copyright-year>
                    <copyright-holder>Sakarya University Journal of Computer and Information Sciences</copyright-holder>
                </permissions>
            
                <abstract><p>The detection of traffic signs and road objects is a significant issue for driver safety. It has become popular with the development of autonomous vehicles and driver-assistance systems. This study presents a real-time system that uses a camera to detect traffic signs and various objects in the driving environment. Faster R-CNN, a well-known two-stage object detection architecture, was used as the detection method. A dataset was created by collecting various images for training and testing the model. It consists of 1880 images of traffic signs and road objects collected in Turkey, combined with the GTSRB dataset. The combined images were divided into training and testing sets at a ratio of 80/20. The model was trained on a computer for 8.5 hours and approximately 10,000 iterations. Experimental results demonstrate the real-time performance of Faster R-CNN for robust detection of traffic signs and objects.</p></abstract>
                                                            
            
                <kwd-group>
                    <kwd>deep learning</kwd>
                    <kwd>traffic sign detection and recognition (TSDR)</kwd>
                    <kwd>object detection</kwd>
                    <kwd>Faster R-CNN</kwd>
                </kwd-group>
                            
                                                                                                                                                <funding-group specific-use="FundRef">
                    <award-group>
                                                    <funding-source>
                                <named-content content-type="funder_name">Sakarya University Scientific Research Projects Coordination Unit</named-content>
                            </funding-source>
                                                                            <award-id>2021-7-24-20</award-id>
                                            </award-group>
                </funding-group>
                                </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">[1] A. Ruta, Y. Li, and X. Liu, “Real-time traffic sign recognition from video by class-specific discriminative features,” Pattern Recognition, vol. 43, no. 1, pp. 416–430, 2010.</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">[2] H. Li, F. Sun, L. Liu, and L. Wang, “A novel traffic sign detection method via color segmentation and robust shape matching,” Neurocomputing, vol. 169, pp. 77–88, 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="journal">[3] S. Yin, P. Ouyang, L. Liu, Y. Guo, and S. Wei, “Fast traffic sign recognition with a rotation invariant binary pattern based feature,” Sensors, vol. 15, no. 1, pp. 2161–2180, 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">[4] R. Qian, B. Zhang, Z. Wang, and F. Coenen, “Robust Chinese Traffic Sign Detection and Recognition with Deep Convolutional Neural Network,” pp. 791–796, 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="journal">[5] X. Changzhen, W. Cong, M. Weixin, and S. Yanmei, “A Traffic Sign Detection Algorithm Based on Deep Convolutional Neural Network,” pp. 6–9, 2016.</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">[6] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “The German Traffic Sign Recognition Benchmark: A multi-class classification competition,” Proceedings of the International Joint Conference on Neural Networks, pp. 1453–1460, 2011.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="journal">[7] J. Zhang, M. Huang, X. Jin, and X. Li, “A Real-Time Chinese Traffic Sign Detection Algorithm Based on Modified YOLOv2,” pp. 1–13, 2017.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="journal">[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="journal">[9] C. Liu, F. Yin, D. Wang, and Q. Wang, “Chinese Handwriting Recognition Contest 2010,” pp. 3–7, 2010.</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">[10] M. Mathias, R. Timofte, R. Benenson, and L. van Gool, “Traffic sign recognition - How far are we from the solution?,” Proceedings of the International Joint Conference on Neural Networks, 2013.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="journal">[11] T.-Y. Lin et al., “Microsoft COCO: Common objects in context,” in Computer Vision – ECCV 2014, Lecture Notes in Computer Science, vol. 8693, pp. 740–755, 2014.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">[12] “INRIA Annotations for Graz-02 (IG02).” https://lear.inrialpes.fr/people/marszalek/data/ig02/ (accessed Nov. 20, 2021).</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">[13] X. Xu, J. Jin, S. Zhang, L. Zhang, S. Pu, and Z. Chen, “Smart data driven traffic sign detection method based on adaptive color threshold and shape symmetry,” Future Generation Computer Systems, vol. 94, pp. 381–391, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">[14] G. Ozturk, R. Koker, O. Eldogan, and D. Karayel, “Recognition of Vehicles, Pedestrians and Traffic Signs Using Convolutional Neural Networks,” Oct. 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="journal">[15] C. Han, G. Gao, and Y. Zhang, “Real-time small traffic sign detection with revised faster-RCNN,” Multimedia Tools and Applications, vol. 78, no. 10, pp. 13263–13278, May 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">[16] K. Zhou, Y. Zhan, and D. Fu, “Learning region-based attention network for traffic sign recognition,” Sensors (Switzerland), vol. 21, no. 3, pp. 1–21, 2021.</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="journal">[17] F. Shao, X. Wang, F. Meng, J. Zhu, D. Wang, and J. Dai, “Improved faster R-CNN traffic sign detection based on a second region of interest and highly possible regions proposal network,” Sensors (Switzerland), vol. 19, no. 10, May 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="journal">[18] X. Dai et al., “Multi-task faster R-CNN for nighttime pedestrian detection and distance estimation,” Infrared Physics and Technology, vol. 115, Jun. 2021, doi: 10.1016/j.infrared.2021.103694.</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="journal">[19] “Make Sense” image annotation tool. https://www.makesense.ai/</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="journal">[20] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969, 2017.</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="journal">[21] R. Girshick, “Fast R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, vol. 2015 Inter, pp. 1440–1448, 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="journal">[22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 779–788, 2016.</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="journal">[23] W. Liu et al., “SSD: Single shot multibox detector,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016.</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">[24] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” Advances in Neural Information Processing Systems, vol. 28, pp. 91–99, 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref25">
                        <label>25</label>
                        <mixed-citation publication-type="journal">[25] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation.” pp. 580–587, 2014.</mixed-citation>
                    </ref>
                                    <ref id="ref26">
                        <label>26</label>
                        <mixed-citation publication-type="journal">[26] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders, “Selective search for object recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154–171, Sep. 2013.</mixed-citation>
                    </ref>
                                    <ref id="ref27">
                        <label>27</label>
                        <mixed-citation publication-type="journal">[27] “Flagly.” https://www.flagly.org/project/projects/4/sections/42/ (accessed Dec. 15, 2021).</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
