<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
            <front>

                <journal-meta>
            <journal-title-group>
                                                                                    <journal-title>Balkan Journal of Electrical and Computer Engineering</journal-title>
            </journal-title-group>
                            <issn pub-type="ppub">2147-284X</issn>
                                        <issn pub-type="epub">2147-284X</issn>
                                                                                            <publisher>
                    <publisher-name>MUSA YILMAZ</publisher-name>
                </publisher>
                    </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.17694/bajece.1024073</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Artificial Intelligence</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>Yapay Zeka</subject>
                                                    </subj-group>
                                    </article-categories>
                                                                                                                                                        <title-group>
                                                                                                                                                            <article-title>Ear semantic segmentation in natural images with Tversky loss function supported DeepLabv3+ convolutional neural network</article-title>
                                                                                                    </title-group>
            
                                                    <contrib-group content-type="authors">
                                                                        <contrib contrib-type="author">
                                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-8612-122X</contrib-id>
                                                                <name>
                                    <surname>Inan</surname>
                                    <given-names>Tolga</given-names>
                                </name>
                                                                    <aff>ÇANKAYA ÜNİVERSİTESİ</aff>
                                                            </contrib>
                                                    <contrib contrib-type="author">
                                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-1660-0775</contrib-id>
                                                                <name>
                                    <surname>Kacar</surname>
                                    <given-names>Umit</given-names>
                                </name>
                                                                    <aff>ÇANKAYA ÜNİVERSİTESİ</aff>
                                                            </contrib>
                                                                                </contrib-group>
                        
                                        <pub-date pub-type="pub" iso-8601-date="2022-07-30">
                    <day>30</day>
                    <month>07</month>
                    <year>2022</year>
                </pub-date>
                                        <volume>10</volume>
                                        <issue>3</issue>
                                        <fpage>337</fpage>
                                        <lpage>346</lpage>
                        
                        <history>
                                    <date date-type="received" iso-8601-date="2021-11-15">
                        <day>15</day>
                        <month>11</month>
                        <year>2021</year>
                    </date>
                                    <date date-type="accepted" iso-8601-date="2022-07-18">
                        <day>18</day>
                        <month>07</month>
                        <year>2022</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2013, Balkan Journal of Electrical and Computer Engineering</copyright-statement>
                    <copyright-year>2013</copyright-year>
                    <copyright-holder>Balkan Journal of Electrical and Computer Engineering</copyright-holder>
                </permissions>
            
                                                                        <abstract><p>Semantic segmentation is a fundamental problem in computer vision. At the same time, semantic segmentation is gaining importance in biometrics, since many successful biometric recognition systems require a high-performance segmentation algorithm. In this study, we present an effective ear segmentation technique for natural images. A convolutional neural network is trained for pixel-based ear segmentation: a DeepLab v3+ network with ResNet-18 as the backbone and a Tversky loss function layer as the final layer is trained on natural, uncontrolled images. The proposed network is trained using only the 750 images of the Annotated Web Ears (AWE) training set. Tests are performed on the AWE test set, the University of Ljubljana test set, and Collection A of the In-The-Wild dataset. On the AWE test set, intersection over union (IoU) is measured as 86.3%. To the best of our knowledge, this is the highest performance achieved among the algorithms tested on the AWE test set.</p></abstract>
                                                            
            
                                                                        <kwd-group>
                                                    <kwd>Semantic Segmentation</kwd>
                                                    <kwd>Ear Segmentation</kwd>
                                                    <kwd>Convolutional Neural Networks</kwd>
                                                    <kwd>Tversky Loss Function</kwd>
                                                    <kwd>Biometrics</kwd>
                                            </kwd-group>
                            
                                                                                                                                                    </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">A. Abaza, A. Ross, C. Hebert, M. A. F. Harrison, and M. S. Nixon, “A survey on ear biometrics,” ACM Computing Surveys, vol. 45, no. 2, pp. 1–35, Feb. 2013. [Online]. Available: http://dl.acm.org/citation.cfm?doid=2431211.2431221</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">A. Pflug and C. Busch, “Ear biometrics: a survey of detection, feature extraction and recognition methods,” IET Biometrics, vol. 1, no. 2, pp. 114–129, Jun. 2012. [Online]. Available: https://digital-library.theiet.org/content/journals/10.1049/iet-bmt.2011.0003</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="confproc">Z. Emersic, D. Stepec, V. Struc, P. Peer, A. George, A. Ahmad, E. Omar, T. E. Boult, R. Safdaii, Y. Zhou, S. Zafeiriou, D. Yaman, F. I. Eyiokur, and H. K. Ekenel, “The unconstrained ear recognition challenge,” in 2017 IEEE International Joint Conference on Biometrics (IJCB), Oct. 2017, pp. 715–724.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="confproc">Z. Emersic, A. K. S. V, B. S. Harish, W. Gutfeter, J. N. Khiarak, A. Pacut, E. Hansley, M. P. Segundo, S. Sarkar, H. J. Park, G. P. Nam, I.-J. Kim, S. G. Sangodkar, U. Kacar, M. Kirci, L. Yuan, J. Yuan, H. Zhao, F. Lu, J. Mao, X. Zhang, D. Yaman, F. I. Eyiokur, K. B. Özler, H. K. Ekenel, D. P. Chowdhury, S. Bakshi, P. K. Sa, B. Majhi, P. Peer, and V. Štruc, “The Unconstrained Ear Recognition Challenge 2019,” in 2019 International Conference on Biometrics (ICB), 2019, pp. 1–15.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="book">Z. Emersic, J. Krizaj, V. Struc, and P. Peer, “Deep Ear Recognition Pipeline,” in Recent Advances in Computer Vision: Theories and Applications, ser. Studies in Computational Intelligence, M. Hassaballah and K. M. Hosny, Eds. Cham: Springer International Publishing, 2019, pp. 333–362. [Online]. Available: https://doi.org/10.1007/978-3-030-03000-1_14</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="preprint">Z. Zou, Z. Shi, Y. Guo, and J. Ye, “Object Detection in 20 Years: A Survey,” arXiv:1905.05055 [cs], May 2019. [Online]. Available: http://arxiv.org/abs/1905.05055</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="preprint">A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello, “ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation,” arXiv:1606.02147 [cs], Jun. 2016. [Online]. Available: http://arxiv.org/abs/1606.02147</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="preprint">L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs,” arXiv:1412.7062 [cs], Jun. 2016. [Online]. Available: http://arxiv.org/abs/1412.7062</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="web">M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results, 2012. [Online]. Available: http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="confproc">G. Lin, A. Milan, C. Shen, and I. Reid, “RefineNet: Multi-path Refinement Networks for High-Resolution Semantic Segmentation,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, Jul. 2017, pp. 5168–5177.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="confproc">Y. Xian, S. Choudhury, Y. He, B. Schiele, and Z. Akata, “Semantic Projection Network for Zero- and Few-Label Semantic Segmentation,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, Jun. 2019, pp. 8248–8257.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="preprint">Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata, “Zero-Shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly,” arXiv:1707.00600 [cs], Aug. 2018. [Online]. Available: http://arxiv.org/abs/1707.00600</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="confproc">H. Caesar, J. Uijlings, and V. Ferrari, “COCO-Stuff: Thing and stuff classes in context,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1209–1218.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="confproc">Z. Huang, X. Wang, L. Huang, C. Huang, Y. Wei, and W. Liu, “CCNet: Criss-Cross Attention for Semantic Segmentation,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), Oct. 2019, pp. 603–612. [Online]. Available: https://ieeexplore.ieee.org/document/9009011/</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="confproc">M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The Cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="confproc">B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, “Scene parsing through ADE20K dataset,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 633–641.</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="preprint">L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking Atrous Convolution for Semantic Image Segmentation,” arXiv:1706.05587 [cs], Dec. 2017. [Online]. Available: http://arxiv.org/abs/1706.05587</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="confproc">A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollar, “Panoptic Segmentation,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, Jun. 2019, pp. 9396–9405.</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="confproc">C. Liu, L.-C. Chen, F. Schroff, H. Adam, W. Hua, A. L. Yuille, and L. Fei-Fei, “Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, Jun. 2019, pp. 82–92.</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="confproc">J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="journal">S. Mittal, M. Tatarchenko, and T. Brox, “Semi-Supervised Semantic Segmentation with High- and Low-level Consistency,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8935407/</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="journal">A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, P. Martinez-Gonzalez, and J. Garcia-Rodriguez, “A survey on deep learning techniques for image and video semantic segmentation,” Applied Soft Computing, vol. 70, pp. 41–65, Sep. 2018. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S1568494618302813</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="preprint">S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, “Image Segmentation Using Deep Learning: A Survey,” arXiv:2001.05566 [cs], Jan. 2020. [Online]. Available: http://arxiv.org/abs/2001.05566</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">F. Lateef and Y. Ruichek, “Survey on semantic segmentation using deep learning techniques,” Neurocomputing, vol. 338, pp. 321–348, Apr. 2019. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S092523121930181X</mixed-citation>
                    </ref>
                                    <ref id="ref25">
                        <label>25</label>
                        <mixed-citation publication-type="journal">I. Ulku and E. Akagunduz, “A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D Images,” IEEE Transactions on Knowledge and Data Engineering, p. 14, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref26">
                        <label>26</label>
                        <mixed-citation publication-type="journal">Z. Emersic, V. Struc, and P. Peer, “Ear recognition: More than a survey,” Neurocomputing, vol. 255, pp. 26–39, Sep. 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S092523121730543X</mixed-citation>
                    </ref>
                                    <ref id="ref27">
                        <label>27</label>
                        <mixed-citation publication-type="confproc">M. Bizjak, P. Peer, and Z. Emersic, “Mask R-CNN for Ear Detection,” in 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, May 2019, pp. 1624–1628.</mixed-citation>
                    </ref>
                                    <ref id="ref28">
                        <label>28</label>
                        <mixed-citation publication-type="confproc">R. Raposo, E. Hoyle, A. Peixinho, and H. Proença, “UBEAR: A dataset of ear images captured on-the-move in uncontrolled conditions,” in 2011 IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (CIBIM), 2011, pp. 84–90.</mixed-citation>
                    </ref>
                                    <ref id="ref29">
                        <label>29</label>
                        <mixed-citation publication-type="confproc">Y. Zhou and S. Zaferiou, “Deformable Models of Ears in-the-Wild for Alignment and Recognition,” in 2017 12th IEEE International Conference on Automatic Face &amp; Gesture Recognition (FG 2017), 2017, pp. 626–633.</mixed-citation>
                    </ref>
                                    <ref id="ref30">
                        <label>30</label>
                        <mixed-citation publication-type="journal">Z. Emersic, L. L. Gabriel, V. Struc, and P. Peer, “Convolutional encoder–decoder networks for pixel-wise ear detection and segmentation,” IET Biometrics, vol. 7, no. 3, pp. 175–184, May 2018. [Online]. Available: https://digital-library.theiet.org/content/journals/10.1049/iet-bmt.2017.0240</mixed-citation>
                    </ref>
                                    <ref id="ref31">
                        <label>31</label>
                        <mixed-citation publication-type="journal">Z. Emersic, D. Susanj, B. Meden, P. Peer, and V. Struc, “ContexedNet: Context-aware ear detection in unconstrained settings,” IEEE Access, vol. 9, pp. 145175–145190, 2021.</mixed-citation>
                    </ref>
                                    <ref id="ref32">
                        <label>32</label>
                        <mixed-citation publication-type="journal">C. Cintas, C. Delrieux, P. Navarro, M. Quinto-Sánchez, B. Pazos, and R. Gonzalez-José, “Automatic Ear Detection and Segmentation over Partially Occluded Profile Face Images,” Journal of Computer Science and Technology, vol. 19, no. 01, p. e08, Apr. 2019. [Online]. Available: http://journal.info.unlp.edu.ar/JCST/article/view/1097</mixed-citation>
                    </ref>
                                    <ref id="ref33">
                        <label>33</label>
                        <mixed-citation publication-type="confproc">X. Zhang, L. Yuan, and J. Huang, “Physiological Curves Extraction of Human Ear Based on Improved YOLACT,” in 2020 IEEE 2nd International Conference on Civil Aviation Safety and Information Technology (ICCASIT), 2020, pp. 390–394.</mixed-citation>
                    </ref>
                                    <ref id="ref34">
                        <label>34</label>
                        <mixed-citation publication-type="journal">I. I. Ganapathi, S. Prakash, I. R. Dave, and S. Bakshi, “Unconstrained ear detection using ensemble-based convolutional neural network model,” Concurrency and Computation: Practice and Experience, p. e5197, Feb. 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref35">
                        <label>35</label>
                        <mixed-citation publication-type="journal">Y. Zhang and Z. Mu, “Ear Detection under Uncontrolled Conditions with Multiple Scale Faster Region-Based Convolutional Neural Networks,” Symmetry, vol. 9, no. 4, p. 53, Apr. 2017. [Online]. Available: http://www.mdpi.com/2073-8994/9/4/53</mixed-citation>
                    </ref>
                                    <ref id="ref36">
                        <label>36</label>
                        <mixed-citation publication-type="journal">A. Kamboj, R. Rani, A. Nigam, and R. Jha, “CED-Net: context-aware ear detection network for unconstrained images,” Pattern Analysis and Applications, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref37">
                        <label>37</label>
                        <mixed-citation publication-type="journal">W. Raveane, P. L. Galdámez, and M. A. González Arrieta, “Ear Detection and Localization with Convolutional Neural Networks in Natural Images and Videos,” Processes, vol. 7, no. 7, p. 457, Jul. 2019. [Online]. Available: https://www.mdpi.com/2227-9717/7/7/457</mixed-citation>
                    </ref>
                                    <ref id="ref38">
                        <label>38</label>
                        <mixed-citation publication-type="book">L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation,” in Computer Vision – ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Cham: Springer International Publishing, 2018, vol. 11211, pp. 833–851.</mixed-citation>
                    </ref>
                                    <ref id="ref39">
                        <label>39</label>
                        <mixed-citation publication-type="confproc">K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.</mixed-citation>
                    </ref>
                                    <ref id="ref40">
                        <label>40</label>
                        <mixed-citation publication-type="preprint">S. S. M. Salehi, D. Erdogmus, and A. Gholipour, “Tversky loss function for image segmentation using 3D fully convolutional deep networks,” arXiv:1706.05721 [cs], Jun. 2017. [Online]. Available: http://arxiv.org/abs/1706.05721</mixed-citation>
                    </ref>
                                    <ref id="ref41">
                        <label>41</label>
                        <mixed-citation publication-type="preprint">D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv:1412.6980 [cs], Jan. 2017. [Online]. Available: http://arxiv.org/abs/1412.6980</mixed-citation>
                    </ref>
                                    <ref id="ref42">
                        <label>42</label>
                        <mixed-citation publication-type="journal">U. Kacar and M. Kirci, “ScoreNet: Deep cascade score level fusion for unconstrained ear recognition,” IET Biometrics, vol. 8, no. 2, pp. 109–120, 2018.</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
