<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
            <front>

                <journal-meta>
                                                                <journal-id>saucis</journal-id>
            <journal-title-group>
                                                                                    <journal-title>Sakarya University Journal of Computer and Information Sciences</journal-title>
            </journal-title-group>
                                        <issn pub-type="epub">2636-8129</issn>
                                                                                            <publisher>
                    <publisher-name>Sakarya University</publisher-name>
                </publisher>
                    </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.35377/saucis...1798069</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Artificial Intelligence (Other)</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>Yapay Zeka (Diğer)</subject>
                                                    </subj-group>
                                    </article-categories>
                                                                                                                                                        <title-group>
                                                                                                                                                            <article-title>TriaNet: A Tri-Fusion Attention Network for Segmenting Polyps with Ambiguous Boundaries</article-title>
                                                                                                    </title-group>
            
                                                    <contrib-group content-type="authors">
                                                                        <contrib contrib-type="author">
                                <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-7947-2312</contrib-id>
                                                                <name>
                                    <surname>Baraklı</surname>
                                    <given-names>Burhan</given-names>
                                </name>
                                                                    <aff>SAKARYA UNIVERSITY</aff>
                                                            </contrib>
                                                    <contrib contrib-type="author">
                                <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-9412-5223</contrib-id>
                                                                <name>
                                    <surname>Küçüker</surname>
                                    <given-names>Ahmet</given-names>
                                </name>
                                                                    <aff>SAKARYA UNIVERSITY, FACULTY OF ENGINEERING</aff>
                                                            </contrib>
                                                                                </contrib-group>
                        
                <pub-date pub-type="pub" iso-8601-date="2025-12-29">
                    <day>29</day>
                    <month>12</month>
                    <year>2025</year>
                </pub-date>
                                        <volume>8</volume>
                                        <issue>4</issue>
                                        <fpage>798</fpage>
                                        <lpage>811</lpage>
                        
                        <history>
                    <date date-type="received" iso-8601-date="2025-10-06">
                        <day>06</day>
                        <month>10</month>
                        <year>2025</year>
                    </date>
                    <date date-type="accepted" iso-8601-date="2025-11-03">
                        <day>03</day>
                        <month>11</month>
                        <year>2025</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2018, Sakarya University Journal of Computer and Information Sciences</copyright-statement>
                    <copyright-year>2018</copyright-year>
                    <copyright-holder>Sakarya University Journal of Computer and Information Sciences</copyright-holder>
                </permissions>
            
                <abstract><p>Colorectal cancer (CRC) is one of the most common and deadly cancers worldwide. Standard colonoscopy aims to detect polyps, the early-stage precancerous lesions whose removal is critical for disease prevention; in practice, however, polyps can be overlooked, and current computer-aided methods often fail to accurately segment polyps whose weak borders blend into the surrounding tissue. This study proposes a new deep learning architecture, TriaNet (Tri-Fusion Attention Network), to improve the segmentation accuracy of polyps with weak borders. The core innovation of TriaNet is its “triple-fusion” attention mechanism, which dynamically combines three complementary information streams: edge features obtained from a hybrid block containing Scharr, DoG, and Gabor filters; the semantic feature map produced by the decoder; and an instantaneous boundary map derived by applying a Scharr operator to an upper-layer prediction. Furthermore, Deformable Alignment layers in the skip connections enhance the model&#039;s ability to adapt to variable polyp morphologies. TriaNet was evaluated on four benchmark datasets, Kvasir-SEG, CVC-ColonDB, ETIS-LaribPolypDB, and CVC-300, and demonstrated superior performance compared with state-of-the-art methods.</p></abstract>
                                                            
            
                                                                                        <kwd-group>
                                                    <kwd>Polyp Segmentation</kwd>
                        <kwd>Attention Network</kwd>
                        <kwd>U-Net</kwd>
                        <kwd>Fusion Module</kwd>
                                            </kwd-group>
                            
                                                                                                                                                    </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="journal">Li, S., Ren, Y., Yu, Y., Jiang, Q., He, X., &amp; Li, H., “A survey of deep learning algorithms for colorectal polyp segmentation”, Neurocomputing, 614, 128767, 2025.</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">Pacal, I., Karaboga, D., Basturk, A., Akay, B., &amp; Nalbantoglu, U., “A comprehensive review of deep learning in colon cancer”, Computers in Biology and Medicine, 126, 104003, 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="journal">Islam, M. R., Ahamed, M. F., Islam, M. R., Nahiduzzaman, M., &amp; Ahsan, M., “Detection, localization, segmentation, and classification in colorectal cancer screening using deep learning: A systematic review”, Biomedical Signal Processing and Control, 110, 108202, 2025.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">Maas, M. H., et al., “A computer-aided polyp detection system in screening and surveillance colonoscopy: an international, multicentre, randomised, tandem trial”, The Lancet Digital Health, 6(3), e157-e165, 2024.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="confproc">Bui, N. T., Hoang, D. H., Nguyen, Q. T., Tran, M. T., &amp; Le, N., “MEGANet: Multi-scale edge-guided attention network for weak boundary polyp segmentation”, In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 7985-7994, 2024.</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">Hu, Z., Tang, J., Wang, Z., Zhang, K., Zhang, L., &amp; Sun, Q., “Deep learning for image-based cancer detection and diagnosis – A survey”, Pattern Recognition, 83, 134-149, 2018.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="journal">Nault, J. C., Bioulac-Sage, P., &amp; Zucman-Rossi, J., “Reviews in basic and clinical gastroenterology and hepatology”, Gastroenterology, 144, 888-902, 2013.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="confproc">Ronneberger, O., Fischer, P., &amp; Brox, T., “U-net: Convolutional networks for biomedical image segmentation”, In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, Cham: Springer International Publishing, October 2015.</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="journal">Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., &amp; Liang, J., “Unet++: Redesigning skip connections to exploit multiscale features in image segmentation”, IEEE transactions on medical imaging, 39(6), 1856-1867, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="confproc">Jha, D., Riegler, M. A., Johansen, D., Halvorsen, P., &amp; Johansen, H. D., “DoubleU-Net: A deep convolutional neural network for medical image segmentation”, In 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), pp. 558-564, IEEE, July 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="journal">Huang, C. H., Wu, H. Y., &amp; Lin, Y. L., “HarDNet-MSEG: A simple encoder-decoder polyp segmentation neural network that achieves over 0.9 mean Dice and 86 FPS”, arXiv preprint arXiv:2101.07172, 2021.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">Yu, T., &amp; Wu, Q., “HarDNet-CPS: Colorectal polyp segmentation based on harmonic densely united network”, Biomedical Signal Processing and Control, 85, 104953, 2023.</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">Ta, N., Chen, H., Lyu, Y., &amp; Wu, T., “BLE-Net: Boundary learning and enhancement network for polyp segmentation”, Multimedia Systems, 29(5), 3041-3054, 2023.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">Zhou, T., Zhou, Y., He, K., Gong, C., Yang, J., Fu, H., &amp; Shen, D., “Cross-level feature aggregation network for polyp segmentation”, Pattern Recognition, 140, 109555, 2023.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="journal">Zhao, X., et al., “M2SNet: Multi-scale in multi-scale subtraction network for medical image segmentation”, arXiv preprint arXiv:2303.10894, 2023.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">Chen, W., Zhang, R., Zhang, Y., Bao, F., Lv, H., Li, L., &amp; Zhang, C., “Pact-Net: Parallel CNNs and Transformers for medical image segmentation”, Computer Methods and Programs in Biomedicine, 242, 107782, 2023.</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="confproc">Fan, D. P., Ji, G. P., Zhou, T., Chen, G., Fu, H., Shen, J., &amp; Shao, L., “PraNet: Parallel reverse attention network for polyp segmentation”, In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 263-273, Cham: Springer International Publishing, September 2020.</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="confproc">Wei, J., Hu, Y., Zhang, R., Li, Z., Zhou, S. K., &amp; Cui, S., “Shallow attention network for polyp segmentation”, In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 699-708, Cham: Springer International Publishing, September 2021.</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="confproc">Fang, Y., Chen, C., Yuan, Y., &amp; Tong, K. Y., “Selective feature aggregation network with area-boundary constraints for polyp segmentation”, In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 302-310, Cham: Springer International Publishing, October 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="journal">Gao, S. H., Cheng, M. M., Zhao, K., Zhang, X. Y., Yang, M. H., &amp; Torr, P., “Res2Net: A new multi-scale backbone architecture”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(2), 652-662, 2021.</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="confproc">Jha, D., Smedsrud, P. H., Riegler, M. A., Halvorsen, P., De Lange, T., Johansen, D., &amp; Johansen, H. D., “Kvasir-SEG: A segmented polyp dataset”, In International Conference on Multimedia Modeling, pp. 451-462, Cham: Springer International Publishing, December 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="confproc">Bernal, J., Sánchez, J., &amp; Vilariño, F., “Impact of image preprocessing methods on polyp localization in colonoscopy frames”, In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 7350-7354, IEEE, July 2013.</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="data">Huang, C. H., Wu, H. Y., &amp; Lin, Y. L., “Dataset: ETIS-Larib Polyp DB”, https://doi.org/10.57702/pqx39a6l, 2024.</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">Vázquez, D., et al., “A benchmark for endoluminal scene segmentation of colonoscopy images”, Journal of healthcare engineering, 2017(1), 4037190, 2017.</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
