TY - JOUR
T1 - FLAG-Net: Fraktallar ve Lacunarity Tabanlı Hibrit Derin Öğrenme Yaklaşımı ile Cilt Lezyonlarının Sınıflandırılması
TT - FLAG-Net: Classification of Skin Lesions with a Hybrid Deep Learning Approach Based on Fractals and Lacunarity
AU - Özkan, Yasin
PY - 2025
DA - October
Y2 - 2025
DO - 10.2339/politeknik.1734810
JF - Politeknik Dergisi
PB - Gazi Üniversitesi
WT - DergiPark
SN - 2147-9429
SP - 1
EP - 1
LA - tr
AB - Son yıllarda yapay zeka tabanlı yöntemler, özellikle de derin öğrenme, tıbbi görüntü analizinde önemli başarılar elde etmiştir. Bu çalışma, fraktal boyut ve lakunarite tabanlı doku analizi yoluyla yapısal hassasiyeti artırarak geleneksel CNN sınırlamalarının üstesinden gelmek için tasarlanmış hibrit bir derin öğrenme modeli olan FLAG-Net'i önermektedir. FLAG-Net, çok seviyeli evrişimsel özellikleri bir dikkat mekanizmasıyla zenginleştirmekte ve sınıflandırma performansını artırmak için morfolojik ve fraktal yapı haritalarını entegre etmektedir. Model, HAM10000 ve ISIC 2019 cilt lezyonu veri kümeleri üzerinde değerlendirilmiş ve sırasıyla %98,54 ve %98,72 doğruluk oranlarına ulaşarak InceptionV3, EfficientNet, VGG19 ve ResNet50 gibi iyi bilinen mimarilerden daha iyi performans göstermiştir. Temel bileşenlerin katkısını ayrı ayrı analiz etmek için ablasyon çalışmaları yapılmış ve dikkat mekanizmasının, çok düzeyli özellik füzyonunun ve fraktal/lakunarite haritalarının sınıflandırma sonuçlarını önemli ölçüde artırdığı doğrulanmıştır. Genel olarak, FLAG-Net yalnızca yüksek doğruluk elde etmekle kalmaz, aynı zamanda karmaşık doku modellerini etkili bir şekilde yakalayarak karar verme sürecini de güçlendirir. Bulgular, FLAG-Net'in tıbbi görüntü sınıflandırmasında güçlü klinik uygulanabilirliğe sahip güvenilir ve genelleştirilebilir bir model olarak potansiyelini vurgulamaktadır.
KW - Fraktal boyut
KW - Deri lezyonu
KW - HAM10000
KW - ISIC 2019
N2 - In recent years, artificial intelligence-based methods, particularly deep learning, have achieved significant success in medical image analysis. This study proposes FLAG-Net, a hybrid deep learning model designed to overcome traditional CNN limitations by enhancing structural sensitivity through fractal dimension and lacunarity-based texture analysis. FLAG-Net enriches multilevel convolutional features with an attention mechanism and integrates morphological and fractal structure maps to improve classification performance. The model was evaluated on the HAM10000 and ISIC 2019 skin lesion datasets, achieving accuracies of 98.54% and 98.72%, respectively, outperforming well-known architectures such as InceptionV3, EfficientNet, VGG19, and ResNet50. Ablation studies were performed to analyze the contribution of key components individually, confirming that the attention mechanism, multilevel feature fusion, and fractal/lacunarity maps significantly enhance classification results. Overall, FLAG-Net not only achieves high accuracy but also strengthens decision-making by effectively capturing complex texture patterns. The findings highlight FLAG-Net’s potential as a reliable and generalizable model with strong clinical applicability in medical image classification.
CR - [1] Hogue L., and Harvey V. M., “Basal cell carcinoma, squamous cell carcinoma, and cutaneous melanoma in skin of color patients”, Dermatologic Clinics, 37(4): 519–526, (2019).
CR - [2] Gonzalez M. L., Young E. D., Bush J., McKenzie K., Hunt E., Tonkovic-Capin V., and Fraga G. R., “Histopathologic features of melanoma in difficult-to-diagnose lesions: a case-control study”, Journal of the American Academy of Dermatology, 77(3): 543–548, (2017).
CR - [3] Liu Y., Pu H., and Sun D. W., “Efficient extraction of deep image features using convolutional neural network (CNN) for applications in detecting and analysing complex food matrices”, Trends in Food Science & Technology, 113: 193–204, (2021).
CR - [4] Nasreen G., Haneef K., Tamoor M., and Irshad A., “A comparative study of state-of-the-art skin image segmentation techniques with CNN”, Multimedia Tools and Applications, 82(7): 10921–10942, (2023).
CR - [5] Ferrah I., Benmahamed Y., Jahanger H. K., Teguar M., and Kherif O., “A new box-counting-based-image fractal dimension estimation method for discharges recognition on polluted insulator model”, IET Science, Measurement & Technology, 19(1): e70002, (2025).
CR - [6] Das Bhattacharjee A., Banerjee N., Mitra D., Koley S., and Banerjee S., “U-Net Segmentation and Lacunarity for Enhanced Analysis of Hyperspectral Imagery”, Proceedings of the International Conference on Computing and Machine Learning, Singapore: Springer Nature, pp. 129–138, (2024).
CR - [7] Ayata F., “Machine Learning-Based Approach for COVID-19 Detection and Outbreak Management: Blood Gas Analysis”, Eastern Journal of Science, 7(1): 1–10, (2024).
CR - [8] Genç H., Seyyarer E., and Ayata F., “Deep Learning-Driven MRI analysis for accurate diagnosis and grading of lumbar spinal stenosis”, Measurement, 251: 117294, (2025).
CR - [9] Jinnai S., Yamazaki N., Hirano Y., Sugawara Y., Ohe Y., and Hamamoto R., “The development of a skin cancer classification system for pigmented skin lesions using deep learning”, Biomolecules, 10(8): 1123, (2020).
CR - [10] Tahir M., Naeem A., Malik H., Tanveer J., Naqvi R. A., and Lee S. W., “DSCC_Net: multi-classification deep learning models for diagnosing of skin cancer using dermoscopic images”, Cancers, 15(7): 2179, (2023).
CR - [11] Bechelli S., and Delhommelle J., “Machine learning and deep learning algorithms for skin cancer classification from dermoscopic images”, Bioengineering, 9(3): 97, (2022).
CR - [12] Jain S., Singhania U., Tripathy B., Nasr E. A., Aboudaif M. K., and Kamrani A. K., “Deep learning-based transfer learning for classification of skin cancer”, Sensors, 21(23): 8142, (2021).
CR - [13] Bassel A., Abdulkareem A. B., Alyasseri Z. A. A., Sani N. S., and Mohammed H. J., “Automatic malignant and benign skin cancer classification using a hybrid deep learning approach”, Diagnostics, 12(10): 2472, (2022).
CR - [14] Ali M. S., Miah M. S., Haque J., Rahman M. M., and Islam M. K., “An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models”, Machine Learning with Applications, 5: 100036, (2021).
CR - [15] Monika M. K., Vignesh N. A., Kumari C. U., Kumar M. N. V. S. S., and Lydia E. L., “Skin cancer detection and classification using machine learning”, Materials Today: Proceedings, 33: 4266–4270, (2020).
CR - [16] Mijwil M. M., “Skin cancer disease images classification using deep learning solutions”, Multimedia Tools and Applications, 80(17): 26255–26271, (2021).
CR - [17] Gouda W., Sama N. U., Al-Waakid G., Humayun M., and Jhanjhi N. Z., “Detection of skin cancer based on skin lesion images using deep learning”, Healthcare, 10(7): 1183, (2022).
CR - [18] Chaturvedi S. S., Tembhurne J. V., and Diwan T., “A multi-class skin cancer classification using deep convolutional neural networks”, Multimedia Tools and Applications, 79(39): 28477–28498, (2020).
CR - [19] Tembhurne J. V., Hebbar N., Patil H. Y., and Diwan T., “Skin cancer detection using ensemble of machine learning and deep learning techniques”, Multimedia Tools and Applications, 82(18): 27501–27524, (2023).
CR - [20] Kousis I., Perikos I., Hatzilygeroudis I., and Virvou M., “Deep learning methods for accurate skin cancer recognition and mobile application”, Electronics, 11(9): 1294, (2022).
CR - [21] Sethanan K., Pitakaso R., Srichok T., Khonjun S., Thannipat P., Wanram S., and Nanthasamroeng N., “Double AMIS-ensemble deep learning for skin cancer classification”, Expert Systems with Applications, 234: 121047, (2023).
CR - [22] SM J., P M., Aravindan C., and Appavu R., “Classification of skin cancer from dermoscopic images using deep neural network architectures”, Multimedia Tools and Applications, 82(10): 15763–15778, (2023).
CR - [23] Qasim Gilani S., Syed T., Umair M., and Marques O., “Skin cancer classification using deep spiking neural network”, Journal of Digital Imaging, 36(3): 1137–1147, (2023).
CR - [24] Ghosh H., Rahat I. S., Mohanty S. N., Ravindra J. V. R., and Sobur A., “A study on the application of machine learning and deep learning techniques for skin cancer detection”, International Journal of Computer and Systems Engineering, 18(1): 51–59, (2024).
CR - [25] Adla D., Reddy G. V. R., Nayak P., and Karuna G., “Deep learning-based computer aided diagnosis model for skin cancer detection and classification”, Distributed and Parallel Databases, 40(4): 717–736, (2022).
CR - [26] Thakir M. M., “Quantifying Fractal-Based Features in Dermoscopic Images for Skin Cancer Characterization”, Proceedings of the ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS), pp. 1–5, IEEE, (2024).
CR - [27] Molina-Molina E. O., Solorza-Calderón S., and Álvarez-Borrego J., “Classification of dermoscopy skin lesion color-images using fractal-deep learning features”, Applied Sciences, 10(17): 5954, (2020).
CR - [28] Moldovanu S., Damian Michis F. A., Biswas K. C., Culea-Florescu A., and Moraru L., “Skin lesion classification based on surface fractal dimensions and statistical color cluster features using an ensemble of machine learning techniques”, Cancers, 13(21): 5256, (2021).
CR - [29] Jacob S., and Rosita J. D., “Fractal model for skin cancer diagnosis using probabilistic classifiers”, International Journal of Advances in Signal and Image Sciences, 7(1): 21–29, (2021).
CR - [30] Çetiner H., and Metlek S., “A new CNN-based deep learning model approach for skin cancer detection and classification”, in Advances in Deep Generative Models for Medical Artificial Intelligence, Cham: Springer Nature Switzerland, pp. 177–199, (2023).
CR - [31] Maniraj S. P., and Maran P. S., “A hybrid deep learning approach for skin cancer diagnosis using subband fusion of 3D wavelets”, The Journal of Supercomputing, 78(10): 12394–12409, (2022).
CR - [32] Ozdemir B., and Pacal I., “An innovative deep learning framework for skin cancer detection employing ConvNeXtV2 and focal self-attention mechanisms”, Results in Engineering, 25: 103692, (2025).
CR - [33] Ozdemir B., and Pacal I., “A robust deep learning framework for multiclass skin cancer classification”, Scientific Reports, 15(1): 4938, (2025).
CR - [34] Rey-Barroso L., Vilaseca M., Royo S., Díaz-Doutón F., Lihacova I., Bondarenko A., and Burgos-Fernández F. J., “Training state-of-the-art deep learning algorithms with visible and extended near-infrared multispectral images of skin lesions for the improvement of skin cancer diagnosis”, Diagnostics, 15(3): 355, (2025).
CR - [35] Ali S. N., Ahmed M. T., Jahan T., Paul J., Sani S. S., Noor N., and Hasan T., “A web-based mpox skin lesion detection system using state-of-the-art deep learning models considering racial diversity”, Biomedical Signal Processing and Control, 98: 106742, (2024).
CR - [36] Shaik A., Dutta S. S., Sawant I. M., Kumar S., Balasundaram A., and De K., “An attention-based hybrid approach using CNN and BiLSTM for improved skin lesion classification”, Scientific Reports, 15(1): 15680, (2025).
CR - [37] Wang C., and An W., “An improved 3D box-counting dimension computing technology for estimating the complexity of 3D models”, IEEE Access, 10: 119314–119320, (2022).
CR - [38] Zou S., Wang X., Yuan T., Zeng K., Li G., and Xie X., “Moving object detection in shallow underwater using multi-scale spatial-temporal lacunarity”, Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, (2024).
CR - [39] Zhang Z., and Wang M., “Convolutional neural network with convolutional block attention module for finger vein recognition”, arXiv preprint, arXiv:2202.06673, (2022).
CR - [40] Hatipoğlu A., and Altuntaş V., “DeepTFBS: Transkripsiyon faktörü bağlanma bölgeleri tahmini için derin öğrenme yöntemleri kullanan hibrit bir model”, Politeknik Dergisi, pp. 1–1, (2024).
CR - [41] Jin X., Xie Y., Wei X. S., Zhao B. R., Chen Z. M., and Tan X., “Delving deep into spatial pooling for squeeze-and-excitation networks”, Pattern Recognition, 121: 108159, (2022).
CR - [42] Roopa Devi E. M., and Suganthe R. C., “Enhanced transductive support vector machine classification with grey wolf optimizer cuckoo search optimization for intrusion detection system”, Concurrency and Computation: Practice and Experience, 32(4): e4999, (2020).
CR - [43] Alzamel M., Alhejaili S., Alhumaidhi F., Alismail J., Almubarak L., Altammami H., and Lim Z., “Advanced skin cancer detection using deep learning”, Proceedings of the International Conference on Engineering Applications of Neural Networks, Cham: Springer Nature Switzerland, pp. 267–278, (2023).
CR - [44] Franciotti R., Moharrami M., Quaranta A., Bizzoca M. E., Piattelli A., Aprile G., and Perrotti V., “Use of fractal analysis in dental images for osteoporosis detection: a systematic review and meta-analysis”, Osteoporosis International, 32(6): 1041–1052, (2021).
CR - [45] Karperien A. L., and Jelinek H. F., “Box-counting fractal analysis: a primer for the clinician”, The Fractal Geometry of the Brain, Cham: Springer International Publishing, pp. 15–55, (2024).
CR - [46] Song G., Qin C., Zhang K., Yao X., Bao F., and Zhang Y., “Adaptive interpolation scheme for image magnification based on local fractal analysis”, IEEE Access, 8: 34326–34338, (2020).
CR - [47] Huang J., Zhou Y., Luo Y., Liu G., Guo H., and Yang G., “Representing topological self-similarity using fractal feature maps for accurate segmentation of tubular structures”, Proceedings of the European Conference on Computer Vision (ECCV), Cham: Springer Nature Switzerland, pp. 143–160, (2024).
CR - [48] Cheng K., Shen Y., and Dinov I. D., “Applications of deep neural networks with fractal structure and attention blocks for 2D and 3D brain tumor segmentation”, Journal of Statistical Theory and Practice, 18(3): 31, (2024).
CR - [49] Dong P., “Lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns”, Computers & Geosciences, 35(10): 2100–2110, (2009).
CR - [50] Scott R., Kadum H., Salmaso G., Calaf M., and Cal R. B., “A lacunarity-based index for spatial heterogeneity”, Earth and Space Science, 9(8): e2021EA002180, (2022).
CR - [51] Cui J., Liu C. L., Jennane R., Ai S., Dai K., and Tsai T. Y., “A highly generalized classifier for osteoporosis radiography based on multiscale fractal, lacunarity, and entropy distributions”, Frontiers in Bioengineering and Biotechnology, 11: 1054991, (2023).
CR - [52] Ma F., Sun X., Zhang F., Zhou Y., and Li H. C., “What catch your attention in SAR images: saliency detection based on soft-superpixel lacunarity cue”, IEEE Transactions on Geoscience and Remote Sensing, 61: 1–17, (2022).
CR - [53] Zhao Y., Chen J., Xu X., Lei J., and Zhou W., “SEV-Net: residual network embedded with attention mechanism for plant disease severity detection”, Concurrency and Computation: Practice and Experience, 33(10): e6161, (2021).
CR - [54] Farag M. M., Fouad M., and Abdel-Hamid A. T., “Automatic severity classification of diabetic retinopathy based on DenseNet and convolutional block attention module”, IEEE Access, 10: 38299–38308, (2022).
CR - [55] Zhou K., Zhang M., Wang H., and Tan J., “Ship detection in SAR images based on multi-scale feature extraction and adaptive feature fusion”, Remote Sensing, 14(3): 755, (2022).
CR - [56] Li H., Ding M., Zhang R., and Xiu C., “Motor imagery EEG classification algorithm based on CNN-LSTM feature fusion network”, Biomedical Signal Processing and Control, 72: 103342, (2022).
CR - [57] Zhu W., Dai Z., Gu H., and Zhu X., “Water extraction method based on multi-texture feature fusion of synthetic aperture radar images”, Sensors, 21(14): 4945, (2021).
CR - [58] Karkavelraja J., Dharanyadevi P., and Zayaraz G., “Handwritten digit recognition using CNN with average pooling and global average pooling”, Proceedings of the 6th International Conference on Contemporary Computing and Informatics (IC3I), IEEE, Vol. 6, pp. 599–603, (2023).
CR - [59] Srivastava N., Hinton G., Krizhevsky A., Sutskever I., and Salakhutdinov R., “Dropout: a simple way to prevent neural networks from overfitting”, The Journal of Machine Learning Research, 15(1): 1929–1958, (2014).
CR - [60] Kılıç K., Özcan U., Kılıç K., and Doğru İ., “Using deep learning techniques for furniture image classification”, Politeknik Dergisi, 27(5): 1903–1911, (2024).
CR - [61] Kassem M. A., Hosny K. M., and Fouad M. M., “Skin lesions classification into eight classes for ISIC using deep convolutional neural network and transfer learning”, IEEE Access, 8: 114822–114832, (2019).
CR - [62] Tschandl P., Rosendahl C., and Kittler H., “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions”, Scientific Data, 5: 180161, (2018).
CR - [63] Ali K., Shaikh Z. A., Khan A. A., and Laghari A. A., “Multiclass skin cancer classification using EfficientNets – a first step towards preventing skin cancer”, Neuroscience Informatics, 2(4): 100034, (2022).
CR - [64] Mridha K., Uddin M. M., Shin J., Khadka S., and Mridha M. F., “An interpretable skin cancer classification using optimized convolutional neural network for a smart healthcare system”, IEEE Access, (2023).
CR - [65] Ibrahim S., Amin K. M., and Ibrahim M., “Enhanced skin cancer classification using pre-trained CNN models and transfer learning: a clinical decision support system for dermatologists”, International Journal of Computer Information, 10(3): 126–133, (2023).
CR - [66] Saeed M., Naseer A., Masood H., Ur Rehman S., and Gruhn V., “The power of generative AI to augment for enhanced skin cancer classification: a deep learning approach”, IEEE Access, 11: 130330–130344, (2023).
CR - [67] Nugroho E. S., Ardiyanto I., and Nugroho H. A., “Boosting the performance of pretrained CNN architecture on dermoscopic pigmented skin lesion classification”, Skin Research and Technology, 29(11): e13505, (2023).
CR - [68] Reis H. C., Turk V., Khoshelham K., and Kaya S., “InSiNet: a deep convolutional approach to skin cancer detection and segmentation”, Medical & Biological Engineering & Computing, 60: 643–662, (2022).
CR - [69] Houssein E. H., Abdelkareem D. A., Hu G., Hameed M. A., Ibrahim I. A., and Younan M., “An effective multiclass skin cancer classification approach based on deep convolutional neural network”, Cluster Computing, 27(9): 12799–12819, (2024).
CR - [70] Arya M. S., Prabahavathy P., and Ahamed S., “Skin lesion classification and prediction by data augmentation in HAM10000 and ISIC 2019 dataset”, (2022).
CR - [71] Pacal I., Ozdemir B., Zeynalov J., Gasimov H., and Pacal N., “A novel CNN-ViT-based deep learning model for early skin cancer diagnosis”, Biomedical Signal Processing and Control, 104: 107627, (2025).
CR - [72] Shafiq M., Aggarwal K., Jayachandran J., Srinivasan G., Boddu R., and Alemayehu A., “RETRACTED: A novel skin lesion prediction and classification technique: ViT-GradCAM”, Skin Research and Technology, 30(9): e70040, (2024).
CR - [73] Aruk I., Pacal I., and Toprak A. N., “A novel hybrid ConvNeXt-based approach for enhanced skin lesion classification”, Expert Systems with Applications, 127721, (2025).
UR - https://doi.org/10.2339/politeknik.1734810
L1 - https://dergipark.org.tr/tr/download/article-file/5022878
ER -