Research Article

A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images

Year 2025, Volume: 37, Issue: 2, 133–143, 25.06.2025
https://doi.org/10.7240/jeps.1623086
https://izlik.org/JA66UH23MA

Abstract

This study addresses wheat yellow-rust disease, a problem aggravated by climate change and improper farming practices. Early detection of the disease, which manifests as yellow-orange spores on wheat leaves, is crucial for mitigating reduced crop yield, increased pesticide use, and environmental harm. Current CNN-based semantic segmentation models operate mainly on local pixel neighborhoods, which can be insufficient for large areas. This study proposes a novel version of the UNetFormer architecture that enhances the CNN-based encoder with Convolutional Block Attention Module (CBAM) blocks while retaining a Transformer-based decoder, addressing the limitations of current approaches. The CBAM modules refine feature extraction along the spatial and channel axes, allowing the network to prioritize meaningful features, particularly near-infrared (NIR) reflectance, which is critical for detecting wheat yellow rust. The proposed UNetFormer2 model effectively captures long-range dependencies in multispectral remote sensing images, improving disease detection across large agricultural areas. Specifically, the model achieves IoU improvements of 2.1% for RGB, 4.6% for NDVI, and 3% for NIR inputs over the baseline UNetFormer model. This work aims to improve the efficiency of wheat yellow-rust disease monitoring and to contribute to more sustainable agricultural practices by reducing unnecessary pesticide application.
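
As background for the attention mechanism described above, the following is a minimal PyTorch sketch of CBAM as defined by Woo et al. (2018): channel attention from a shared MLP over average- and max-pooled descriptors, followed by spatial attention from a 7×7 convolution over pooled channel maps. It is illustrative only; the reduction ratio, kernel size, and placement inside UNetFormer2's encoder are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over avg- and max-pooled descriptors."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """Spatial attention: 7x7 conv over channel-wise avg and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Sequential channel-then-spatial attention applied to a feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)    # reweight channels (e.g., emphasize NIR-driven features)
        return x * self.sa(x)  # reweight spatial locations

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)  # e.g., one encoder stage's output
    refined = CBAM(64)(feats)
    print(refined.shape)                # torch.Size([2, 64, 32, 32])
```

In a ResNet-style encoder such as UNetFormer's ResNet18 backbone, a block of this form would typically be inserted after each residual stage; the exact insertion points are not specified in the abstract.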

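The per-modality gains reported above are measured in intersection over union (IoU), and the NDVI input is derived from the red and NIR bands. For reference, here is a minimal sketch of the standard NDVI formula, NDVI = (NIR − Red) / (NIR + Red), and a per-class IoU computation of the kind commonly used to report such scores; the array layout and thresholds are illustrative assumptions, not the study's evaluation code.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def iou(pred: np.ndarray, target: np.ndarray, cls: int) -> float:
    """Intersection over union for one class, from integer label maps."""
    p, t = pred == cls, target == cls
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return float(inter) / float(union) if union > 0 else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nir, red = rng.random((64, 64)), rng.random((64, 64))
    v = ndvi(nir, red)                    # values roughly in [-1, 1]
    pred = (v > 0.20).astype(int)         # toy predicted rust mask
    target = (v > 0.25).astype(int)       # toy ground-truth mask
    print(iou(pred, target, cls=1))
```
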
References

  • Zhang, X., Han, L., Dong, Y., Shi, Y., Huang, W., Han, L., González-Moreno, P., Ma, H., Ye, H., & Sobeih, T. (2019). A deep learning-based approach for automated yellow rust disease detection from high-resolution hyperspectral UAV images. Remote Sensing, 11(13), 1554.
  • Mi, Z., Zhang, X., Su, J., Han, D., & Su, B. (2020). Wheat stripe rust grading by deep learning with attention mechanism and images from mobile devices. Frontiers in Plant Science, 11, 558126.
  • Zhang, J., Pu, R., Loraamm, R. W., Yang, G., & Wang, J. (2014). Comparison between wavelet spectral features and conventional spectral features in detecting yellow rust for winter wheat. Computers and Electronics in Agriculture, 100, 79–87.
  • Liu, W., Yang, G., Xu, F., Qiao, H., Fan, J., Song, Y., & Zhou, Y. (2018). Comparisons of detection of wheat stripe rust using hyperspectral and UAV aerial photography. Acta Phytopathologica Sinica, 48(2), 223–227.
  • Su, J., Yi, D., Coombes, M., Liu, C., Zhai, X., McDonald-Maier, K., & Chen, W. H. (2022). Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Computers and Electronics in Agriculture, 192, 106621.
  • Zhang, T., Xu, Z., Su, J., Yang, Z., Liu, C., Chen, W. H., & Li, J. (2021). IR-UNet: Irregular segmentation U-shape network for wheat yellow rust detection by UAV multispectral imagery. Remote Sensing, 13(19), 3892.
  • Su, J., Yi, D., Su, B., Mi, Z., Liu, C., Hu, X., Xu, X., Guo, L., & Chen, W.-H. (2021). Aerial visual perception in smart farming: Field study of wheat yellow rust monitoring. IEEE Transactions on Industrial Informatics, 17(3), 2242–2249.
  • Ulku, I. (2024). ResLMFFNet: A real-time semantic segmentation network for precision agriculture. Journal of Real-Time Image Processing, 21(4), 101.
  • Su, J., Liu, C., & Chen, W. H. (2022). UAV multispectral remote sensing for yellow rust mapping: Opportunities and challenges. In Unmanned Aerial Systems in Precision Agriculture: Technological Progresses and Applications (pp. 107–122). Springer, Singapore.
  • Wang, L., Li, R., Zhang, C., Fang, S., Duan, C., Meng, X., & Atkinson, P. M. (2022). UNetFormer: A UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 190, 196–214.
  • Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018). CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3–19).
  • Ulku, I., Tanriover, O. O., & Akagündüz, E. (2024). LoRA-NIR: Low-rank adaptation of vision transformers for remote sensing with near-infrared imagery. IEEE Geoscience and Remote Sensing Letters, 21, 1–5.
  • Ulku, I., Akagündüz, E., & Ghamisi, P. (2022). Deep semantic segmentation of trees using multispectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15, 7589–7604.
  • Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015 (pp. 234–241).
  • Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481–2495.
  • Chen, L. C., Zhu, Y., Papandreou, G., Schroff, F., & Adam, H. (2018). Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 801–818).
  • Yu, C., Wang, J., Peng, C., Gao, C., Yu, G., & Sang, N. (2018). BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 325–341).
  • Li, H., Xiong, P., Fan, H., & Sun, J. (2019). DFANet: Deep feature aggregation for real-time semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (pp. 9522–9531).
  • Zhou, L., Zhang, C., & Wu, M. (2018). D-LinkNet: LinkNet with pretrained encoder and dilated convolution for high-resolution satellite imagery road extraction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, (pp. 182–186).
  • Ding, L., Tang, H., & Bruzzone, L. (2020). LANet: Local attention embedding to improve the semantic segmentation of remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 59(1), 426–435.
  • Liu, R., Mi, L., & Chen, Z. (2020). AFNet: Adaptive fusion network for remote sensing image semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 59(9), 7871–7886.
  • Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. https://arxiv.org/abs/2010.11929.
  • Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, (pp. 248–255).
  • Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., & Jégou, H. (2021). Training data-efficient image transformers & distillation through attention. International Conference on Machine Learning, (pp. 10347–10357).
  • Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., & Guo, B. (2021). Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), (pp. 10012–10022).
  • Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., & Luo, P. (2021). SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34, 12077–12090.
  • Strudel, R., Garcia, R., Laptev, I., & Schmid, C. (2021). Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, (pp. 7262–7272).
  • Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., Torr, P. H. S., & Zhang, L. (2021). Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (pp. 6881–6890).
  • Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A. L., & Zhou, Y. (2021). TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306. https://arxiv.org/abs/2102.04306.
  • Yan, Y., Liu, R., Chen, H., Zhang, L., & Zhang, Q. (2023). CCT-Unet: A U-shaped network based on convolution coupled transformer for segmentation of peripheral and transition zones in prostate MRI. IEEE Journal of Biomedical and Health Informatics, 27(9), 4341–4351.
  • Xiao, T., Liu, Y., Huang, Y., Li, M., & Yang, G. (2023). Enhancing multiscale representations with transformer for remote sensing image semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 61, 1–16.
  • Lu, C., Zhang, X., Du, K., Xu, H., & Liu, G. (2024). CTCFNet: CNN-Transformer complementary and fusion network for high-resolution remote sensing image semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 62, 1–17.
  • Zhang, C., Jiang, W., Zhang, Y., Wang, W., Zhao, Q., & Wang, C. (2022). Transformer and CNN hybrid deep neural network for semantic segmentation of very-high-resolution remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–20.
  • Wu, H., Huang, P., Zhang, M., Tang, W., & Yu, X. (2023). CMTFNet: CNN and multiscale transformer fusion network for remote-sensing image semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 61, 1–12.
  • He, X., Zhou, Y., Zhao, J., Zhang, D., Yao, R., & Xue, Y. (2022). Swin transformer embedding UNet for remote sensing image semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–15.
  • Fan, L., Zhou, Y., Liu, H., Li, Y., & Cao, D. (2023). Combining Swin Transformer with UNet for remote sensing image semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 61, 1–11.
  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (pp. 770–778).
  • Zhou, Z., Rahman Siddiquee, M. M., Tajbakhsh, N., & Liang, J. (2018). UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA 2018) (pp. 3–11).

Details

Primary Language: English
Subjects: Deep Learning
Section: Research Article
Authors

İrem Ülkü (ORCID: 0000-0003-4998-607X)

Submission Date: January 19, 2025
Acceptance Date: April 17, 2025
Early View Date: June 16, 2025
Publication Date: June 25, 2025
DOI: https://doi.org/10.7240/jeps.1623086
IZ: https://izlik.org/JA66UH23MA
Published in Issue: Year 2025, Volume: 37, Issue: 2

Cite

APA: Ülkü, İ. (2025). A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images. International Journal of Advances in Engineering and Pure Sciences, 37(2), 133-143. https://doi.org/10.7240/jeps.1623086
AMA: Ülkü İ. A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images. JEPS. 2025;37(2):133-143. doi:10.7240/jeps.1623086
Chicago: Ülkü, İrem. 2025. “A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images”. International Journal of Advances in Engineering and Pure Sciences 37 (2): 133-43. https://doi.org/10.7240/jeps.1623086.
EndNote: Ülkü İ (June 1, 2025) A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images. International Journal of Advances in Engineering and Pure Sciences 37 2 133–143.
IEEE: [1] İ. Ülkü, “A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images”, JEPS, vol. 37, no. 2, pp. 133–143, Jun. 2025, doi: 10.7240/jeps.1623086.
ISNAD: Ülkü, İrem. “A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images”. International Journal of Advances in Engineering and Pure Sciences 37/2 (June 1, 2025): 133-143. https://doi.org/10.7240/jeps.1623086.
JAMA: Ülkü İ. A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images. JEPS. 2025;37:133–143.
MLA: Ülkü, İrem. “A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images”. International Journal of Advances in Engineering and Pure Sciences, vol. 37, no. 2, June 2025, pp. 133-43, doi:10.7240/jeps.1623086.
Vancouver: Ülkü İ. A CBAM-Enhanced UNetFormer for Semantic Segmentation of Wheat Yellow-Rust Disease Using Multispectral Remote Sensing Images. JEPS [Internet]. 2025 Jun 1;37(2):133-43. Available from: https://izlik.org/JA66UH23MA