Research Article

Object and Color Classification with Instance Segmentation

Year 2022, Volume 26, Issue 1, 182 - 189, 25.04.2022
https://doi.org/10.19113/sdufenbed.1023674

Abstract

Object detection and classification are among the central problems in computer vision. Convolutional neural networks stand out for their computational performance (speed) and accuracy in the detection and classification tasks required by popular applications such as autonomous vehicles and surveillance systems. However, detection and classification alone do not capture finer attributes, such as the colors of objects of the same type, because each color would have to be defined as a separate class in the network even when the object category is the same. An alternative way to obtain the color of a detected object is to process the corresponding image region at the pixel level. To improve the accuracy of such pixel-level methods, the boundaries of the detected object must be delineated precisely, which requires segmentation in addition to object detection. The color of the object can then be classified from the pixel intensity values inside the detected object boundaries. In this study, instance segmentation is first performed with convolutional neural networks, after which color classification based on pixel information is carried out, so that the colors of the objects are determined along with their classes. The success of the proposed approach has been tested experimentally, contributing an effective method to the literature.
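The pipeline described above (instance mask → masked pixels → dominant color → named-color match) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the mask is assumed to come from any instance-segmentation model (e.g. Mask R-CNN), the palette below is a hypothetical stand-in for a real reference set such as the ISCC-NBS sRGB centroids cited in the references, and the minimal k-means here stands in for whatever clustering the study actually used.

```python
import numpy as np

# Hypothetical named palette; a real system might use the ISCC-NBS
# sRGB centroids instead. Names and RGB values here are assumptions.
PALETTE = {
    "red":   (200, 30, 30),
    "green": (40, 160, 60),
    "blue":  (40, 70, 200),
    "white": (240, 240, 240),
    "black": (20, 20, 20),
}

def dominant_color(image, mask, k=3, iters=10, seed=0):
    """Cluster the pixels inside the instance mask with a basic k-means
    and return the centroid of the largest cluster (dominant RGB)."""
    pixels = image[mask].astype(np.float64)   # boolean mask -> (N, 3) array
    rng = np.random.default_rng(seed)
    # k-means with random initial centers (k-means++ seeding, as in the
    # cited reference, would be a refinement of this step)
    idx = rng.choice(len(pixels), size=min(k, len(pixels)), replace=False)
    centers = pixels[idx].copy()
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    counts = np.bincount(labels, minlength=len(centers))
    return centers[counts.argmax()]

def classify_color(rgb):
    """Nearest-neighbor match of an RGB triple against the named palette."""
    names = list(PALETTE)
    cents = np.array([PALETTE[n] for n in names], dtype=np.float64)
    return names[np.linalg.norm(cents - rgb, axis=1).argmin()]
```

Restricting the clustering to `image[mask]` rather than the whole bounding box is the point of using segmentation: background pixels never enter the color statistics, so the nearest-neighbor match reflects only the object itself.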

References

  • [1] Jiao, L., Zhang, F., Liu, F., Yang, S., Li, L., Feng, Z., & Qu, R. (2019). A Survey of Deep Learning-Based Object Detection. IEEE Access, 7, 128837-128868.
  • [2] Lin, T., Goyal, P., Girshick, R., He, K., & Dollar, P. (2017). Focal Loss for Dense Object Detection. 2017 IEEE International Conference On Computer Vision (ICCV).
  • [3] Chao, P., Kao, C., Ruan, Y., Huang, C., & Lin, Y. (2019). HarDNet: A Low Memory Traffic Network. 2019 IEEE/CVF International Conference On Computer Vision (ICCV).
  • [4] Tan, M., Pang, R., & Le, Q. (2020). EfficientDet: Scalable and Efficient Object Detection. 2020 IEEE/CVF Conference On Computer Vision And Pattern Recognition (CVPR).
  • [5] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. 2016 IEEE Conference On Computer Vision And Pattern Recognition (CVPR).
  • [6] Guo, Y., Liu, Y., Georgiou, T., & Lew, M. (2017). A review of semantic segmentation using deep neural networks. International Journal Of Multimedia Information Retrieval, 7(2), 87-93.
  • [7] Momin, B., & Mujawar, T. (2015). Vehicle detection and attribute based search of vehicles in video surveillance system. 2015 International Conference On Circuits, Power And Computing Technologies [ICCPCT-2015].
  • [8] Chu, W., Liu, Y., Shen, C., Cai, D., & Hua, X. (2018). Multi-Task Vehicle Detection With Region-of-Interest Voting. IEEE Transactions On Image Processing, 27(1), 432-441.
  • [9] Artan, Y., Alkan, B., Balci, B., & Elihos, A. (2019). Deep Learning Based Vehicle Make, Model and Color Recognition Using License Plate Recognition Camera Images. 2019 27Th Signal Processing And Communications Applications Conference (SIU).
  • [10] Bai, M., & Urtasun, R. (2017). Deep Watershed Transform for Instance Segmentation. 2017 IEEE Conference On Computer Vision And Pattern Recognition (CVPR).
  • [11] Bahnsen, C., & Moeslund, T. (2019). Rain Removal in Traffic Surveillance: Does it Matter?. IEEE Transactions On Intelligent Transportation Systems, 20(8), 2802-2819.
  • [12] Pinheiro, P., Lin, T., Collobert, R., & Dollár, P. (2016). Learning to Refine Object Segments. Computer Vision – ECCV 2016, 75-91.
  • [13] He, K., Gkioxari, G., Dollar, P., & Girshick, R. (2017). Mask R-CNN. 2017 IEEE International Conference On Computer Vision (ICCV).
  • [14] Liu, S., Qi, L., Qin, H., Shi, J., & Jia, J. (2018). Path Aggregation Network for Instance Segmentation. 2018 IEEE/CVF Conference On Computer Vision And Pattern Recognition.
  • [15] Fu, C. Y., Shvets, M., & Berg, A. C. (2019). RetinaMask: Learning to predict masks improves state-of-the-art single-shot detection for free. arXiv preprint arXiv:1901.03353.
  • [16] Bolya, D., Zhou, C., Xiao, F., & Lee, Y. (2019). YOLACT: Real-Time Instance Segmentation. 2019 IEEE/CVF International Conference On Computer Vision (ICCV).
  • [17] Bayram F. (2020). Derin öğrenme tabanlı otomatik plaka tanıma. Politeknik Dergisi, 23(4), 955-960.
  • [18] Lin, T., Maire, M., Belongie, S., Hays, J., Perona, P., & Ramanan, D. et al. (2014). Microsoft COCO: Common Objects in Context. Computer Vision – ECCV 2014, 740-755.
  • [19] Everingham, M., Eslami, S., Van Gool, L., Williams, C., Winn, J., & Zisserman, A. (2014). The Pascal Visual Object Classes Challenge: A Retrospective. International Journal Of Computer Vision, 111(1), 98-136.
  • [20] Neuhold, G., Ollmann, T., Bulo, S., & Kontschieder, P. (2017). The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. 2017 IEEE International Conference On Computer Vision (ICCV).
  • [21] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., & Benenson, R. et al. (2016). The Cityscapes Dataset for Semantic Urban Scene Understanding. 2016 IEEE Conference On Computer Vision And Pattern Recognition (CVPR).
  • [22] Arthur, D., & Vassilvitskii, S. (2007). k-means++: The Advantages of Careful Seeding. Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 1027-1035.
  • [23] Bhatia, N., & Ashev, V. (2010). Survey of Nearest Neighbor Techniques. International Journal of Computer Science and Information Security, 8(2), 1-4.
  • [24] Ajmal, A., Hollitt, C., Frean, M., & Al-Sahaf, H. (2018). A Comparison of RGB and HSV Colour Spaces for Visual Attention Models. 2018 International Conference On Image And Vision Computing New Zealand (IVCNZ).
  • [25] Centore, P. (2016). sRGB Centroids for the ISCC-NBS Colour System, https://www.munsellcolourscienceforpainters.com/ColourSciencePapers/sRGBCentroidsForTheISCCNBSColourSystem.pdf (Accessed: 10.11.2021).
  • [26] Wang, P., Jiao, B., Yang, L., Yang, Y., Zhang, S., Wei, W., & Zhang, Y. (2019). Vehicle Re-Identification in Aerial Imagery: Dataset and Approach. 2019 IEEE/CVF International Conference On Computer Vision (ICCV).
  • [27] Bahnsen, C., & Moeslund, T. (2019). Rain Removal in Traffic Surveillance: Does it Matter?. IEEE Transactions On Intelligent Transportation Systems, 20(8), 2802-2819.


Details

Primary Language Turkish
Subjects Engineering
Journal Section Articles
Authors

Ahmet Özcan 0000-0002-1098-7078

Ömer Çetin 0000-0001-5176-6338

Publication Date April 25, 2022
Published in Issue Year 2022

Cite

APA Özcan, A., & Çetin, Ö. (2022). Örnek Bölütlemesi ile Nesne ve Renk Sınıflandırması. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 26(1), 182-189. https://doi.org/10.19113/sdufenbed.1023674
AMA Özcan A, Çetin Ö. Örnek Bölütlemesi ile Nesne ve Renk Sınıflandırması. Süleyman Demirel Üniv. Fen Bilim. Enst. Derg. April 2022;26(1):182-189. doi:10.19113/sdufenbed.1023674
Chicago Özcan, Ahmet, and Ömer Çetin. “Örnek Bölütlemesi Ile Nesne Ve Renk Sınıflandırması”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 26, no. 1 (April 2022): 182-89. https://doi.org/10.19113/sdufenbed.1023674.
EndNote Özcan A, Çetin Ö (April 1, 2022) Örnek Bölütlemesi ile Nesne ve Renk Sınıflandırması. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 26 1 182–189.
IEEE A. Özcan and Ö. Çetin, “Örnek Bölütlemesi ile Nesne ve Renk Sınıflandırması”, Süleyman Demirel Üniv. Fen Bilim. Enst. Derg., vol. 26, no. 1, pp. 182–189, 2022, doi: 10.19113/sdufenbed.1023674.
ISNAD Özcan, Ahmet - Çetin, Ömer. “Örnek Bölütlemesi Ile Nesne Ve Renk Sınıflandırması”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi 26/1 (April 2022), 182-189. https://doi.org/10.19113/sdufenbed.1023674.
JAMA Özcan A, Çetin Ö. Örnek Bölütlemesi ile Nesne ve Renk Sınıflandırması. Süleyman Demirel Üniv. Fen Bilim. Enst. Derg. 2022;26:182–189.
MLA Özcan, Ahmet and Ömer Çetin. “Örnek Bölütlemesi Ile Nesne Ve Renk Sınıflandırması”. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi, vol. 26, no. 1, 2022, pp. 182-9, doi:10.19113/sdufenbed.1023674.
Vancouver Özcan A, Çetin Ö. Örnek Bölütlemesi ile Nesne ve Renk Sınıflandırması. Süleyman Demirel Üniv. Fen Bilim. Enst. Derg. 2022;26(1):182-9.

e-ISSN :1308-6529
Linking ISSN (ISSN-L): 1300-7688

All articles published in the journal are freely accessible and provided as open access under the Creative Commons CC BY-NC (Attribution-NonCommercial) license. All authors and other journal users are deemed to have accepted these terms.