Research Article


The Identification of Red-Meat Types using The Fine-Tuned Vision Transformer and MobileNet Models

Year 2022, Issue: 36, 237 - 242, 31.05.2022
https://doi.org/10.31590/ejosat.1112892

Abstract

Food adulteration still occurs in some countries of the world, for reasons related to poverty or a lack of quality control over food. Low-cost meats such as donkey or pork are marketed as lamb or beef. This is ethically unacceptable, and it can be outright dangerous for people who are allergic to certain types of meat or who have religious restrictions. With the rapid development of artificial intelligence techniques, it is possible to build a model capable of differentiating between types of meat. This study aims to build a model that distinguishes between different types of red meat. It also aims to compare the performance of the state-of-the-art CNN in computer vision with the transformer architecture. For this purpose, a limited dataset was obtained from an online repository. The dataset contains RGB images of beef, horse, and pork meat. The images were preprocessed, and various data augmentation techniques were applied. Then Vision Transformer (ViT) and MobileNet models were built, with and without fine-tuning. Several performance evaluation criteria were applied to measure the models' behavior. The best testing accuracy, 97%, was achieved by the fine-tuned ViT model. This study demonstrates the effectiveness of the transformer architecture, and particularly the fine-tuned ViT model, for image classification even on a limited dataset.

References

  • Steiner, A. (2022). Vision Transformer and MLP-Mixer Architectures. https://github.com/google-research/vision_transformer
  • Asmara, R. A., Romario, R., Batubulan, K. S., Rohadi, E., Siradjuddin, I., Ronilaya, F., Ariyanto, R., Rahmad, C., & Rahutomo, F. (2018). Classification of pork and beef meat images using extraction of color and texture feature by Grey Level Co-Occurrence Matrix method. IOP Conference Series: Materials Science and Engineering, 434(1). https://doi.org/10.1088/1757-899X/434/1/012072
  • Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2020). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. http://arxiv.org/abs/2010.11929
  • Fitrianto, A., & Sartono, B. (n.d.). Image Classification of Beef and Pork Using Convolutional Neural Network in Keras Framework. International Journal of Science, Engineering, and Information Technology. https://journal.trunojoyo.ac.id/ijseit
  • Boesch, G. (2022). Vision Transformers (ViT) in Image Recognition – 2022 Guide. https://viso.ai/deep-learning/vision-transformer-vit/
  • GC, S., Saidul Md, B., Zhang, Y., Reed, D., Ahsan, M., Berg, E., & Sun, X. (2021). Using Deep Learning Neural Network in Artificial Intelligence Technology to Classify Beef Cuts. Frontiers in Sensors, 2. https://doi.org/10.3389/fsens.2021.654357
  • Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. http://arxiv.org/abs/1704.04861
  • Huang, C., & Gu, Y. (2022). A Machine Learning Method for the Quantitative Detection of Adulterated Meat Using a MOS-Based E-Nose. Foods, 11(4). https://doi.org/10.3390/foods11040602
  • Agistany, I. (2022, February). Pork, Meat, and Horse Meat Dataset. https://www.kaggle.com/datasets/iqbalagistany/pork-meat-and-horse-meat-dataset
  • Kaur, P., Khehra, B. S., & Mavi, Er. B. S. (2021). Data Augmentation for Object Detection: A Review. 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), 537–543. https://doi.org/10.1109/MWSCAS47672.2021.9531849
  • Papers with Code. (2022, March 2). Image Classification on ImageNet. https://paperswithcode.com/sota/image-classification-on-imagenet
  • Wikimedia Foundation. (2022, March 3). Red meat. https://en.wikipedia.org/wiki/Red_meat
  • Zhou, D., Kang, B., Jin, X., Yang, L., Lian, X., Jiang, Z., Hou, Q., & Feng, J. (2021). DeepViT: Towards Deeper Vision Transformer. http://arxiv.org/abs/2103.11886
There are 13 citations in total.

Details

Primary Language English
Subjects Engineering
Journal Section Articles
Authors

Nagham Alhawas 0000-0002-7407-1392

Zekeriya Tüfekci 0000-0001-7835-2741

Early Pub Date April 11, 2022
Publication Date May 31, 2022
Published in Issue Year 2022 Issue: 36

Cite

APA Alhawas, N., & Tüfekci, Z. (2022). The Identification of Red-Meat Types using The Fine-Tuned Vision Transformer and MobileNet Models. Avrupa Bilim Ve Teknoloji Dergisi(36), 237-242. https://doi.org/10.31590/ejosat.1112892