Research Article

AYDINLATMA ÖZNİTELİĞİ KULLANILARAK EVRİŞİMSEL SİNİR AĞI MODELLERİ İLE MEYVE SINIFLANDIRMA

Year 2020, Volume: 25, Issue: 1, 81-100, 30.04.2020
https://doi.org/10.17482/uumfd.628166

Abstract

Illumination refers to the natural or artificial sources that allow objects to be seen as they are. Especially in image processing applications, illumination is a necessity for capturing complete and correct object information in the acquired image. However, changes in the type, brightness, and position of the light source alter the image, color, shadow, or size of an object and cause the object to be perceived differently. For this reason, using a powerful artificial intelligence technique to distinguish images makes it easier to separate the classes. Convolutional Neural Networks (CNN), an artificial intelligence method, can extract features automatically and, since learning takes place while the network is being trained, readily identify salient features. The ALOI-COL dataset was used in this study. ALOI-COL consists of 1000 classes captured under 12 different color temperatures. The fruit images of 29 classes in the ALOI-COL dataset were classified using the CNN architectures AlexNet, VGG16, and VGG19. The images in the dataset were augmented with image processing techniques, yielding 51 images per class. The study was examined in two configurations: 80-20% and 60-40% training-test splits. After 50 epochs, the test data were classified with 100% accuracy by the AlexNet (80-20%) and VGG16 (60-40%) architectures, and with 86.49% accuracy by the VGG19 (80-20%) architecture.
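The abstract states that image processing techniques were used to enlarge each class to 51 images, but does not name the operations. A minimal sketch of label-preserving geometric augmentation — the specific flips and rotations below are assumptions, not the paper's method:

```python
import numpy as np

def augment(image: np.ndarray):
    """Yield simple label-preserving variants of one image.
    The paper only says 'image processing techniques' were used to
    reach 51 images per class; flips/rotations here are assumptions."""
    yield image              # original
    yield np.fliplr(image)   # horizontal mirror
    yield np.flipud(image)   # vertical mirror
    for k in (1, 2, 3):      # 90/180/270-degree rotations
        yield np.rot90(image, k)

# Example: one small 2x3 'image' produces 6 variants
img = np.arange(6).reshape(2, 3)
variants = list(augment(img))
print(len(variants))
```

Applying such transforms to each source image, then keeping 51 variants per class, reproduces the per-class count the study reports.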


FRUIT CLASSIFICATION WITH CONVOLUTION NEURAL NETWORK MODELS USING ILLUMINATION ATTRIBUTE


Abstract

Illumination comes from natural or artificial sources and allows objects to be seen as they are. Especially in image processing applications, illumination is a necessity for capturing complete and correct object information from an image. However, changes in the type, brightness, or position of the light source alter the image, color, shadow, or size of an object and cause the object to be perceived differently. Therefore, using a powerful artificial intelligence technique to distinguish images eases the differentiation of classes. Convolutional Neural Networks (CNN), an artificial intelligence method, can extract features automatically and, since learning takes place while the network is being trained, readily identify salient features. The ALOI-COL dataset was used in this study. ALOI-COL consists of 1000 classes, such as food and toys, captured under 12 different color temperatures. The fruit images of 29 classes in the dataset were classified using the CNN architectures AlexNet, VGG16, and VGG19. The images in the dataset were augmented with image processing techniques, yielding 51 images per class. The study was examined in two configurations: 80-20% and 60-40% training-test splits. After 50 epochs, the test data were classified with 100% accuracy by the AlexNet (80-20%) and VGG16 (60-40%) architectures, and with 86.49% accuracy by the VGG19 (80-20%) architecture.
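The evaluation protocol described above — 51 images per class, 80-20% and 60-40% train-test splits, accuracy as the metric — can be sketched as follows. The shuffling and the rounding of the split point are assumptions, since the abstract does not specify them:

```python
import random

def per_class_split(images, train_ratio, seed=0):
    """Shuffle one class's images and split them into train/test lists.
    Rounding the split point is an assumption; the paper does not state
    exactly how 51 images are divided."""
    imgs = list(images)
    random.Random(seed).shuffle(imgs)
    n_train = round(len(imgs) * train_ratio)
    return imgs[:n_train], imgs[n_train:]

def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# 51 images per class, as in the study
for ratio in (0.80, 0.60):
    train, test = per_class_split(range(51), ratio)
    print(f"{ratio:.0%} train: {len(train)} train / {len(test)} test per class")
```

Repeating the split for each of the 29 classes and computing `accuracy` on the pooled test predictions gives the figures the study reports per architecture.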


Details

Primary Language: Turkish
Subjects: Artificial Intelligence
Section: Research Articles
Authors

Birkan Büyükarıkan 0000-0002-9703-9678

Erkan Ülker 0000-0003-4393-9870

Publication Date: April 30, 2020
Submission Date: October 2, 2019
Acceptance Date: March 2, 2020
Published Issue: Year 2020, Volume: 25, Issue: 1

How to Cite

APA Büyükarıkan, B., & Ülker, E. (2020). AYDINLATMA ÖZNİTELİĞİ KULLANILARAK EVRİŞİMSEL SİNİR AĞI MODELLERİ İLE MEYVE SINIFLANDIRMA. Uludağ Üniversitesi Mühendislik Fakültesi Dergisi, 25(1), 81-100. https://doi.org/10.17482/uumfd.628166


Bursa Uludağ University, Faculty of Engineering Dean's Office, Görükle Campus, Nilüfer, 16059 Bursa. Phone: (224) 294 1907, Fax: (224) 294 1903, e-mail: mmfd@uludag.edu.tr