Research Article

AN OVERVIEW OF POPULAR DEEP LEARNING METHODS

Year 2017, Volume: 7, Issue: 2, 165-176, 30.12.2017

Abstract

 



This paper offers an overview of essential concepts in deep learning, one of the state-of-the-art approaches in machine learning, covering its history and current applications as a brief introduction to the subject. Deep learning has achieved great success in many domains, such as handwriting recognition, image recognition, and object detection. We revisit the concepts and mechanisms of typical deep learning algorithms, including Convolutional Neural Networks, Recurrent Neural Networks, Restricted Boltzmann Machines, and Autoencoders, and provide an intuition for deep learning that does not rely heavily on its underlying mathematics or theoretical constructs.
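
To make two of the architectures named in the abstract more concrete, the sketch below defines a minimal convolutional network and a minimal fully connected autoencoder using the tf.keras API. The library choice, layer sizes, and training settings are illustrative assumptions and are not taken from the paper; this is a minimal sketch, not the authors' implementation.

```python
# Illustrative sketches of two architectures surveyed in the paper (a CNN and an
# autoencoder), built with tf.keras. All sizes and hyperparameters are arbitrary
# assumptions chosen for readability, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolutional network for 28x28 grayscale images (e.g. handwritten digits).
cnn = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),    # learn local features
    layers.MaxPooling2D(pool_size=2),                        # downsample feature maps
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                  # class probabilities
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])

# A small fully connected autoencoder: compress 784-dim inputs to 32 dims, then reconstruct.
autoencoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(32, activation="relu"),      # encoder / bottleneck
    layers.Dense(784, activation="sigmoid"),  # decoder / reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")
```

In the same spirit, a recurrent network could be sketched by replacing the convolutional stack with an LSTM layer over a sequence input; the principle of stacking differentiable layers and training them end to end is the same.
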



 

References

  • [1] M. Liang, and X. Hu, Recurrent convolutional neural network for object recognition, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3367–3375.
  • [2] P. Pinheiro, and R. Collobert, Recurrent convolutional neural networks for scene labeling, Proceedings of the 31st International Conference on Machine Learning, 2014, vol. 32, pp. 82–90.
  • [3] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, Natural Language Processing (Almost) from Scratch, J. Mach. Learn. Res., 2011, vol. 12, pp. 2493–2537.
  • [4] Mohamed A. El-Sayed, Yarub A. Estaitia, and Mohamed A. Khafagy, Automated Edge Detection Using Convolutional Neural Network, Int. J. Adv. Comput. Sci. Appl., 2013, vol. 4, no. 10, pp. 11–17.
  • [5] Dan Cireşan, Deep Neural Networks for Pattern Recognition.
  • [6] W. Shen, X. Wang, Y. Wang, X. Bai, and Z. Zhang, DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2015, vol. 07–12–June, pp. 3982–3991.
  • [7] E. Shelhamer, J. Long, and T. Darrell, Fully Convolutional Networks for Semantic Segmentation, IEEE Trans. Pattern Anal. Mach. Intell., 2017, vol. 39, no. 4, pp. 640–651.
  • [8] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, 2015, Nature, vol. 521, no. 7553, pp. 436–444.
  • [9] Kunihiko Fukushima, Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position, 1980, Biol. Cybernetics 36, pp. 193-202.
  • [10] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, Handwritten digit recognition with a back-propagation network, in NIPS’89.
  • [11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, 1998, Proceedings of the IEEE.
  • [12] Xavier Glorot, Antoine Bordes, and Yoshua Bengio, Deep Sparse Rectifier Neural Networks, 2011, Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, vol. 15 of JMLR, pp. 315-323.
  • [13] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y Ng, Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations, 2009, In Proceedings of the 26th annual international conference on machine learning, pp. 609–616.
  • [14] Yann LeCun, Bernhard Boser, John S. Denker, D. Henderson, Richard E. Howard, W. Hubbard, and Lawrence D. Jackel, Handwritten digit recognition with a back-propagation network, 1990, In Advances in Neural Information Processing Systems.
  • [15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner, Gradient-based learning applied to document recognition, 1998, Proceedings of the IEEE, pp. 2278–2324.
  • [16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, ImageNet classification with deep convolutional neural networks, In Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • [17] Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013, Published in Proc. ECCV, 2014.
  • [18] Karen Simonyan, and Andrew Zisserman, Very Deep Convolutional Networks For Large-Scale Image Recognition, 2014, arXiv preprint arXiv:1409.1556.
  • [19] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, Going Deeper with Convolutions, 2014, Computer Vision and Pattern Recognition (CVPR 2015).
  • [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, Deep Residual Learning for Image Recognition, 2015, arXiv:1512.03385v1.
  • [21] Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten, Densely Connected Convolutional Networks, 2016, arXiv:1608.06993v4.
  • [22] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton, Speech recognition with deep recurrent neural networks, 2013, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
  • [23] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for unconstrained handwriting recognition, 2009, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 31(5):855–868.
  • [24] Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur, Recurrent neural network based language model, 2010, INTERSPEECH 2010.
  • [25] Biao Zhang, Deyi Xiong, and Jinsong Su, Recurrent Neural Machine Translation, 2016, EMNLP 2016, arXiv:1607.08725v1.
  • [26] Justin Johnson, Andrej Karpathy, and Li Fei-Fei, DenseCap: Fully Convolutional Localization Networks for Dense Captioning, 2015, arXiv:1511.07571v1.
  • [27] Jeffrey L. Elman, Finding Structure in Time, 1990, Cognitive Science 14, pp. 179-211.
  • [28] Sepp Hochreiter, and Jürgen Schmidhuber, Long Short-Term Memory, 1997, Neural Computation 9(8):1735-1780.
  • [29] Klaus Greff, Rupesh K. Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber, LSTM: A Search Space Odyssey, 2016, IEEE Transactions on Neural Networks and Learning Systems, vol. 28, issue 10, pp. 2222-2232.
  • [30] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski, 1985, A learning algorithm for boltzmann machines. Cognitive science, 9(1):147–169.
  • [31] G. E. Hinton and R. R. Salakhutdinov, 2006, Reducing the dimensionality of data with neural networks, Science, 313(5786):504–507.
  • [32] J. Kivinen, and C. Williams, Multiple texture Boltzmann machines, 2012, JMLR W&CP: AISTATS 2012, 22:638–646.
  • [33] H. Larochelle and Y. Bengio, Classification using discriminative restricted Boltzmann machines, 2008, International Conference on Machine learning(ICML), pp. 536-543.
  • [34] A. Mohamed, and G. E. Hinton, Phone recognition using restricted Boltzmann machines, 2010, IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pp. 4354–4357.
  • [35] T. Schmah, G. E. Hinton, R. S. Zemel, S. L. Small, and S. C. Strother, Generative versus discriminative training of RBMs for classification of fMRI images, 2009, Advances in Neural Information Processing Systems (NIPS 21), pp. 1409–1416.
  • [36] Y. Tang, R. Salakhutdinov, and G. E. Hinton, Robust Boltzmann machines for recognition and denoising, 2012, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2264–2271.
  • [37] G. E. Hinton, Training products of experts by minimizing contrastive divergence, 2002, Neural Computation 14, pp. 1771–1800.
  • [38] Asja Fischer, and Christian Igel, Training Restricted Boltzmann Machines: An Introduction, 2014, Pattern Recognition 47:25-39.
  • [39] G. E. Hinton, Learning multiple layers of representation, 2007, Trends in Cognitive Sciences 11(10), pp. 428–434.
  • [40] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh, A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation 18, pp. 1527-1554.
  • [41] Yoshua Bengio, Learning Deep Architectures for AI, 2009, Dept. IRO, Université de Montréal, Technical Report 1312.
  • [42] P. Vincent, H. Larochelle Y. Bengio and P.A. Manzagol, Extracting and Composing Robust Features with Denoising Autoencoders, 2008, Proceedings of the 25th International Conference on Machine Learning (ICML2008), pp. 1096-1103.
  • [43] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle, Greedy Layer-Wise Training of Deep Networks, 2007, Advances in neural information processing systems, pp. 153-160.
  • [44] Autoencoders -Bits and Bytes of Deep Learning, [Online], https://medium.com/towards-data-science/autoencoders-bits-and-bytes-of-deep-learning-eaba376f23ad, [Accessed: 09-Oct-2017].
There are 44 references in total.

Details

Primary Language: English
Subjects: Electrical Engineering
Section: Research Article
Authors

Musab Coşkun

Özal Yıldırım

Ayşegül Uçar

Yakup Demır

Publication Date: December 30, 2017
Published Issue: Year 2017, Volume: 7, Issue: 2

Cite

APA Coşkun, M., Yıldırım, Ö., Uçar, A., & Demır, Y. (2017). AN OVERVIEW OF POPULAR DEEP LEARNING METHODS. European Journal of Technique (EJT), 7(2), 165-176.

All articles published by EJT are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit and adapt the work, provided the original work and source are appropriately cited.