Research Article

Performance comparison of deep learning frameworks

Year 2021, Volume: 1, Issue: 1, 1 - 11, 28.02.2021

Abstract

Deep learning (DL) is a branch of machine learning that imitates the neural activity of the brain in artificial neural networks. It can be trained to learn the characteristics of data such as images, voice, or other complex patterns, and it is capable of finding solutions to complex and NP-hard problems. In the literature, there are many DL frameworks, libraries, and tools for developing such solutions. In this study, the most commonly used DL frameworks, such as Torch, Theano, Caffe, Caffe2, MXNet, Keras, TensorFlow, and the Computational Network Toolkit (CNTK), are investigated and a performance comparison of the frameworks is provided. In addition, GPU performance has been tested for the best frameworks identified in the literature: TensorFlow, Keras (TensorFlow backend), Theano, Keras (Theano backend), and Torch. The GPU performance comparison of these frameworks is based on experimental results obtained on the MNIST and GPDS signature datasets. According to the experimental results, TensorFlow performed best, whereas other studies in the literature have claimed that PyTorch is better. The contribution of this study is to resolve this contradiction in the literature by revealing its cause. In this way, it aims to assist researchers in choosing the most appropriate DL framework for their studies.
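
To make the benchmarking methodology described above concrete, the following is a minimal sketch of the kind of GPU timing measurement such a comparison rests on: training a small CNN on MNIST under Keras with the TensorFlow backend and recording wall-clock throughput. The model architecture, batch size, and epoch count here are illustrative assumptions for this sketch, not the authors' actual experimental configuration.

```python
import time
from tensorflow import keras

# Load MNIST and scale pixel values to [0, 1]; shape becomes (60000, 28, 28, 1).
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

# A deliberately small CNN; the exact architecture is an assumption of this
# sketch, not the network used in the paper.
model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Time one full training epoch; with a GPU build of TensorFlow the training
# runs on the GPU automatically, so wall-clock time reflects GPU throughput.
start = time.perf_counter()
model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=0)
elapsed = time.perf_counter() - start
print(f"1 epoch: {elapsed:.2f} s ({len(x_train) / elapsed:.0f} images/s)")
```

Repeating the same measurement with the other backends and frameworks under test (Theano, Torch, etc.) on identical hardware is what makes such timings comparable across frameworks.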

Acknowledgments

This work has been supported by the NVIDIA Corporation. All experimental studies were carried out on the TITAN XP graphics card donated by NVIDIA. We sincerely thank NVIDIA Corporation for their support.

References

  • [1] Bahrampour, S., Ramakrishnan, N., Schott, L., & Shah, M. Comparative study of Caffe, Neon, Theano, and Torch for deep learning. arXiv preprint arXiv:1511.06435, 2016.
  • [2] Chen, T., et al. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.
  • [3] Jia, Y., et al. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, 2014, 675–678.
  • [4] NVIDIA. Caffe2 Deep Learning Framework, 2017. https://developer.nvidia.com/caffe2.
  • [5] Chollet, F., et al. Keras: Deep learning library for Theano and TensorFlow, 2015. https://keras.io.
  • [6] Chollet, F. Keras, 2015. https://github.com/fchollet/keras.
  • [7] Microsoft. Computational Network Toolkit (CNTK), 2016. https://www.microsoft.com/en-us/cognitive-toolkit/.
  • [8] Huang, X. Microsoft Computational Network Toolkit offers most efficient distributed deep learning computational performance, 2015. https://goo.gl/9UUwVn.
  • [9] Microsoft. The Microsoft Cognitive Toolkit, 2016. https://www.microsoft.com/en-us/cognitive-toolkit/.
  • [10] Banerjee, D.S., Hamidouche, K., & Panda, D.K. Re-designing CNTK deep learning framework on modern GPU enabled clusters. In 2016 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2016, 144–151. DOI: 10.1109/CloudCom.2016.0036.
  • [11] MXNet. MXNet, 2017. https://mxnet.incubator.apache.org/.
  • [12] Google. TensorFlow. https://www.tensorflow.org/.
  • [13] NVIDIA. GPU-Accelerated TensorFlow, 2018. https://www.nvidia.com/en-us/data-center/gpu-accelerated-applications/tensorflow/.
  • [14] Abadi, M., et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
  • [15] Goldsborough, P. A tour of TensorFlow. arXiv preprint arXiv:1610.01178, 2016.
  • [16] Comparing Top Deep Learning Frameworks. https://deeplearning4j.org/compare-dl4j-tensorflow-pytorch.
  • [17] Bergstra, J., et al. Theano: A CPU and GPU math compiler in Python. In Proceedings of the 9th Python in Science Conference, 2010, vol. 1.
  • [18] Bastien, F., et al. Theano: New features and speed improvements. arXiv preprint arXiv:1211.5590, 2012.
  • [19] Al-Rfou, R., et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
  • [20] Torch. What is Torch? http://torch.ch/.
  • [21] Chetlur, S., et al. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.
  • [22] Shi, S., Wang, Q., Xu, P., & Chu, X. Benchmarking state-of-the-art deep learning software tools. In 7th International Conference on Cloud Computing and Big Data (CCBD), 2016, 99–104. DOI: 10.1109/CCBD.2016.029.
  • [23] Chollet, F. Deep Learning with Python. Manning Publications Co., 2017.
  • [24] Erickson, B.J., Korfiatis, P., Akkus, Z., Kline, T., & Philbrick, K. Toolkits and libraries for deep learning. Journal of Digital Imaging, 2017, 30(4), 400–405.
  • [25] Jouppi, N.P., et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, 2017, 1–12.
  • [26] Bergstra, J., et al. Theano: Deep learning on GPUs with Python. In NIPS 2011 BigLearning Workshop, Granada, Spain, 2011, vol. 3.
  • [27] Goodfellow, I.J., et al. Pylearn2: A machine learning research library. arXiv preprint arXiv:1308.4214, 2013.
  • [28] Van Merriënboer, B., et al. Blocks and Fuel: Frameworks for deep learning. arXiv preprint arXiv:1506.00619, 2015.
  • [29] Dieleman, S., Schlüter, J., Raffel, C., Olson, E., Sønderby, S.K., Nouri, D., Maturana, D., Thoma, M., Battenberg, E., Kelly, J., De Fauw, J., Heilman, M., diogo149, McFee, B., Weideman, H., takacsg84, peterderivaz, Jon, instagibbs, Rasul, K., CongLiu, Britefury, & Degrave, J. Lasagne: First Release. Zenodo, 2015.
  • [30] The Theano Development Team, et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.
  • [31] Bengio, Y. MILA and the future of Theano, 2017. goo.gl/gdmTjk.
  • [32] Ferrer, M.A., Diaz-Cabrera, M., & Morales, A. Synthetic off-line signature image generation. In 2013 International Conference on Biometrics (ICB), 2013, 1–7.
  • [33] Shatnawi, A., Al-Bdour, G., Al-Qurran, R., & Al-Ayyoub, M. A comparative study of open source deep learning frameworks. In 2018 9th International Conference on Information and Communication Systems (ICICS), 2018, 72–77. DOI: 10.1109/IACS.2018.8355444.

Details

Primary Language: English
Subjects: Artificial Intelligence
Section: Research Articles
Authors

M. Mutlu Yapıcı 0000-0001-6171-1226

Nurettin Topaloğlu 0000-0001-5836-7882

Publication Date: February 28, 2021
Acceptance Date: September 23, 2020
Published Issue: Year 2021, Volume: 1, Issue: 1

How to Cite

Vancouver Yapıcı MM, Topaloğlu N. Performance comparison of deep learning frameworks. C&I. 2021;1(1):1-11.