Performance comparison of deep learning frameworks
Abstract
Keywords
Thanks
References
- [1] Bahrampour, S., Ramakrishnan, N., Schott, L., & Shah, M. Comparative study of Caffe, Neon, Theano, and Torch for deep learning. arXiv:1511.06435, 2016.
- [2] Chen, T., et al. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015.
- [3] Jia, Y., et al. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, 2014, 675–678.
- [4] NVIDIA. Caffe2 Deep Learning Framework, 2017. https://developer.nvidia.com/caffe2.
- [5] Chollet, F., et al. Keras: Deep learning library for Theano and TensorFlow, 2015. https://keras.io/.
- [6] Chollet, F. Keras, 2015. https://github.com/fchollet/keras.
- [7] Microsoft. Computational Network Toolkit (CNTK), 2016. https://www.microsoft.com/en-us/cognitive-toolkit/.
- [8] Huang, X. Microsoft Computational Network Toolkit offers most efficient distributed deep learning computational performance, 2015. https://goo.gl/9UUwVn.
Details
Primary Language
English
Subjects
Artificial Intelligence
Journal Section
Research Article
Authors
M. Mutlu Yapıcı
0000-0001-6171-1226
Türkiye
Publication Date
February 28, 2021
Submission Date
July 14, 2020
Acceptance Date
September 23, 2020
Published in Issue
Year 2021, Volume 1, Number 1