The Use of Graph Databases for Artificial Neural Networks
Year 2021,
Volume: 7, Issue: 1, 12-34, 20.03.2021
Doğa Barış Özdemir
,
Ahmet Cumhur Kınacı
Abstract
Storing and using trained artificial neural network (ANN) models involves technical difficulties. These models are usually stored as files and cannot be executed directly. Structurally, an ANN can be expressed as a graph; it is therefore much more useful to store ANN models in a database and to use a graph database as that database system. In this study, software was developed that allows multiple researchers to conduct joint research on ANN models and that visualizes the training and testing stages of those models. The developed platform also aims to increase the representational power of current methods by importing models built in today's popular ANN frameworks, so that even someone learning ANN models from scratch can follow the process and develop their own model visually. Models stored in a graph database are easier to version, and it is easier to observe how a model grows. In addition, the model's input and output data can be stored in the same database. The graph database's own query language is used to feed ANN models with input data and to produce outputs, which eliminates the dependency on an external software library.
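The abstract's core idea — a neural network is structurally a graph, so its neurons can be stored as nodes and its weighted connections as edges, and inference can be run by traversing that stored graph — can be sketched in plain Python. This is an illustrative sketch only, not the authors' implementation: the node and edge layout below is an assumption, and in the actual system a graph database such as Neo4j and its own query language would hold the nodes and express the traversal instead of application code.

```python
import math

# A tiny 2-2-1 network stored the way a graph database would store it:
# neurons as nodes (with a layer index and bias), weighted connections
# as edges. All weights/biases here are made-up illustrative values.
nodes = {
    "i1": {"layer": 0}, "i2": {"layer": 0},
    "h1": {"layer": 1, "bias": 0.1}, "h2": {"layer": 1, "bias": -0.2},
    "o1": {"layer": 2, "bias": 0.05},
}
edges = [  # (source, target, weight) -- like relationships with a weight property
    ("i1", "h1", 0.5), ("i1", "h2", -0.4),
    ("i2", "h1", 0.3), ("i2", "h2", 0.8),
    ("h1", "o1", 1.2), ("h2", "o1", -0.7),
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    """Run a forward pass by traversing the graph layer by layer.

    In the paper's approach this traversal would be written in the
    graph database's query language rather than in application code.
    """
    values = dict(inputs)
    for layer in (1, 2):
        for name, props in nodes.items():
            if props["layer"] != layer:
                continue
            # Weighted sum over all incoming edges, plus the node's bias.
            total = props.get("bias", 0.0) + sum(
                values[src] * w for src, dst, w in edges if dst == name
            )
            values[name] = sigmoid(total)
    return values["o1"]

print(forward({"i1": 1.0, "i2": 0.0}))
```

Because the whole model lives in one data structure (or, in the paper's setting, one database), versioning it or inspecting how it grows amounts to ordinary queries over nodes and edges rather than parsing a serialized model file.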
Supporting Institution
Hepsiburada
Thanks
I thank Hepsiburada for the opportunities it has provided.
References
- Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive science, 9(1), 147-169.
- Armenta, M. A., & Jodoin, P. M. (2020). The Representation Theory of Neural Networks. arXiv preprint arXiv:2007.12213.
- Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., ... & Pascanu, R. (2018). Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.
- Buhrmester, V., Münch, D., & Arens, M. (2019). Analysis of explainers of black box deep neural networks for computer vision: A survey. arXiv preprint arXiv:1911.12116.
- Carpenter, G. A., & Grossberg, S. (1990). ART 3: Hierarchical search using chemical transmitters in self-organizing pattern recognition architectures. Neural networks, 3(2), 129-152.
- Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
- Choromanska, A., Henaff, M., Mathieu, M., Arous, G. B., & LeCun, Y. (2015). The loss surfaces of multilayer networks. In Artificial intelligence and statistics (pp. 192-204).
- Cvitkovic, M. (2020). Supervised Learning on Relational Databases with Graph Neural Networks. arXiv preprint arXiv:2002.02046.
- Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4), 303-314.
- Çuhadar, M., & Kayacan, C. (2005). Yapay Sinir Ağları Kullanılarak Konaklama İşletmelerinde Doluluk Oranı Tahmini: Türkiye'deki Konaklama İşletmeleri Üzerine Bir Deneme. Anatolia: Turizm Araştırmaları Dergisi, 16(1).
- Euler, L. (1741). Solutio problematis ad geometriam situs pertinentis. Commentarii academiae scientiarum Petropolitanae, 128-140.
- Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial networks. arXiv preprint arXiv:1406.2661.
- Goodfellow, I., Bengio, Y., Courville, A., & Bengio, Y. (2016). Deep learning (Vol. 1, No. 2). Cambridge: MIT press.
- Graph Based Neural Network (2019). Retrieved from https://github.com/dogabaris/GraphBasedNeuralNetwork.
- Hebb, D. O. (1949). The organization of behavior: a neuropsychological theory. J. Wiley; Chapman & Hall.
- Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780.
- Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8), 2554-2558.
- Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the national academy of sciences, 81(10), 3088-3092.
- Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... & Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
- Keras: The Python Deep Learning library. (2017). Retrieved from https://keras.io.
- Kohonen, T. (1982). Self-organized Formation of Topologically Correct Feature Maps. Biological Cybernetics, 43, 59-69.
- Kohonen, T. (1990). The self-organizing map. Proceedings of the IEEE, 78(9), 1464-1480.
- Lam, H. T., Minh, T. N., Sinn, M., Buesser, B., & Wistuba, M. (2018). Neural feature learning from relational database. arXiv preprint arXiv:1801.05372.
- LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
- LeCun, Y., Cortes, C., & Burges, C. J. (2010). MNIST handwritten digit database.
- Liu, H. (2017, November 1). Hierarchical Representations for Efficient Architecture Search. Retrieved from https://arxiv.org/abs/1711.00436v2.
- Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013, June). Rectifier nonlinearities improve neural network acoustic models. In Proc. icml (Vol. 30, No. 1, p. 3).
- McCulloch, W. & Pitts, W. (1943). A Logical Calculus of Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5, 127-147.
- Minsky, M., Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA, USA: MIT Press.
- Moody, J., & Darken, C. J. (1989). Fast learning in networks of locally-tuned processing units. Neural computation, 1(2), 281-294.
- Muhammad, T., & Halim, Z. (2016). Employing artificial neural networks for constructing metadata-based model to automatically select an appropriate data visualization technique. Applied Soft Computing, 49, 365–384. DOI: 10.1016/j.asoc.2016.08.039.
- Nair, V., & Hinton, G. E. (2010, January). Rectified linear units improve restricted boltzmann machines. In ICML.
- Nekhaev, D., & Demin, V. (2017). Visualization of maximizing images with deconvolutional optimization method for neurons in deep neural networks. Procedia Computer Science, 119, 174–181. DOI: 10.1016/j.procs.2017.11.174.
- Neo4j. (2007). Retrieved from https://neo4j.com.
- Olden, J. D., & Jackson, D. A. (2002). Illuminating the "black box": a randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling, 154(1-2), 135-150. DOI: 10.1016/s0304-3800(02)00064-9.
- Öztanır, O. (2018). Makine Öğrenmesi Kullanılarak Kestirimci Bakım (Master's thesis, Fen Bilimleri Enstitüsü).
- Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408. https://doi.org/10.1037/h0042519
- Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. nature, 323(6088), 533-536.
- Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1985). Learning Internal Representations by Error Propagation. In D. E. Rumelhart & J. L. McClelland (ed.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations (pp. 318-362). MIT Press.
- Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., & Monfardini, G. (2008). The graph neural network model. IEEE Transactions on Neural Networks, 20(1), 61-80.
- Schikuta, E. (2008). Neural networks and database systems. arXiv preprint arXiv:0802.3582.
- Smolensky, P. (1986). Information processing in dynamical systems: Foundations of harmony theory. Colorado Univ at Boulder Dept of Computer Science.
- Tank, D., & Hopfield, J. (1986). Simple 'neural' optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit. IEEE transactions on circuits and systems, 33(5), 533-541.
- TensorFlow. (2015). Retrieved from https://tensorflow.org.
- Tosun, S. (2007). Sınıflandırmada yapay sinir ağları ve karar ağaçları karşılaştırması: Öğrenci başarıları üzerine bir uygulama (Doctoral dissertation, Fen Bilimleri Enstitüsü).
- Touretzky, D. S., & Pomerleau, D. A. (1989). What’s hidden in the hidden layers. Byte, 14(8), 227-233.
- Uwents, W., Monfardini, G., Blockeel, H., Gori, M., & Scarselli, F. (2010). Neural networks for relational learning: an experimental comparison. Machine Learning, 82(3), 315–349. DOI: 10.1007/s10994-010-5196-5.
- Wang, T. (2018, February 15). NerveNet: Learning Structured Policy with Graph Neural Networks. Retrieved from https://openreview.net/forum?id=S1sqHMZCb
- Widrow, B. & Hoff, M. E. (1960). Adaptive Switching Circuits. 1960 IRE WESCON Convention Record, Part 4 (pp. 96-104), New York: IRE.
- Widrow, B., & Lehr, M. A. (1990). 30 years of adaptive neural networks: perceptron, madaline, and backpropagation. Proceedings of the IEEE, 78(9), 1415-1442.
- Witt, C., Bux, M., Gusew, W., & Leser, U. (2019). Predictive performance modeling for distributed batch processing using black box monitoring and machine learning. Information Systems, 82, 33–52. DOI: 10.1016/j.is.2019.01.006.
- Yahia, M. E., & Elsawi, A. M. (2003). Neural Database Model.