Research Article

The Use of Graph Databases for Artificial Neural Networks

Year 2021, Volume 7, Issue 1, 12-34, 20.03.2021
https://doi.org/10.28979/jarnas.890552

Abstract

Storing and using trained artificial neural network (ANN) models involves technical difficulties. These models are usually stored as files and cannot be run directly. An artificial neural network can be structurally expressed as a graph, so it is much more useful to store ANN models in a database and to use a graph database as that database system. In this study, software was developed that allows multiple researchers to conduct joint research on ANN models and visualizes the training and testing stages of these models. The developed platform also aims to increase the representational power of current methods by importing models built in today's popular ANN frameworks. With it, even someone learning artificial neural network models from scratch can follow the process and visually develop their own model. When models are stored in a graph database, versioning them and observing how a model grows become easier. In addition, the data used as model input and output can be stored in the same database. The graph database's own query language is used to feed ANN models with input data and to produce outputs, which eliminates the dependency on an additional software library.
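The core idea of the abstract, that an ANN is itself a graph whose inference can be performed by traversing stored neuron nodes and weighted edges, can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the network shape, node names, and weights below are hypothetical, and in the paper the traversal is performed with the graph database's own query language (e.g. Cypher in Neo4j) rather than in application code.

```python
import math

# Hypothetical 2-2-1 network stored as graph records, mimicking how
# neuron nodes and weighted synapse edges could live in a graph database.
nodes = {
    "i1": {"layer": 0, "bias": 0.0}, "i2": {"layer": 0, "bias": 0.0},
    "h1": {"layer": 1, "bias": 0.1}, "h2": {"layer": 1, "bias": -0.1},
    "o1": {"layer": 2, "bias": 0.0},
}
edges = [  # (source, target, weight)
    ("i1", "h1", 0.5), ("i1", "h2", -0.4),
    ("i2", "h1", 0.3), ("i2", "h2", 0.8),
    ("h1", "o1", 1.0), ("h2", "o1", -1.0),
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    """Propagate layer by layer, aggregating each neuron's incoming
    weighted activations, as a graph query engine would."""
    act = dict(inputs)  # activations, seeded with the input nodes
    for layer in (1, 2):
        for nid, n in nodes.items():
            if n["layer"] != layer:
                continue
            s = n["bias"] + sum(w * act[src]
                                for src, dst, w in edges if dst == nid)
            act[nid] = sigmoid(s)
    return act["o1"]

print(round(forward({"i1": 1.0, "i2": 0.0}), 4))  # prints 0.5666
```

In a graph database the same aggregation could be written declaratively, e.g. a Cypher pattern matching each neuron's incoming synapse relationships and summing `weight * activation` per target node; the Python loop above only mirrors that traversal to make the mechanism concrete.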




Details

Primary Language English
Subjects Engineering
Section Articles
Authors

Doğa Barış ÖZDEMİR (Corresponding Author)
CANAKKALE ONSEKIZ MART UNIVERSITY
0000-0003-1966-2154
Türkiye


Ahmet Cumhur KINACI
CANAKKALE ONSEKIZ MART UNIVERSITY
Türkiye

Supporting Institution Hepsiburada
Acknowledgements We thank Hepsiburada for the opportunities it has provided.
Publication Date 20 March 2021
Submission Date 8 May 2020
Acceptance Date 14 December 2020
Published in Issue Year 2021, Volume 7, Issue 1

Cite

Bibtex @article{ jarnas890552, journal = {Journal of Advanced Research in Natural and Applied Sciences}, eissn = {2757-5195}, address = {Çanakkale Onsekiz Mart Üniversitesi Lisansüstü Eğitim Enstitüsü}, publisher = {Çanakkale Onsekiz Mart Üniversitesi}, year = {2021}, volume = {7}, pages = {12 - 34}, doi = {10.28979/jarnas.890552}, title = {The Use of Graph Databases for Artificial Neural Networks}, key = {cite}, author = {Özdemir, Doğa Barış and Kınacı, Ahmet Cumhur} }
APA Özdemir, D. B. & Kınacı, A. C. (2021). The Use of Graph Databases for Artificial Neural Networks. Journal of Advanced Research in Natural and Applied Sciences, 7(1), 12-34. DOI: 10.28979/jarnas.890552
MLA Özdemir, D. B., Kınacı, A. C. "The Use of Graph Databases for Artificial Neural Networks". Journal of Advanced Research in Natural and Applied Sciences 7 (2021): 12-34 <https://dergipark.org.tr/tr/pub/jarnas/issue/60593/890552>
Chicago Özdemir, D. B., Kınacı, A. C. "The Use of Graph Databases for Artificial Neural Networks". Journal of Advanced Research in Natural and Applied Sciences 7 (2021): 12-34
RIS TY - JOUR T1 - The Use of Graph Databases for Artificial Neural Networks AU - Doğa Barış Özdemir , Ahmet Cumhur Kınacı Y1 - 2021 PY - 2021 N1 - doi: 10.28979/jarnas.890552 DO - 10.28979/jarnas.890552 T2 - Journal of Advanced Research in Natural and Applied Sciences JF - Journal JO - JOR SP - 12 EP - 34 VL - 7 IS - 1 SN - 2757-5195 M3 - doi: 10.28979/jarnas.890552 UR - https://doi.org/10.28979/jarnas.890552 Y2 - 2020 ER -
EndNote %0 Journal of Advanced Research in Natural and Applied Sciences The Use of Graph Databases for Artificial Neural Networks %A Doğa Barış Özdemir , Ahmet Cumhur Kınacı %T The Use of Graph Databases for Artificial Neural Networks %D 2021 %J Journal of Advanced Research in Natural and Applied Sciences %P -2757-5195 %V 7 %N 1 %R doi: 10.28979/jarnas.890552 %U 10.28979/jarnas.890552
ISNAD Özdemir, Doğa Barış, Kınacı, Ahmet Cumhur. "The Use of Graph Databases for Artificial Neural Networks". Journal of Advanced Research in Natural and Applied Sciences 7/1 (March 2021): 12-34. https://doi.org/10.28979/jarnas.890552
AMA Özdemir D. B., Kınacı A. C. The Use of Graph Databases for Artificial Neural Networks. Journal of Advanced Research in Natural and Applied Sciences. 2021; 7(1): 12-34.
Vancouver Özdemir D. B., Kınacı A. C. The Use of Graph Databases for Artificial Neural Networks. Journal of Advanced Research in Natural and Applied Sciences. 2021; 7(1): 12-34.
IEEE D. B. Özdemir and A. C. Kınacı, "The Use of Graph Databases for Artificial Neural Networks", Journal of Advanced Research in Natural and Applied Sciences, vol. 7, no. 1, pp. 12-34, Mar. 2021, doi:10.28979/jarnas.890552